2021 Federal Standard of Excellence


Evaluation & Research

Did the agency have an evaluation policy, evaluation plan, and learning agenda (evidence-building plan), and did it publicly release the findings of all completed program evaluations in FY21?

Score
7
Millennium Challenge Corporation
2.1 Did the agency have an agency-wide evaluation policy? (Example: Evidence Act 313(d))
2.2 Did the agency have an agency-wide evaluation plan? (Example: Evidence Act 312(b))
  • Every MCC investment must adhere to MCC’s rigorous Policy for Monitoring and Evaluation (M&E), which requires every MCC program to have a comprehensive M&E Plan. For each investment MCC makes in a country, the country’s M&E Plan must be published within 90 days of entry into force. The M&E Plan lays out the evaluation strategy and includes two main components. The monitoring component lays out the methodology and process for assessing progress toward the investment’s objectives. The evaluation component identifies and describes the evaluations that will be conducted, the key evaluation questions and methodologies, and the data collection strategies that will be employed. Each country’s M&E Plan serves as the evaluation plan and learning agenda for that country’s set of investments.
2.3 Did the agency have a learning agenda (evidence-building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda? (Example: Evidence Act 312)
  • In an effort to advance MCC’s evidence base and respond to the Evidence Act, MCC is implementing a learning agenda around women’s economic empowerment (WEE) with short- and long-term objectives. The agency is focused on expanding the evidence base to answer these key research questions:
    • How do MCC’s WEE activities contribute to MCC’s overarching goal of reducing poverty through economic growth?
    • How does MCC’s WEE work contribute to increased income and assets for households—beyond what those incomes would have been without the gendered/WEE design?
    • How does MCC’s WEE work increase income and assets for women and girls within those households?
    • How does MCC’s WEE work increase women’s empowerment, defined through measures relevant to the WEE intervention and project area?
  • These research questions were developed through extensive consultation within MCC and with external stakeholders. Agency leadership has named inclusion and gender as key priorities. As such, the agency is considering how to expand the WEE learning agenda to include evidence generation and utilization around gender and inclusion (in addition to women’s economic empowerment) in MCC’s programming.
  • MCC is also increasingly enabling learning agendas and strategies with its partner countries. In MCC’s compact with Liberia, a key program focused on institutional reform and strengthening of the Liberia Electricity Corporation. In recognition of its on-the-job training and learning strategies, the program team won top awards for advancements in learning strategy creation and for best learning program supporting a business change and transformation strategy. These awards recognize the innovation and excellence in the strategies and design deployed in the program, as well as the results achieved.
2.4 Did the agency publicly release all completed program evaluations?
  • MCC publishes each independent evaluation of every project, underscoring the agency’s commitment to transparency, accountability, learning, and evidence-based decision-making. All independent evaluations and reports are publicly available on the new MCC Evidence Platform. As of August 2021, MCC had contracted, planned, and/or published 209 independent evaluations. All MCC evaluations produce a final report to present final results, and some evaluations also produce an interim report to present interim results. To date, 117 Final Reports and 36 Interim Reports have been finalized and released to the public.
  • In FY21, MCC also continued producing Evaluation Briefs, an MCC product that distills key findings and lessons learned from MCC’s independent evaluations. MCC will produce Evaluation Briefs for each evaluation moving forward, and is in the process of writing Evaluation Briefs for the backlog of all completed evaluations. MCC expects to have Evaluation Briefs for every published evaluation by the end of 2021. As of October 2021, MCC has published 107 Evaluation Briefs.
2.5 What is the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts? (Example: Evidence Act 315, subchapter II (c)(3)(9))
  • MCC is currently working on a draft capacity assessment in accordance with the Evidence Act. Additionally, once a compact or threshold program is in implementation, Monitoring and Evaluation (M&E) resources are used to procure evaluation services from external independent evaluators, who directly measure high-level outcomes to assess the attributable impact of MCC’s programs. MCC sees its independent evaluation portfolio as an integral tool to remain accountable to stakeholders and the general public, demonstrate programmatic results, and promote internal and external learning. Through the evidence generated by monitoring and evaluation, the M&E Managing Director, Chief Economist, and Vice President for the Department of Policy and Evaluation are able to continuously update estimates of expected impacts with actual impacts to inform future programmatic and policy decisions. In FY21, MCC began or continued comprehensive, independent evaluations for every compact or threshold project at MCC, a requirement stipulated in Section 7.5.1 of MCC’s Policy for M&E. All evaluation designs, data, reports, and summaries are available on MCC’s Evaluation Catalog.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • MCC employs rigorous, independent evaluation methodologies to measure the impact of its programming, evaluate the efficacy of program implementation, and determine lessons learned to inform future investments. As of August 2021, about 32% of MCC’s evaluation portfolio consists of impact evaluations, and 68% consists of performance evaluations. All MCC impact evaluations use random assignment to determine which groups or individuals will receive an MCC intervention, which allows for a counterfactual and thus for attribution to MCC’s project, and best enables MCC to measure its impact in a fair and transparent way. Each evaluation is conducted according to the program’s Monitoring and Evaluation (M&E) Plan, in accordance with MCC’s Policy for M&E. 
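To make the counterfactual logic described above concrete, here is a minimal, self-contained sketch in Python. It is purely illustrative: the household counts, baseline income, and effect size are hypothetical, and it is not MCC’s actual estimation procedure; it only shows why randomly assigning an intervention lets a simple difference in mean outcomes estimate a program’s impact.

```python
# Illustrative sketch only: hypothetical data, not MCC's estimation procedure.
import random
import statistics

random.seed(42)

# Hypothetical pool of eligible households.
households = list(range(1000))

# Random assignment: each household has an equal chance of receiving the
# intervention, so treatment and control groups are comparable on average.
treatment = set(random.sample(households, k=500))

def observed_income(hh):
    baseline = random.gauss(100, 15)        # hypothetical baseline income
    effect = 10 if hh in treatment else 0   # hypothetical true program effect
    return baseline + effect

outcomes = {hh: observed_income(hh) for hh in households}

treated_mean = statistics.mean(outcomes[hh] for hh in households if hh in treatment)
control_mean = statistics.mean(outcomes[hh] for hh in households if hh not in treatment)

# Because assignment was random, the control group approximates the counterfactual,
# so the difference in means estimates the program's impact.
print(f"Estimated impact: {treated_mean - control_mean:.1f}")
```

Running the sketch recovers an estimate close to the simulated effect of 10, which is the basic logic that lets an impact evaluation attribute observed changes to the program rather than to outside trends.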
Score
9
U.S. Department of Education
2.1 Did the agency have an agency-wide evaluation policy? (Example: Evidence Act 313(d))
  • The Department’s Evaluation Policy is posted online at ed.gov/data. Key features of the policy include the Department’s commitment to: (1) independence and objectivity; (2) relevance and utility; (3) rigor and quality; (4) transparency; and (5) ethics. Special features include additional guidance to ED staff on considerations for evidence-building conducted by ED program participants, which emphasizes the need for grantees to build evidence in a manner consistent with the parameters of their grants (e.g., purpose, scope, and funding levels), up to and including rigorous evaluations that meet What Works Clearinghouse (WWC) standards without reservations.
2.2 Did the agency have an agency-wide evaluation plan? (Example: Evidence Act 312(b))
  • The Department’s FY22 Annual Evaluation Plan is posted at https://www.ed.gov/data under “FY22 Evidence-Building Deliverables.” The FY23 Plan will be posted there in February 2022, concurrent with the release of the President’s Budget. 
2.3 Did the agency have a learning agenda (evidence-building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda? (Example: Evidence Act 312)
  • The Department submitted its FY22-FY26 Learning Agenda in concert with its FY22-FY26 Strategic Plan to OMB in September 2021; the previous version is available online. In August 2021, ED published a Federal Register notice seeking comment on key topics within the Learning Agenda and will continue to seek stakeholder feedback on the document. ED will publish its Learning Agenda in February 2022 as part of the Agency’s Strategic Plan.
2.4 Did the agency publicly release all completed program evaluations?
  • IES publicly releases all peer-reviewed publications from its evaluations on the IES website and also in the Education Resources Information Center (ERIC). Many IES evaluations are also reviewed by its What Works Clearinghouse. IES also maintains profiles of all evaluations on its website, both completed and ongoing, which include key findings, publications, and products. IES regularly conducts briefings on its evaluations for ED, the Office of Management and Budget, Congressional staff, and the public. 
2.5 What is the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts? (Example: Evidence Act 315, subchapter II (c)(3)(9))
  • The Department submitted its FY22-FY26 Capacity Assessment in concert with its FY22-FY26 Strategic Plan to OMB in September 2021. ED will publish its FY22-FY26 Capacity Assessment in February 2022 as part of the Agency’s Strategic Plan.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • The IES website includes a searchable database of planned and completed evaluations, including those that use experimental, quasi-experimental, or regression discontinuity designs. All impact evaluations rely upon experimental trials. Other methods, including matching and regression discontinuity designs, are classified as rigorous outcomes evaluations. IES also publishes studies that are descriptive or correlational in nature, including implementation studies and less rigorous outcomes evaluations.
Score
10
U.S. Agency for International Development
2.1 Did the agency have an agency-wide evaluation policy? (Example: Evidence Act 313(d))
  • The agency-wide USAID Evaluation Policy, published in January 2011 and updated in October 2016 and April 2021, incorporates changes that better integrate with USAID’s Program Cycle Policy and ensure compliance with the Foreign Aid Transparency and Accountability Act (FATAA) and the Foundations for Evidence-Based Policymaking Act of 2018. The 2021 changes to the evaluation policy updated evaluation requirements to simplify implementation and increase the breadth of evaluation coverage, dissemination, and utilization.
  • The policy also establishes new requirements that will allow for the majority of program funds to be subject to external evaluations. The requirements include the following: (1) at least one evaluation per intermediate result (IR) defined in the operating unit’s strategy; (2) at least one evaluation per activity (contracts, orders, grants, and cooperative agreements) with a budget expected to be $20 million or more; and (3) an impact evaluation for any new, untested approach that is anticipated to be expanded in scale and scope. These requirements are communicated primarily through the USAID Automated Directives System (ADS) 201.
  • The Evaluation Policy treats consultation with in-country partners and beneficiaries as essential and requires that evaluation reports include sufficient local contextual information. To make the conduct and practice of evaluations more inclusive and relevant to the country context, the policy requires that evaluations be consistent with institutional aims of local ownership through respectful engagement with all partners, including local beneficiaries and stakeholders, while leveraging and building local capacity for program evaluation. As a result, the policy expects that evaluation specialists from partner countries who have appropriate expertise will lead and/or be included in evaluation teams. In addition, USAID focuses its priorities within its sectoral programming on supporting partner government and civil society capacity to undertake evaluations and use the results generated. Data from the USAID Evaluation Registry indicated that, annually, about two-thirds of evaluations were conducted by teams that included one or more local experts. However, while local experts are often part of the team, it remains rare for a local expert to serve as the evaluation team lead on USAID evaluations.
2.2 Did the agency have an agency-wide evaluation plan? (Example: Evidence Act 312(b))
  • USAID has produced an agency-wide Annual Evaluation Plan for FY22. This plan fulfills the Evidence Act requirement that all federal agencies develop an Annual Evaluation Plan describing the significant evaluation activities the agency plans to conduct in the fiscal year following the year in which the plan is submitted. The plan contains 35 significant evaluations, including evaluations that address questions from the Agency-wide Learning Agenda; performance evaluations of activities with budgets of $40 million or more; impact evaluations; and ex-post evaluations.
  • USAID has an agency-wide evaluation registry that collects information on all evaluations planned to commence within the next three years, as well as tracking ongoing and completed evaluations. Currently, this information is used internally and is not published. To meet the Evidence Act requirement, in March 2021 USAID published its Annual Evaluation Plan for FY22 on the Development Experience Clearinghouse. A draft agency-wide evaluation plan for FY23 will also be included in the Agency’s draft Annual Performance Plan/Annual Performance Report, submitted to OMB in September 2021.
  • In addition, USAID’s Office of Learning, Evaluation, and Research works with bureaus to develop internal annual Bureau Monitoring, Evaluation and Learning Plans that review evaluation quality and evidence building and use within each bureau, and that identify challenges and priorities for the year ahead.
2.3 Did the agency have a learning agenda (evidence-building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda? (Example: Evidence Act 312)
  • USAID’s agency-wide learning agenda was first established in 2018, prior to the passage of the Evidence Act. The initial set of questions, titled the Self-Reliance Learning Agenda, was developed through a strongly consultative process with internal and external stakeholders and represented the Agency’s priority learning needs related to the Journey to Self-Reliance. Throughout implementation of the learning agenda, USAID has continued to engage external stakeholders through learning events, invitations to share evidence, and by making learning agenda products and resources publicly available on USAID.gov.
  • As priorities shift, it is essential that the Agency Learning Agenda adapt to continue to meet the learning needs of the Agency. The Agency Learning Agenda is undergoing a revision process to incorporate new Agency priorities and align with the FY22-26 Joint Strategic Plan. Policy areas identified for inclusion are COVID-19, climate, and diversity, equity, and inclusion (DEI). Although USAID is still determining where to focus learning efforts, the Agency is committed to furthering the generation and use of evidence to inform agency policies, programs, and operations related to DEI and other critical areas.
  • Consultations with internal and external stakeholders are central to the revision process. Consultations aim to capture a small, prioritized set of Agency learning needs related to Agency policy priorities and to identify opportunities for collaboration with key stakeholders on this learning. The Agency Learning Agenda team is consulting Mission staff from across all regions in which USAID operates, as well as Washington Operating Units, to capture a diversity of internal voices. Consultations with external stakeholders include a selection of congressional committees, interagency partners (e.g., the Department of State), other donors, think tanks, nongovernmental researchers, and development-focused convening organizations. Revisions to the Agency Learning Agenda will incorporate feedback gathered through these stakeholder consultations, inputs from the Joint Strategic Planning process with the Department of State, and a stocktaking of learning agenda implementation to date, resulting in a prioritized set of questions that will focus Agency learning on top policy priorities.
2.4 Did the agency publicly release all completed program evaluations?
  • To increase access to and awareness of available evaluation reports, USAID has created an “Evaluations at USAID” dashboard of completed evaluations starting from FY16. The dashboard includes an interactive map showing countries and their respective completed evaluations for each fiscal year. Using filters, completed evaluations can be searched by operating unit, sector, evaluation purpose, evaluation type, and evaluation use. The dashboard also reports the percentage of USAID evaluations whose evaluation teams included local evaluation experts. The information for FY20 is being finalized and will be used to update the dashboard. The dashboard has also served as a resource for USAID Missions. For example, USAID/Cambodia and USAID/Azerbaijan used the dashboard to provide annotated bibliographies to inform the design of civic engagement activities.
  • In addition, all final USAID evaluation reports are published on the Development Experience Clearinghouse (DEC), except for a small number of evaluations that receive a waiver to public disclosure (typically less than 5% of the total completed in a fiscal year). The process to seek a waiver to public disclosure is outlined in the document Limitations to Disclosure and Exemptions to Public Dissemination of USAID Evaluation Reports and includes exceptions for circumstances such as those when “public disclosure is likely to jeopardize the personal safety of U.S. personnel or recipients of U.S. resources.”
  • A review of evaluations as part of an Equity Assessment report to OMB (in response to the Racial and Ethnic Equity Executive Order) found that evaluations that include analysis of racial and ethnic equity are more likely to be commissioned by USAID’s Africa Bureau and by USAID programs in Ethiopia, Tanzania, Kenya, Liberia, Ghana, Uganda, Malawi, Indonesia, India, Cambodia, Kosovo, and Colombia. Reports on agriculture, education, and health programs most often use the terms “race” and “ethnicity” in their evaluation findings.
2.5 What is the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts? (Example: Evidence Act 315, subchapter II (c)(3)(9))
  • USAID recognizes that sound development programming relies on strong evidence that enables policymakers and program planners to make decisions, improve practice, and achieve development outcomes. As one of the deliverables of the Evidence Act, USAID submitted an interim Capacity Assessment to OMB in September 2020. This report provided an initial overview of the coverage, quality, methods, effectiveness, and independence of statistics, evaluation, research, and analysis functions and activities within USAID. The report demonstrates that evaluations conducted by operating units cover the range of program areas of USAID foreign assistance investment. Economic growth; health; and democracy, human rights, and governance accounted for more than three-quarters of evaluations completed by the Agency in FY19.
  • In addition, USAID has commissioned a Capacity Assessment in response to the Evidence Act requirements. The assessment is using a four-phased approach: assessment design, implementation and analysis, reports, and communication/dissemination. USAID is currently in Phase 3, which involves developing a Maturity Model to assess the Agency’s capacity to generate, manage, and use evidence.
  • USAID staff also review evaluation quality on an ongoing basis and review the internal Bureau Monitoring, Evaluation and Learning Plans referenced in 2.2 above. Most recently, USAID completed a review of the quality of its impact evaluations, assessing all 133 USAID-funded impact evaluation reports published between FY12 and FY19. In addition, several studies over the previous several years have looked at parts of this question. These include GAO reports, such as Agencies Can Improve the Quality and Dissemination of Program Evaluations; From Evidence to Learning: Recommendations to Improve Foreign Assistance Evaluations; reviews by independent organizations, like the Center for Global Development’s Evaluating Evaluations: Assessing the Quality of Aid Agency Evaluations in Global Health – Working Paper 461; and studies commissioned by USAID, such as the Meta-Evaluation of Quality and Coverage of USAID Evaluations 2009-2012. These studies generally show that USAID’s evaluation quality is improving over time, with room for continued improvement.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • USAID uses rigorous evaluation methods, including randomized controlled trials (i.e., random assignment studies) and quasi-experimental methods, for research and evaluation purposes. For example, in FY20, USAID’s Development Innovation Ventures (DIV) funded 10 impact evaluations, nine of which used randomized controlled trials.
  • DIV makes significant investments using randomized controlled trials and quasi-experimental evaluations to provide evidence of impact for pilot approaches to be considered for scaled funding. USAID is also experimenting with cash benchmarking, using household grants to benchmark traditional programming. USAID has undertaken five randomized controlled trials (RCTs) of household grants or “cash transfer” programs, three of which compare more traditional programs against household grants.
Score
10
Administration for Children and Families (HHS)
2.1 Did the agency have an agency-wide evaluation policy? (Example: Evidence Act 313(d))
  • ACF’s evaluation policy confirms ACF’s commitment to conducting evaluations and using evidence from evaluations to inform policy and practice. ACF seeks to promote rigor, relevance, transparency, independence, and ethics in the conduct of evaluations. ACF established the evaluation policy in 2012 and published it in the Federal Register on August 29, 2014. In late 2019, ACF released a short video about the policy’s five principles and how ACF uses them to guide its work.
  • As ACF’s primary representative to the HHS Evidence and Evaluation Council, the ACF Deputy Assistant Secretary for Planning, Research, and Evaluation co-chairs the HHS Evaluation Policy Subcommittee, the body responsible for developing an HHS-wide evaluation policy. HHS released its Department-wide evaluation policy in 2021.
2.2 Did the agency have an agency-wide evaluation plan? (Example: Evidence Act 312(b))
  • In accordance with OMB guidance, ACF contributed to the HHS-wide evaluation plan. The Office of Planning, Research, and Evaluation (OPRE) also annually identifies questions relevant to the programs and policies of ACF and proposes a research and evaluation spending plan to the Assistant Secretary for Children and Families. This plan focuses on activities that OPRE plans to conduct during the following fiscal year.
2.3 Did the agency have a learning agenda (evidence-building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda? (Example: Evidence Act 312)
  • In accordance with OMB guidance, HHS is developing an HHS-wide evidence-building plan. To develop this document, HHS asked each sub-agency to submit examples of its priority research questions, potential data sources, anticipated approaches, challenges and mitigation strategies, and stakeholder engagement strategies. Drawing from its existing program-specific learning agendas and research plans, ACF contributed example priority research questions and anticipated learning activities for inclusion in the HHS evidence-building plan. The HHS evidence-building plan is set to be released in early 2022 as part of the HHS strategic plan.
  • In 2020, ACF released a research and evaluation agenda, describing research and evaluation activities and plans in nine ACF program areas with substantial research and evaluation portfolios: Adolescent Pregnancy Prevention and Sexual Risk Avoidance, Child Care, Child Support Enforcement, Child Welfare, Head Start, Health Profession Opportunity Grants, Healthy Marriage and Responsible Fatherhood, Home Visiting, and Welfare and Family Self-Sufficiency.
  • In addition to fulfilling requirements of the Evidence Act, ACF has supported and continues to support systematic learning and stakeholder engagement activities across the agency. For example:
    • Many ACF program offices have developed or are currently developing detailed program-specific learning agendas to systematically learn about and improve their programs—studying existing knowledge, identifying gaps, and setting program priorities. For example, ACF and HRSA have developed a learning agenda for the MIECHV program, and ACF is supporting ongoing efforts to build a learning agenda for ACF’s Healthy Marriage and Responsible Fatherhood (HMRF) programming.
    • ACF will continue to release annual portfolios that describe key findings from past research and evaluation work and how ongoing projects are addressing gaps in the knowledge base to answer critical questions in the areas of family self-sufficiency, child and family development, and family strengthening. In addition to describing key questions, methods, and data sources for each research and evaluation project, the portfolios provide narratives describing how evaluation and evidence-building activities unfold in specific ACF programs and topical areas over time, and how current research and evaluation initiatives build on past efforts and respond to remaining gaps in knowledge.
    • ACF works closely with many stakeholders to inform priorities for its research and evaluation efforts and solicits their input through conferences and meetings such as the Research and Evaluation Conference on Self-Sufficiency, the National Research Conference on Early Childhood, and the Child Care and Early Education Policy Research Consortium Annual Meetings; meetings with ACF grantees and program administrators; engagement with training and technical assistance networks; surveys, focus groups, interviews, and other activities conducted as a part of research and evaluation studies; and through both project-specific and topical technical working groups, including the agency’s Family Self-Sufficiency Research Technical Working Group. ACF’s ongoing efforts to engage its stakeholders will be described in more detail in ACF’s forthcoming description of its learning activities.
2.4 Did the agency publicly release all completed program evaluations?
  • ACF’s evaluation policy requires that “ACF will release evaluation results regardless of findings…Evaluation reports will present comprehensive findings, including favorable, unfavorable, and null findings. ACF will release evaluation results timely–usually within two months of a report’s completion.” ACF has publicly released the findings of all completed evaluations to date. In 2020, OPRE released over 130 research publications. OPRE publications are publicly available on the OPRE website.
  • Additionally, ACF develops and uses research and evaluation methods that are appropriate for studying diverse populations, taking into account historical and cultural factors and planning data collection with disaggregation and subgroup analyses in mind. Whenever possible, ACF projects report on subgroups. Recent examples include the Parents and Children Together (PACT) Evaluation substudy of program strategies and adaptations used by selected responsible fatherhood programs serving Hispanic fathers, and the American Indian and Alaska Native Head Start Family and Child Experiences Survey (AI/AN FACES), which has been fielded to capture information on the characteristics, experiences, and development of Head Start children and families in Region XI, which predominantly serves AI/AN children and families. In February 2021, OPRE released a brief on Methods, Challenges, and Best Practices for Conducting Subgroup Analysis.
2.5 What is the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts? (Example: Evidence Act 315, subchapter II (c)(3)(9))
  • In accordance with OMB guidance, ACF is contributing to an HHS-wide capacity assessment, which is set to be released in early 2022 as part of the HHS strategic plan. To support these and related efforts, OPRE launched the ACF Evidence Capacity Support project in 2020. The Evidence Capacity project supports ACF’s efforts to build and strengthen programmatic and operational evidence capacity, including supporting learning agenda development and the development of other foundational evidence through administrative data analysis. Given the centrality of data capacity to evidence capacity, ACF has also been partnering with the HHS Office of the Chief Data Officer (OCDO) to develop and pilot test a tool for conducting an HHS-wide data capacity assessment, consistent with Title II Evidence Act requirements. To support modernization of ACF’s data governance and related capacity, ACF launched the ACF Data Governance Consulting and Support project. The Data Governance Support project provides information gathering, analysis, consultation, and technical support to ACF and its partners to strengthen data governance practices within ACF offices, and between ACF and its partners at the federal, state, local, and tribal levels.
  • ACF has also sought to build capacity to support culturally responsive evaluation, including sponsorship of the National Research Center on Hispanic Children & Families and the Tribal Early Childhood Research Center, and development of “A Roadmap for Collaborative and Effective Evaluation in Tribal Communities.” ACF also has a new grant opportunity for an African American Children and Families Research Center, which is intended to lead and support research on the needs of African American populations served by ACF and on promising approaches to promote social and economic well-being among low-income African American populations. The center is further intended to provide leadership on culturally competent research that can inform policies concerning low-income African American populations and to foster significant scholarship regarding the needs and experiences of the diverse African American population throughout the nation. ACF also continues to support the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts as follows:
  • Coverage: ACF conducts research in areas where Congress has given authorization and appropriations. Programs for which ACF is able to conduct research and evaluation using dedicated funding include Temporary Assistance for Needy Families, Health Profession Opportunity Grants, Head Start, Child Care, Child Welfare, Home Visiting, Healthy Marriage and Responsible Fatherhood, Personal Responsibility Education Program, Sexual Risk Avoidance Education, Teen Pregnancy Prevention, Runaway and Homeless Youth, Family Violence Prevention Services, and Human Trafficking services. These programs represent approximately 85% of overall ACF spending.
  • Quality: ACF’s Evaluation Policy states that ACF is committed to using the most rigorous methods that are appropriate to the evaluation questions and the populations with whom research is being conducted and feasible within budget and other constraints, and that rigor is necessary not only for impact evaluations, but also for implementation/process evaluations, descriptive studies, outcome evaluations, and formative evaluations; and in both qualitative and quantitative approaches.
  • Methods: ACF uses a range of evaluation methods. ACF conducts impact evaluations as well as implementation and process evaluations, cost analyses and cost benefit analyses, descriptive and exploratory studies, research syntheses, and more. ACF also develops and uses methods that are appropriate for studying diverse populations, taking into account historical and cultural factors and planning data collection with disaggregation and subgroup analyses in mind. ACF is committed to learning about and using the most scientifically advanced approaches to determining effectiveness and efficiency of ACF programs; to this end, OPRE annually organizes meetings of scientists and research experts to discuss critical topics in social science research methodology and how innovative methodologies can be applied to policy-relevant questions.
  • Effectiveness: ACF’s Evaluation Policy states that ACF will conduct relevant research and disseminate findings in ways that are accessible and useful to policymakers, practitioners, and the diverse populations that ACF programs serve. OPRE engages in ongoing collaboration with ACF program office staff and leadership to interpret research and evaluation findings and to identify their implications for programmatic and policy decisions such as ACF regulations and funding opportunity announcements. For example, when ACF’s Office of Head Start significantly revised its Program Performance Standards–the regulations that define the standards and minimum requirements for Head Start services–the revisions drew from decades of OPRE research and the recommendations of the OPRE-led Secretary’s Advisory Committee on Head Start Research and Evaluation. Similarly, ACF’s Office of Child Care drew from research and evaluation findings related to eligibility redetermination, continuity of subsidy use, use of funds dedicated to improving the quality of programs, and other information to inform the regulations accompanying the reauthorization of the Child Care and Development Block Grant.
  • Independence: ACF’s Evaluation Policy states that independence and objectivity are core principles of evaluation and that it is important to insulate evaluation functions from undue influence and from both the appearance and the reality of bias. To promote objectivity, ACF protects independence in the design, conduct, and analysis of evaluations. To this end, ACF conducts evaluations through the competitive award of grants and contracts to external experts who are free from conflicts of interest. In addition, the Deputy Assistant Secretary for Planning, Research, and Evaluation, a career civil servant, has authority to approve the design of evaluation projects and analysis plans and to approve, release, and disseminate evaluation reports.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • ACF’s Evaluation Policy states that in assessing the effects of programs or services, ACF evaluations will use methods that isolate to the greatest extent possible the impacts of the programs or services from other influences and that for causal questions, experimental approaches are preferred. As of April 2021, at least 20 ongoing OPRE projects included one or more random assignment impact evaluations. To date in FY21, OPRE has released RCT impact findings related to Health Profession Opportunity Grants and TANF job search assistance strategies.
  • OPRE’s template for research contracts includes a standard task for stakeholder engagement, which states that “involving stakeholders in the evaluation may increase understanding, acceptance, and utilization of evaluation findings… Where appropriate, stakeholders should have the opportunity for input at multiple phases of a project…accomplished in a transparent way while safeguarding the objectivity and independence of the study.” Four OPRE projects focused on early childhood programs that serve American Indian and Alaska Native (AIAN) families are exemplars of using a stakeholder-engaged approach at each stage of the research cycle to understand and co-create knowledge: the Tribal Early Childhood Research Center (TRC), AIAN Family and Child Experiences Survey (FACES) 2015, Multi-Site Implementation Evaluation of MIECHV with AIAN Families (MUSE), and AIAN FACES 2019. Additionally, a planned solicitation for FY22, Advancing Contextual Analysis and Methods of Participant Engagement in OPRE (CAMPE), will explore how OPRE can further incorporate participatory methods and analysis of contextual factors into research and evaluation projects.
Score
8
AmeriCorps
2.1 Did the agency have an agency-wide evaluation policy? (Example: Evidence Act 313(d))
  • AmeriCorps has an evaluation policy that presents five key principles that govern the agency’s planning, conduct, and use of program evaluations: rigor, relevance, transparency, independence, and ethics.
2.2 Did the agency have an agency-wide evaluation plan? (Example: Evidence Act 312(b))
  • In FY19, AmeriCorps finalized and posted a five-year, agency-wide strategic evaluation plan. AmeriCorps is in the process of updating its learning agenda (strategic evidence plan) to align with the agency’s FY22-26 strategic plan.
2.3 Did the agency have a learning agenda (evidence-building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda? (Example: Evidence Act 312)
  • AmeriCorps uses the terms learning agenda, evaluation plan, and strategic evidence-building plan synonymously. AmeriCorps has a strategic evidence plan that includes an evergreen learning agenda. The plan has been updated and submitted to OMB for review and comment. In addition, the draft document has been shared with AmeriCorps State and National State Commissions who will have an opportunity to provide feedback for the remainder of 2021. Additionally, the agency is devising a plan to engage external stakeholders in commenting on the revised learning agenda.
2.4 Did the agency publicly release all completed program evaluations?
  • All completed evaluation reports are posted to the Evidence Exchange, an electronic repository for evaluation studies and other reports. This virtual repository was launched in September 2015. 
2.5 What is the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts? (Example: Evidence Act 315, subchapter II (c)(3)(9))
  • AmeriCorps has built a comprehensive portfolio of research projects to assess the extent to which the agency is achieving its mission. As findings emerge, future studies are designed to continuously build the agency’s evidence base. The agency’s research and evaluation (R&E) work relies on scholarship in relevant fields of academic study; a variety of research and program evaluation approaches, including field, experimental, and survey research; multiple data sources, including internal and external administrative data; and different statistical analytic methods. AmeriCorps relies on partnerships with universities and third-party research firms to ensure independence and access to state-of-the-art methodologies. AmeriCorps supports its grantees with evaluation technical assistance and courses to ensure their evaluations are of the highest quality, and requires grantees receiving $500,000 or more in annual funding to engage an external evaluator. These efforts have resulted in a robust body of evidence that national service (1) provides positive benefits to participants, (2) strengthens nonprofit organizations, and (3) enables national service programs to effectively address local issues, along with a suite of AmeriCorps resources for evaluations.
  • While AmeriCorps is a non-CFO Act agency, and therefore not required to comply with the Evidence Act, including the mandated Evidence Capacity Assessment, the agency is procuring a third party to support analysis of the agency’s evaluation, research, statistical, and analysis workforce capacity.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • AmeriCorps uses the research design most appropriate for addressing the research question. When experimental or quasi-experimental designs are warranted, the agency uses them and encourages its grantees to use them, as noted in the agency evaluation policy: “AmeriCorps is committed to using the most rigorous methods that are appropriate to the evaluation questions and feasible within statutory, budget and other constraints.” As of September 2021, AmeriCorps has received 46 grantee evaluation reports that use experimental design and 140 that use quasi-experimental design.
Score
7
U.S. Department of Labor
2.1 Did the agency have an agency-wide evaluation policy? (Example: Evidence Act 313(d))
  • DOL has an Evaluation Policy that formalizes the principles that govern all program evaluations in the Department, including methodological rigor, independence, transparency, ethics, and relevance. The policy represents a commitment to using evidence from evaluations to inform policy and practice. The policy states that “evaluations should be designed to address DOL’s diverse programs, customers, and stakeholders; and DOL should encourage diversity among those carrying out the evaluations.”
2.2 Did the agency have an agency-wide evaluation plan? (Example: Evidence Act 312(b))
  • The Chief Evaluation Office (CEO) develops, implements, and publicly releases an annual DOL evaluation plan. The evaluation plan is based on the agency learning agendas as well as the Department’s Strategic Plan priorities, statutory requirements for evaluations, and Secretarial and Administration priorities. As of August 2021, the Department is seeking public input and comment on the draft FY22-26 strategic and evaluation plans. The evaluation plan includes the studies the CEO intends to undertake in the next year using set-aside dollars. Appropriations language requires the Chief Evaluation Officer to submit a plan to the U.S. Senate and House Committees on Appropriations outlining the evaluations that the office will carry out using dollars transferred to the CEO; the DOL evaluation plan serves that purpose. The CEO also works with agencies to undertake evaluations and evidence-building strategies to answer other questions of interest identified in learning agendas but not undertaken directly by the CEO.
2.3 Did the agency have a learning agenda (evidence-building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda? (Example: Evidence Act 312)
  • In FY21, the Department is developing its annual evaluation plan, building from individual agencies’ learning agendas to create a combined document. DOL has leveraged its existing practices and infrastructure to develop the broad, four-year prospective research agenda, or Evidence-Building Plan, per the Evidence Act requirement. Both documents will outline the process for internal and external stakeholder engagement.
  • The draft FY22-26 Evidence-Building Plan identifies “Equity in Employment and Training Programs” and “Barriers to Women’s Employment” as priority areas.
2.4 Did the agency publicly release all completed program evaluations?
  • All DOL program evaluation reports and findings funded by the CEO are publicly released and posted on the complete reports section of the CEO website. DOL agencies, such as the Employment and Training Administration (ETA), also post and release their own research and evaluation reports. Some program evaluations include data and results disaggregated by race, ethnicity, and gender, among others, where possible. DOL’s website also provides accessible summaries and downloadable one-pagers on each study. CEO is also in the process of ramping up additional methods of communicating and disseminating CEO-funded studies and findings, and published its first quarterly newsletter in September 2020.
2.5 What is the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts? (Example: Evidence Act 315, subchapter II (c)(3)(9))
  • The U.S. Department of Labor’s (DOL) Chief Evaluation Office (CEO) has sponsored an assessment of DOL’s baseline capacity to produce and use evidence, with the aim of helping the Department and its agencies identify key next steps to improve evidence capacity. CEO developed technical requirements and contracted with the American Institutes for Research (AIR)/IMPAQ International, LLC (IMPAQ) (research team) to design and conduct this independent, third-party assessment. 
    This assessment included the 16 DOL agencies in the Department’s Strategic Plan. It reflects data collected through a survey of targeted DOL staff, focus groups with selected DOL staff, and a review of selected evidence documents.
  • DOL’s Evaluation Policy affirms the agency’s commitment to high-quality, methodologically rigorous research, carried out through funding independent research activities. Further, CEO staff have expertise in research and evaluation methods as well as in DOL programs and policies and the populations they serve. The CEO also employs technical working groups, whose members have deep technical and subject-matter expertise, on the majority of evaluation projects. The CEO has leveraged the FY20 learning agenda process to create an interim Capacity Assessment, per Evidence Act requirements, and is conducting a more detailed assessment of individual agencies’ capacity, as well as DOL’s overall capacity, in these areas for publication in 2022.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • DOL employs a full range of evaluation methods to answer key research questions of interest, including, when appropriate, impact evaluations. Among DOL’s active portfolio of approximately 50 projects, study types range from rigorous evidence syntheses to implementation studies to quasi-experimental outcome studies to impact studies. Examples of current DOL studies with a random assignment component include an evaluation of a Job Corps demonstration pilot, the Cascades Job Corps College and Career Academy. An example of a multi-arm randomized controlled trial was the Reemployment Services and Eligibility Assessments evaluation, which assessed a range of strategies to reduce unemployment insurance duration and improve employment and wage outcomes.
Score
10
U.S. Dept. of Housing & Urban Development
2.1 Did the agency have an agency-wide evaluation policy? (Example: Evidence Act 313(d))
  • PD&R published a Program Evaluation Policy in 2016 that established core principles and practices of PD&R’s evaluation and research activities. The six core principles are rigor, relevance, transparency, independence, ethics, and technical innovation.
  • In August 2021, PD&R updated the 2016 Program Evaluation Policy to address issues that have arisen since 2016, as well as stakeholder input received via a town hall that PD&R hosted to discuss its experience with sponsoring and publishing evaluations. The new HUD Program Evaluation Policy expands the policy’s scope to all of HUD and includes principles and practices intended to ensure racial equity, diversity, and inclusion in PD&R’s evaluation and research activities. The language related to equity was developed in coordination with the Department-wide Equity Assessment that HUD is undertaking in response to Executive Order 13985, Advancing Racial Equity and Support for Underserved Communities.
2.2 Did the agency have an agency-wide evaluation plan? (Example: Evidence Act 312(b))
  • HUD’s learning agendas, called the Research Roadmap, have served as agency-wide evaluation plans that list and describe research and evaluation priorities for a five-year planning period. HUD released the 2020 Roadmap Update in December 2020. Annual evaluation plans are developed based on a selection of Roadmap proposals, newly emerging research needs, and incremental funding needs for major ongoing research and are submitted to Congress in association with PD&R’s annual budget requests. Actual research activities are substantially determined by Congressional funding and guidance. Under the Evidence Act, PD&R prepares public Annual Evaluation Plans informed by the new Research Roadmap to be submitted in conjunction with the Annual Performance Plan.
2.3 Did the agency have a learning agenda (evidence-building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda? (Example: Evidence Act 312)
  • HUD’s Research Roadmap has served as the Department’s evidence-building plan and learning agenda for eight years, and a new Roadmap was developed in FY21. HUD’s participatory process (see Appendix A of Research Roadmap: 2020 Update) engages internal and external stakeholders to identify research questions and other evidence-building activities to support effective policy-making. Stakeholders include program partners in state and local governments and the private sector; researchers and academics; policy officials; and members of the general public who frequently access the HUDuser.gov portal. Outreach mechanisms for Roadmap development include email, web forums, conferences and webcasts, and targeted listening sessions.
  • The 2020 Roadmap Update served as the Department’s draft Learning Agenda under the Evidence Act. To finalize the Learning Agenda, PD&R staff will align foundational learning questions with HUD’s new strategic plan and conduct an additional round of internal stakeholder engagement in FY21 focused on identifying priority research questions across the Department. HUD is also seeking input on the 2020 Roadmap via email and web forums. PD&R staff will coordinate with the team conducting the Department-wide Equity Assessment in response to the executive order on Advancing Racial Equity and Support for Underserved Communities to identify priority research questions and evidence gaps that emerge as part of the assessment. HUD’s Equity Assessment has prioritized stakeholder engagement as an area for immediate analysis by all program offices. The equity assessment seeks to identify and utilize the knowledge, both lived and professional, of stakeholders who have been historically underrepresented in the Federal government and underserved by, or subject to discrimination in, federal policies and programs. Findings from this assessment will further inform HUD’s long-term “equity transformation,” which aims to sustainably embed and improve equity throughout all of HUD’s work. HUD will release its long-term Action Plan to increase equity in decision-making and access to programs and benefits on January 20, 2022, pursuant to Executive Order 13985.
2.4 Did the agency publicly release all completed program evaluations?
  • PD&R’s Program Evaluation Policy requires timely publication and dissemination of all evaluations that meet standards of methodological rigor. Completed evaluations and research reports are posted on PD&R’s website, HUDUSER.gov. Additionally, the policy requires that research and evaluation contracts include language allowing researchers to independently publish results, even without HUD approval, after not more than six months. HUD’s publicly released program evaluations typically include data and results disaggregated by race, ethnicity, and gender, where the data permit such disaggregation. For example, in 2020 HUD expanded the detail of race and ethnicity breakouts in the Worst Case Housing Needs reports to Congress to the full extent permitted by the data. Executive summaries will highlight disparate impacts if they are found to be statistically significant; otherwise, such findings may be found in the main body of the report or its appendices.
  • PD&R is in the process of reorganizing its published research and enhancing the search capabilities on HUDUSER.gov. These steps are being implemented to enhance the usability of HUD’s research resources for researchers, policymakers, and the general public.
2.5 What is the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts? (Example: Evidence Act 315, subchapter II (c)(3)(9))
  • PD&R is HUD’s independent evaluation office, with scope spanning all the Department’s program operations. In FY20 PD&R led an effort to assess the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts, consistent with the values established in HUD’s Evaluation Policy. The Research Roadmap: 2020 Update covers much of this content, and a formal Capacity Assessment process was designed by evaluation leaders in coordination with the Chief Data Officer and performance management personnel. The draft Capacity Assessment addresses updated content requirements of OMB Circular A-11 (2020) and includes primary data collection through an exploratory key informant survey of senior managers across the Department. The identified weaknesses in evidence-building capacity will become the focus of subsequent in-depth assessments and interventions to be integrated in the Department’s next Strategic Plan.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
Score
10
Administration for Community Living (HHS)
2.1 Did the agency have an agency-wide evaluation policy? (Example: Evidence Act 313(d))
  • ACL’s public evaluation policy confirms ACL’s commitment to conducting evaluations and using evidence from evaluations to inform policy and practice. ACL seeks to promote rigor, relevance, transparency, independence, and ethics in the conduct of evaluations. The policy addresses each of these principles. The policy was updated in 2021 to better reflect OMB guidance provided in M-20-12 and to more explicitly affirm ACL’s commitment to equity in evaluation.
2.2 Did the agency have an agency-wide evaluation plan? (Example: Evidence Act 312(b))
  • ACL’s agency-wide evaluation plan was submitted to the Department of Health and Human Services (HHS) in support of HHS’ requirement to submit an annual evaluation plan to OMB in conjunction with its Agency Performance Plan. ACL’s annual evaluation plan includes the evaluation activities the agency plans related to the learning agenda and any other “significant” evaluation, such as those required by statute. The plan describes the systematic collection and analysis of information about the characteristics and outcomes of programs, projects, and processes as a basis for judgments, to improve effectiveness, and/or inform decision-makers about current and future activities.
2.3 Did the agency have a learning agenda (evidence-building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda? (Example: Evidence Act 312)
  • Based on the learning agenda approach that ACL adopted in 2018, ACL published a learning agenda in FY20. In developing the plan, ACL engaged stakeholders through meetings with program staff and grantees as required under OMB M-19-23. Most meetings with stakeholder groups, such as conference sessions, were put on hold for 2020 due to COVID-19 travel restrictions. In 2021, ACL communicated with stakeholder groups to contribute to ACL’s learning activities. These efforts included working with members of the RAISE Family Caregiving Advisory Council and a range of stakeholders to inform changes to the 2021 data collection under the National Survey of Older Americans Act Participants. In 2021, ACL also released a request for information (RFI) directed to small businesses to solicit research approaches related to ACL’s current research priorities.
2.4 Did the agency publicly release all completed program evaluations?
  • ACL releases all evaluation reports as well as interim information, such as issue briefs, webinar recordings, and factsheets, based on data from its evaluation and evidence-building activities.
2.5 Did the agency conduct an Evidence Capacity Assessment that addressed the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts? (Example: Evidence Act 315, subchapter II (c)(3)(9))
  • Staff from the Office of Performance and Evaluation (OPE) play an active role in HHS’s capacity assessment efforts, serving on the Capacity Assessment and Learning Agenda Subcommittees of the HHS Evidence and Evaluation Council. ACL’s self-assessment results were provided to HHS to support HHS’s submission of the required information to OMB. The self-assessment provided information about planning and implementing evaluation activities; disseminating best practices and findings; incorporating employee views and feedback; and carrying out capacity-building activities to use evaluation, research, and analysis approaches and data in day-to-day operations. Based on this information, in 2021 ACL focused on developing educational materials for ACL staff and data improvement tools for ACL grantees. In 2021 the ACL Data Council published a guide to evaluating system change initiatives, along with additional documents to promote responsible data usage: Data Quality 201: Data Visualization and Data Quality 202: Data Quality Standards. While designed initially for ACL staff, these resources are available on the ACL website and have been promoted through several industry conferences.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • Starting in 2020 and continuing into 2021, ACL is funding contracts to design the most rigorous evaluations appropriate for measuring the return on investment of the Aging Network, the extent to which ACL services address social determinants of health, and the value of volunteers to ACL programs. ACL typically funds evaluation design contracts, such as those for the Older Americans Act Title VI Tribal Grants Program evaluation and the Long Term Care Ombudsman evaluation, to determine the most rigorous evaluation approach that is feasible given the structure of a particular program. For full-coverage programs such as the Ombudsman program, comparison groups are not possible; where comparison groups are feasible, ACL most frequently uses propensity score matching to identify comparison group members (a minimal illustrative sketch follows these bullets). This was the approach used for the Older Americans Act Nutrition Services Program and National Family Caregivers Support Program evaluations and the Wellness Prospective Evaluation Final Report conducted by CMS in partnership with ACL.
  • ACL’s National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) funds the largest share of ACL’s RCTs, with 151 of 659 research projects (23%) employing a randomized clinical trial (RCT). To ensure research quality, NIDILRR adheres to strict peer reviewer evaluation criteria that are used in the grant award process. In addition, ACL’s evaluation policy states that “In assessing the effects of programs or services, ACL evaluations will use methods that isolate to the greatest extent possible the impacts of the programs or services from other influences such as trends over time, geographic variation, or pre-existing differences between participants and non-participants. For such causal questions, experimental approaches are preferred. When experimental approaches are not feasible, high-quality quasi-experiments offer an alternative.” ACL is in the process of implementing a method for rating each proposed evaluation against OMB’s Program Evaluation Standards and Practices as defined in OMB M-20-12.
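  • Illustrative sketch (hypothetical; not ACL’s code or data): the propensity score matching step referenced above can be outlined in a few lines of Python. The sketch assumes simple person-level records with a participation flag and a few baseline covariates; it fits a logistic regression to estimate each person’s propensity to participate, matches each participant to the nearest non-participant on that score, and checks whether baseline differences shrink after matching, which speaks to the “pre-existing differences” concern named in ACL’s evaluation policy. A production evaluation would go further (enforcing common support, applying calipers or matching with replacement, and estimating outcomes only on the matched sample); the sketch is meant only to make the matching logic concrete.

```python
# Hypothetical illustration of propensity score matching (not ACL's actual code or data).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Simulated person-level records: participants self-select, so baseline
# characteristics differ between participants and non-participants.
n = 2000
df = pd.DataFrame({
    "age": rng.normal(74, 8, n),
    "lives_alone": rng.integers(0, 2, n),
    "baseline_health": rng.normal(0, 1, n),
})
# Older people who live alone are more likely to participate (selection bias).
p_participate = 1 / (1 + np.exp(-(0.03 * (df["age"] - 74) + 0.5 * df["lives_alone"])))
df["participant"] = rng.binomial(1, p_participate)

# 1. Estimate propensity scores: predicted probability of participation given covariates.
covariates = ["age", "lives_alone", "baseline_health"]
model = LogisticRegression(max_iter=1000).fit(df[covariates], df["participant"])
df["pscore"] = model.predict_proba(df[covariates])[:, 1]

# 2. Match each participant to the nearest non-participant on the propensity score.
treated = df[df["participant"] == 1]
controls = df[df["participant"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(controls[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_controls = controls.iloc[idx.ravel()]

# 3. Check covariate balance: differences in baseline means should shrink after matching.
for col in covariates:
    before = df.groupby("participant")[col].mean().diff().iloc[-1]
    after = treated[col].mean() - matched_controls[col].mean()
    print(f"{col}: mean difference before={before:.3f}, after matching={after:.3f}")
```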
Score
8
Substance Abuse and Mental Health Services Administration (HHS)
2.1 Did the agency have an agency-wide evaluation policy? (Example: Evidence Act 313(d))
  • Under the Evidence Act, federal agencies are expected to expand their capacity for engaging in program evaluation by not only designating evaluation officers and developing learning agendas but also producing annual evaluation plans and enabling a workforce to conduct internal evaluations. In FY2020, SAMHSA developed a Standard Operating Procedure for Program Evaluations (Program Evaluation SOP) and completed a capacity assessment and evaluation plan as part of an HHS-wide initiative.
  • SAMHSA’s internal Evaluation Policy and Procedures (P and P), which functions as SAMHSA’s agency-wide evaluation policy, is currently being updated. The P and P documentation is being updated in coordination with the Office of Behavioral Health Equity (OBHE), as OBHE supports efforts to reduce disparities in mental and/or substance use disorders across populations. OBHE is organized around key strategies:
    1. The data strategy utilizes federal and community data to identify, monitor, and respond to behavioral health disparities.
    2. The policy strategy promotes policy initiatives that strengthen the impact of SAMHSA programs in advancing behavioral health equity.
    3. The quality practice and workforce development strategy expands the behavioral health workforce capacity to improve outreach, engagement, and quality of care for minority and underserved populations.
    4. The communication strategy increases awareness and access to information about behavioral health disparities and strategies to promote behavioral health equity.
  • OBHE seeks to impact SAMHSA policy and initiatives by:
    • Creating a more strategic focus on racial, ethnic, and LGBT+ populations in SAMHSA investments
    • Using a data-informed quality improvement approach to address racial and ethnic disparities in SAMHSA programs
    • Ensuring that SAMHSA policy, funding initiatives, and collaborations include emphasis on decreasing disparities
    • Implementing innovative, cost-effective training strategies for a diverse workforce
    • Promoting behavioral health equity at a national level
    • Serving as a trusted broker of behavioral health disparity and equity information
    • Providing consultations and presentations on issues related to behavioral health equity
2.2 Did the agency have an agency-wide evaluation plan? (Example: Evidence Act 312(b))
  • As part of the Evidence Act, agencies within HHS submitted a plan that lists and describes the specific evaluation activities the agency plans to undertake in the fiscal year following the year in which the evaluation plan is submitted (referred to as the HHS Evaluation Plan). The HHS Evaluation Plan and Evidence-Building Plan is organized based on priority areas drawn from HHS’ Departmental Priorities, the proposed Strategic Plan goals, and proposed Agency Priority Goals. Currently, SAMHSA’s evaluation plan is aligned with the Evidence Act. For FY22, SAMHSA’s research priority is: “How will SAMHSA collect, analyze, and disseminate data to inform policies, programs, and practices?” The agency has outlined four relevant research objectives.
  • SAMHSA, through its Office of Behavioral Health Equity, focuses on racial equity, diversity, and inclusion. As part of this work, each grantee is required to submit a Disparity Impact Statement (DIS), which requires grantees to focus on access to, use of, and outcomes from SAMHSA-funded services as they apply to underserved communities.
2.3 Did the agency have a learning agenda (evidence-building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda? (Example: Evidence Act 312)
  • SAMHSA submitted a learning agenda that is currently under review with HHS and OMB and is not yet publicly available. The learning agenda highlights key evaluation studies that reflect the administration’s priorities, including the following evaluation activities:

    • SAMHSA’s Report to Congress on Garrett Lee Smith (GLS) Youth Suicide Prevention and Early Intervention Program.
    • Summative Program Evaluations (e.g., the Strategic Prevention Framework for Prescription Drugs, or SPF-Rx). This program is designed to prevent prescription drug misuse among youth aged 12 to 17 and adults aged 18 and older. The program was developed to respond to a critical priority area in SAMHSA’s FY2019-FY2023 Strategic Plan (Priority 1: Combating the Opioid Crisis through Expansion of Prevention, Treatment and Recovery Support Services).
    • Performance Measurement of SAMHSA’s discretionary grants (40-50 Program Profiles).
    • Internal Formative Program Evaluations (e.g., Projects for Assistance in Transition from Homelessness, or PATH). The PATH evaluation report includes information on funding, staffing, numbers served/contacted and enrolled, client demographics, service provision, and service referrals made and attained. Data are submitted by the PATH providers via the SAMHSA PATH Data Exchange (PDX), though parts are to be provided through local Homeless Management Information Systems (HMIS). The PATH grantees’ State PATH Contacts (SPCs) approve the data submitted by their providers.
    • Evidence Reviews (e.g., implementation of medications for opioid use disorder (MOUD) in criminal justice settings), including through Evidence-Based Behavioral Practice (EBBP), a SAMHSA project that creates training resources to help bridge the gap between behavioral health research and practice.
2.4 Did the agency publicly release all completed program evaluations?
  • SAMHSA evaluations are funded from program funds used for discretionary grants, technical assistance, and evaluation activities. Evaluations have also been funded from funds previously designated for grants or other contract activities. A variety of evaluation models are used, including evaluations funded by the Centers (PEP-C); evaluations funded by the Centers but directed outside of SAMHSA (the Naloxone Education and Distribution Program, or PDO); and those that CBHSQ directly funds and executes (PATH PDX). Evaluations require different degrees of independence to ensure objectivity, and having multiple modeling options affords SAMHSA the latitude to enhance evaluation rigor and independence on a customized basis.
  • Publicly available evaluations analyze data by race, ethnicity, and gender, among other elements such as social determinants of health. SAMHSA strives to share program data whenever possible to promote continuous quality improvement. For example, SAMHSA’s Projects for Assistance in Transition from Homelessness (PATH) funds services for people with serious mental illness (SMI) experiencing homelessness; annual PATH data may be found online. Similarly, comparative state mental health data from block grants can be found on the SAMHSA data page through Uniform Reporting System output tables. Although not an evaluation, CBHSQ, in partnership with SAMHSA Centers, develops annual project profiles for selected discretionary grants covering a set of performance indicators (such as client demographics, changes in social determinants of health, and pre/post changes in substance use) to track and monitor performance. For FY20 data, these profiles will be shared with grantees through the SPARS system. SAMHSA has publicly released the State Opioid Response (SOR) Grants program profile and is conducting internal discussions regarding the release of FY2022 program profiles.
  • With SAMHSA’s Office of Behavioral Health Equity, the agency is in a unique position to be a leader in supporting culturally and linguistically appropriate evaluation for a diverse audience. SAMHSA already shares resources on evidence-based and culturally relevant interventions with the public; see Strategies and Lessons Learned (2011-2020).
2.5 What is the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts? (Example: Evidence Act 315, subchapter II (c)(3)(9))
  • As part of the HHS Evidence and Evaluation Council, all agencies conducted an internal capacity assessment. This assessment was included in the HHS report. In addition, SAMHSA shares resources for evidence-based and culturally relevant interventions – see Strategies and Lessons Learned (2011-2020).
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • SAMHSA does not apply one strategy for all evaluations but employs a variety of models, including performance monitoring and formative, process, and summative evaluations, using primarily quantitative data and also mixed methods when appropriate and available. SAMHSA strives for a balance between the need for collecting data and the desire to minimize grantee data collection burden. For example, in FY21, an evaluation of SAMHSA’s Naloxone Education and Distribution Program used a mixed-methods approach examining qualitative data from key informant interviews and focus groups coupled with SAMHSA’s discretionary grant data collected through the SAMHSA Performance Accountability and Reporting System (SPARS). Another example is a final report for SAMHSA’s Strategic Prevention Framework–Prescription Drug Misuse program (SPF-Rx) that combined several sources of primary and secondary quantitative data (from SAMHSA, CDC, etc.) with interviews, all in response to three primary evaluation questions.
  • SAMHSA is in the process of updating its Program Evaluation SOP. In addition, SAMHSA has developed a draft evaluation plan that includes a dissemination strategy for each of its current evaluation projects recognizing that one size does not fit all. The plan is still under review. 
  • SAMHSA is partnering with the National Institute on Drug Abuse (NIDA) to support the HEALing Communities Study (HCS), which is a research initiative that intends to enhance the evidence base for opioid treatment options. Launched in 2019, HCS aims to test the integration of prevention, overdose treatment, and medication-based treatment in select communities hard hit by the opioid crisis. This comprehensive treatment model will be tested in a coordinated array of settings, including primary care, emergency departments, and other community settings. Findings will establish best practices for integrating prevention and treatment strategies that can be replicated by communities nationwide. 
  • SAMHSA has also supported the National Study on Mental Health (NSMH), which intends to provide national estimates of mental health and substance use disorders among U.S. adults ages 18 to 65. For the first time, the NSMH will include adults living in households across the U.S. as well as in prisons, jails, state psychiatric hospitals, and homeless shelters. Data will be available in 2023.