2021 Federal Standard of Excellence
Use of Evidence in Competitive Grant Programs
Did the agency use evidence of effectiveness when allocating funds from its competitive grant programs in FY21? (Examples: Tiered-evidence frameworks; evidence-based funding set-asides; priority preference points or other preference scoring; Pay for Success provisions)
Score
15
Millennium Challenge Corporation
8.1 What were the agency’s five largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
- MCC awards all of its funds through two competitive grant programs: (1) the compact program ($651.0 million in FY21; eligible grantees: developing countries) and (2) the threshold program ($31.0 million in FY21; eligible grantees: developing countries).
8.2 Did the agency use evidence of effectiveness to allocate funds in the five largest competitive grant programs? (e.g., Were evidence-based interventions/practices required or suggested? Was evidence a significant requirement?)
- For country partner selection, as part of the compact and threshold competitive programs, MCC uses 20 different indicators within the categories of economic freedom, investing in people, and ruling justly to determine country eligibility for program assistance. These objective indicators of a country’s performance are collected by independent third parties.
- When considering granting a second compact, MCC further considers whether countries have (1) exhibited successful performance on their previous compact; (2) improved Scorecard performance during the partnership; and (3) exhibited a continued commitment to further their sector reform efforts in any subsequent partnership. As a result, the MCC Board of Directors applies an even higher standard when selecting countries for subsequent compacts. Per MCC’s Compact Development Guidance (p. 6): “As the results of impact evaluations and other assessments of the previous compact program become available, the partner country must use this data to inform project proposal assessment, project design, and implementation approaches.”
8.3 Did the agency use its five largest competitive grant programs to build evidence? (e.g., requiring grantees to participate in evaluations)
- Per its Policy for Monitoring and Evaluation (M&E), MCC requires independent evaluations of every project to assess progress in achieving outputs and outcomes and program learning based on defined evaluation questions throughout the lifetime of the project and beyond. As described above, MCC publicly releases all these evaluations on its Evidence Platform and uses findings, in collaboration with stakeholders and partner countries, to build evidence in the field so that policymakers in the United States and in partner countries can leverage MCC’s experiences to develop future programming. In line with MCC’s Policy for M&E, MCC projects are required to submit quarterly Indicator Tracking Tables showing progress toward projected targets.
8.4 Did the agency use evidence of effectiveness to allocate funds in any other competitive grant programs (besides its five largest grant programs)?
- MCC uses evidence of effectiveness to allocate funds in all its competitive grant programs as noted above.
8.5 What are the agency’s 1-2 strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?
- MCC’s $540 million Senegal Compact (2010-2015) funded the $170 million Irrigation and Water Resources Management Project to improve the productivity of the agricultural sector in certain agriculture-dependent areas of northern Senegal. The project rehabilitated or built 266 km of irrigation and drainage infrastructure, constructed a 450-hectare perimeter, mapped irrigated land, and trained officials to better administer land. The project was based on the theory that improved irrigation and land rights increase agricultural investment, productivity, and ultimately household income. Five years after the completion of the project, the evaluation found:
- The irrigation infrastructure that the project built and rehabilitated remains in good condition, but routine weed clearance and dredging is not keeping pace with what is needed, which may reduce water available for farming.
- From the evidence collected for this evaluation, MCC learned that large-scale irrigation projects, especially those serving smallholder farmers, may have difficulty clearing MCC's 10% economic rate of return (ERR) hurdle rate (formalized in the note below). However, soft-side interventions, such as farmer trainings, and a strong focus on the market can boost farm incomes and the ERR. MCC is applying this lesson by supporting farmer services in Niger.
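- For context, the hurdle-rate test above can be written as a standard internal-rate-of-return condition. This is a generic formulation for illustration, not MCC's official methodology: the ERR is the discount rate $r^{*}$ that solves

$$\sum_{t=0}^{T} \frac{B_t - C_t}{(1 + r^{*})^{t}} = 0,$$

where $B_t$ and $C_t$ are a project's estimated benefits and costs in year $t$ over an appraisal horizon $T$. A project clears the hurdle when $r^{*} \geq 0.10$.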
8.6 Did the agency provide guidance which makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity-building efforts?
- As described above, MCC develops a Monitoring & Evaluation (M&E) Plan for every grantee, which describes the independent evaluations that will be conducted, the key evaluation questions and methodologies, and the data collection strategies that will be employed. As such, grantees use program funds for evaluation.
- MCC’s Policy for Monitoring and Evaluation stipulates that the “primary responsibility for developing the M&E Plan lies with the MCA [grantee] M&E Director with support and input from MCC’s M&E Lead and Economist. MCC and MCA Project/Activity Leads are expected to guide the selection of the indicators at the process and output levels that are particularly useful for management and oversight of activities and projects.” The M&E policy is intended primarily to guide MCC and partner country staff decisions to utilize M&E effectively throughout the entire program life cycle in order to improve outcomes. All MCC investments also include M&E capacity-building for grantees.
Score
15
U.S. Department of Education
8.1 What were the agency’s five largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
- In FY21, the five largest competitive grant programs were:
- TRIO ($1.097 billion; eligible grantees: institutions of higher education, public and private organizations);
- Charter Schools Program ($440 million; eligible grantees: varies by program, including state entities, charter management organizations, public and private entities, and local charter schools);
- GEAR UP ($368 million; eligible grantees: state agencies; partnerships that include institutions of higher education (IHEs) and local educational agencies (LEAs));
- Teacher and School Leader Incentive Program (TSL) ($200 million; eligible grantees: local educational agencies; partnerships between state and local educational agencies; and partnerships between nonprofit organizations and local educational agencies); and
- Comprehensive Literacy Development Grants ($192 million; eligible grantees: state educational agencies).
8.2 Did the agency use evidence of effectiveness to allocate funds in the five largest competitive grant programs? (e.g., Were evidence-based interventions/practices required or suggested? Was evidence a significant requirement?)
- ED uses evidence of effectiveness when making awards in its largest competitive grant programs.
- The vast majority of TRIO funding in FY21 was used to support continuation awards to grantees that were successful in prior competitions, which awarded competitive preference priority points for projects proposing strategies supported by moderate evidence of effectiveness (Upward Bound and Upward Bound Math and Science) or by evidence that demonstrates a rationale (Student Support Services). Additionally, ED will make new awards under the Talent Search and Educational Opportunity Centers programs. These competitions provide points for applicants that propose a project with a key component in its logic model that is informed by research or evaluation findings that suggest it is likely to improve relevant outcomes.
- Under the Charter Schools Program, ED generally requires or encourages applicants to support their projects through logic models; however, applicants are not expected to base their applications on rigorous evidence. Within the CSP program, the Grants to Charter School Management Organizations for the Replication and Expansion of High-Quality Charter Schools (CMO Grants) competition supports charter schools with a previous track record of success.
- For the 2021 competition for GEAR UP State awards, ED used a competitive preference priority for projects implementing activities supported by moderate evidence of effectiveness. For the 2021 competition for GEAR UP Partnership awards, ED used a competitive preference priority for projects implementing activities supported by promising evidence.
- The TSL statute requires applicants to provide a description of the rationale for their project and describe how the proposed activities are evidence-based, and grantees are held to these standards in the implementation of the program.
- The Comprehensive Literacy Development (CLD) statute requires that grantees provide subgrants to local educational agencies that conduct evidence-based literacy interventions. ESSA requires ED to give priority to applicants that meet the higher evidence levels of strong or moderate evidence and, in cases where significant evidence-based literacy strategies or interventions may not be available (for example, in early childhood education), to encourage applicants to demonstrate a rationale.
8.3 Did the agency use its five largest competitive grant programs to build evidence? (e.g., requiring grantees to participate in evaluations)
- The Evidence Leadership Group (ELG) advises program offices on ways to incorporate evidence in grant programs through encouraging or requiring applicants to propose projects that are based on research and by encouraging applicants to design evaluations for their proposed projects that would build new evidence.
- ED’s grant programs require some form of an annual evaluation report to build evidence, demonstrate performance improvement, and account for the use of funds. For examples, see the annual performance reports of TRIO, the Charter Schools Program, and GEAR UP. ESSA requires a national evaluation of the Teacher and School Leader Incentive Program. The Comprehensive Literacy Development Grant requires evaluation reports. In addition, IES is currently conducting rigorous evaluations to identify successful practices in TRIO-Educational Opportunity Centers and GEAR UP. In FY19, IES released a rigorous evaluation of practices embedded within TRIO-Upward Bound that examined the impact of enhanced college advising practices on students’ pathway to college.
8.4 Did the agency use evidence of effectiveness to allocate funds in any other competitive grant programs (besides its five largest grant programs)?
- The Education Innovation and Research (EIR) program supports the creation, development, implementation, replication, and taking to scale of entrepreneurial, evidence-based, field-initiated innovations designed to improve student achievement and attainment for high-need students. The program uses three evidence tiers to allocate funds based on evidence of effectiveness, with larger awards given to applicants who can demonstrate stronger levels of prior evidence and produce stronger evidence of effectiveness through a rigorous, independent evaluation. The FY21 competition included checklists and PowerPoints to help applicants clearly understand the evidence requirements.
- ED incorporates the evidence standards established in EDGAR as priorities and selection criteria in many competitive grant programs.
8.5 What are the agency’s 1-2 strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?
- The EIR program supports the creation, development, implementation, replication, and scaling up of evidence-based, field-initiated innovations designed to improve student achievement and attainment for high-need students. IES released The Investing in Innovation Fund: Summary of 67 Evaluations, which can be used to inform efforts to move to more effective practices. ED is exploring the results to determine what lessons learned can be applied to other programs.
8.6 Did the agency provide guidance which makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity-building efforts?
- In 2016, ED released non-regulatory guidance to provide state educational agencies, local educational agencies (LEAs), schools, educators, and partner organizations with information to assist them in selecting and using “evidence-based” activities, strategies, and interventions, as defined by ESSA, including carrying out evaluations to “examine and reflect” on how interventions are working. However, the guidance does not specify that federal competitive funds can be used to conduct such evaluations. Frequently, though, programs do include a requirement to evaluate the grant during and after the project period.
Score
10
U.S. Agency for International Development
8.1 What were the agency’s five largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
- In FY20, the five largest competitive grant programs were:
- International Disaster Assistance ($4.40 billion; eligible grantees: any U.S. or non-U.S. organization, individual, nonprofit, or for-profit entity that meets the requirements described in ADS 303);
- Migration and Refugee Assistance ($3.43 billion; eligible grantees: any U.S. or non-U.S. organization, individual, nonprofit, or for-profit entity that meets the requirements described in ADS 303);
- Development Assistance ($3.4 billion; eligible grantees: any U.S. or non-U.S. organization, individual, nonprofit, or for-profit entity that meets the requirements described in ADS 303);
- Global Health (USAID) ($3.16 billion; eligible grantees: any U.S. or non-U.S. organization, individual, nonprofit, or for-profit entity that meets the requirements described in ADS 303);
- Economic Support Fund ($3.05 billion; eligible grantees: any U.S. or non-U.S. organization, individual, nonprofit, or for-profit entity that meets the requirements described in ADS 303).
- See the U.S. Foreign Assistance Reference Guide for more information on each of these accounts. More information can also be found in the FY2021 Congressional Budget Justification (pages 2-3, column 4). USAID generally does not limit eligibility when awarding grants and cooperative agreements; eligibility may be restricted for an individual notice of funding opportunity in accordance with the procedures in ADS 303.
8.2 Did the agency use evidence of effectiveness to allocate funds in the five largest competitive grant programs? (e.g., Were evidence-based interventions/practices required or suggested? Was evidence a significant requirement?)
- USAID is committed to using evidence of effectiveness in all of its competitive contracts, cooperative agreements, and grants, which comprise the majority of the Agency’s work. USAID’s Program Cycle Policy ensures that evidence from monitoring, evaluation, and other sources informs funding decisions at all levels, including during strategic planning, project and activity design, procurement, and implementation.
- USAID’s Senior Obligation Alignment Review (SOAR) helps to ensure the Agency is using evidence to design and approve funding for innovative approaches to provide long-term sustainable outcomes and provides oversight on the use of grant or contract mechanisms and proposed results.
- USAID weights past performance as 30% of the non-cost evaluation criteria for contracts. As part of determining grant awards, USAID’s policy requires an applicant to provide a list of all its cost-reimbursement contracts, grants, or cooperative agreements involving similar or related programs during the past three years. The grant Selection Committee chair must validate the applicant’s past performance reference information against existing evaluations to the maximum extent possible and make a reasonable, good-faith effort to contact all references to verify or corroborate how well an applicant performed.
- For assistance awards, as required by 2 CFR 200, USAID also conducts a risk assessment to review an organization’s ability to meet the goals and objectives outlined by the agency. Internal procedures for conducting the risk assessment are found in ADS 303.3.9, with guidance on how to look for evidence of effectiveness from potential grantees. Per the ADS, this can be done through reviewing past performance and evaluation/performance reports such as the Contractor Performance Assessment Reporting System (CPARS).
- Even though there is no federal requirement for assistance awards (as there is with CPARS for contracts), USAID also assesses grantee past performance for use when making funding decisions (detailed in ADS 303, p. 66). Per USAID’s ADS 303 policy, before making an award of any grant or cooperative agreement, the Agreement Officer must state in the memorandum of negotiation that the applicant has a satisfactory record of performance. When making the award, the Agreement Officer may consider withholding authority to proceed to the next phase of a grant until provided evidence of acceptable performance within a given period.
- GAO recognized USAID in its September 5, 2018 report, Managing for Results: Government-wide Actions Needed to Improve Agencies’ Use of Performance Information in Decision Making (GAO-18-609SP), as one of four agencies (out of 23 surveyed) with proven practices for using performance information. USAID was also the only CFO Act agency with a statistically significant increase in the Agency Use of Performance Information Index since 2007.
8.3 Did the agency use its five largest competitive grant programs to build evidence? (e.g., requiring grantees to participate in evaluations)
- Grantees report on the progress of activities through documentation such as Activity Monitoring, Evaluation, and Learning (MEL) Plans, periodic performance reporting, and external and internal evaluation reports (if applicable). These reports help USAID remain transparent and accountable and also help the Agency build evidence of what does and does not work in its interventions. Any internal evaluation undertaken by a grantee must also be provided to USAID for learning purposes. All datasets compiled under USAID-funded projects, activities, and evaluations are to be submitted by grantees to the USAID Development Data Library. All final evaluation reports must also be submitted to the Agency’s Development Experience Clearinghouse (DEC) unless they receive a waiver of USAID’s public dissemination requirements; such waivers are rare and require the concurrence of the Director of the Office of Learning, Evaluation, and Research.
8.4 Did the agency use evidence of effectiveness to allocate funds in any other competitive grant programs (besides its five largest grant programs)?
- USAID is actively engaged in utilizing evidence of effectiveness to allocate funds. For example, Development Innovation Ventures (DIV) uses a tiered funding approach to find, test, and scale evidence-based innovations. DIV’s grants include: Stage 1 for piloting (up to $200,000); Stage 2 for testing (up to $1,500,000); Stage 3 for scaling (up to $15,000,000); and “evidence grants” (up to $1,500,000) for research to determine the causal impact of certain interventions (the tier caps are illustrated in the sketch after this list). In particular, for Stage 2 grants, DIV requires evidence of impact that must be causal and rigorous: the grantee must either have rigorous underlying evidence already established, use this funding to run an evaluation with an evaluation partner, or run an evaluation with its own funding during the grant period. There must also be significant demonstrated demand for the innovation.
- DIV’s evaluation criteria for its funding are based on its three core principles, as further outlined in its annual grant solicitation (DIV Annual Program Statement): (1) Evidence of Impact; (2) Cost-Effectiveness; and (3) Pathways to Scale. DIV’s expectations vary by stage, but every awardee must report against a set of pre-negotiated key performance indicators. Most DIV grants are fixed amount awards, a type of federal grant instrument tailor-made for pay-for-results approaches. Fixed amount awards pay for milestones achieved, which emphasizes performance (not just compliance) and reduces some administrative burden for all parties (see 2 CFR 200.201(b)).
- DIV supports innovative solutions across all countries and development sectors in which USAID operates, including education, agriculture, water, energy, and economic development. Since 2010, DIV has provided more than $149 million for 225 grants across 76 countries, reaching more than 55 million beneficiaries. Based on research announced in October 2020 and led by Nobel Prize-winning economist and DIV advisor Dr. Michael Kremer, the early portfolio of DIV grants (covering 2010-12) has produced $17 in social benefits per dollar spent by USAID.
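- As a rough illustration of the tiered structure described above, the sketch below encodes the published stage caps from the APS summary; the dictionary, helper function, and names are hypothetical illustrations, not USAID code.

```python
# Illustrative sketch of DIV's tiered funding caps. The dollar figures come
# from the APS summary above; this helper and its names are hypothetical.

DIV_STAGE_CAPS = {
    "stage_1_pilot": 200_000,      # Stage 1: piloting
    "stage_2_test": 1_500_000,     # Stage 2: rigorous testing of causal impact
    "stage_3_scale": 15_000_000,   # Stage 3: transitioning to scale
    "evidence_grant": 1_500_000,   # research on causal impact
}

def within_cap(stage: str, requested_usd: int) -> bool:
    """Return True if a requested award fits the cap for the given stage."""
    if stage not in DIV_STAGE_CAPS:
        raise ValueError(f"Unknown DIV stage: {stage!r}")
    return requested_usd <= DIV_STAGE_CAPS[stage]

# Example: a $1.2M Stage 2 testing request is within the $1.5M cap.
print(within_cap("stage_2_test", 1_200_000))  # True
```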
8.5 What are the agency’s 1-2 strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?
- USAID’s Development Innovation Ventures (DIV) specifically emphasizes rigorous evidence for causal impact in its official grant solicitation (DIV Annual Program Statement (APS), page 4): “DIV supports the piloting and rigorous testing of innovations and helps those innovations that have successfully demonstrated impact to transition to scale. DIV looks for different indicators of impact depending on the stage of financing that the applicant is seeking and on whether the innovation has a public or commercial pathway to scale.”
- Fenix offers expandable, lease-to-own solar home systems (SHS) financed through ultra-affordable installments over mobile money. In 2016, DIV partnered with USAID’s Scaling Off-Grid Energy team to support Fenix’s expansion from Uganda into Zambia, a nascent and largely underserved market. By the end of the DIV award, Fenix was the leading SHS company in Zambia. In 2017, Fenix was acquired by ENGIE, a multinational electric utility company, and expanded into four new countries: Benin, Côte d’Ivoire, Nigeria, and Mozambique. Fenix has delivered clean, affordable energy to 3.5 million people across six countries in Africa.
- EarthEnable is a social enterprise that has developed durable adobe floor replacements for traditional dirt floors. EarthEnable flooring minimizes exposure to bacteria and parasites, particularly for children, and is 70% less expensive than other clean-floor alternatives. Early investments by DIV supported EarthEnable in testing different business models and scaling up operations, expanding its geographic reach and enabling it to serve lower-income households. To date, EarthEnable has replaced more than 5,000 dirt floors and served over 20,000 people in Rwanda and Uganda.
- In 2013, DIV funded a randomized controlled trial to evaluate the causal impact of Teaching at the Right Level (TaRL), a program implemented by Pratham, an Indian NGO. While progress has been made in helping more children attend school, millions of students are not actually learning at their grade level. In response, TaRL helps lagging students catch up by teaching to their skill level rather than to their age or grade. The approach divides children (generally in Grades 3 to 5) into groups based on learning needs rather than age or grade, dedicates time to basic skills rather than focusing solely on the curriculum, and regularly assesses student performance rather than relying only on end-of-year examinations. In 2017, DIV further partnered with J-PAL Africa, UNICEF, USAID/Zambia, and the Zambian Ministry of General Education to scale TaRL across Zambia. To date, DIV’s support has helped catalyze more than $25 million in additional funding beyond USAID to scale the TaRL model to 12 countries across Africa.
8.6 Did the agency provide guidance which makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity-building efforts?
- USAID’s Program Cycle Policy states that “[f]unding may be dedicated within a project or activity design for implementing partners to engage in an internal evaluation for institutional learning or accountability purposes.”
Score
7
Administration for Children and Families (HHS)
8.1 What were the agency’s five largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
- In FY21, the five largest competitive grant programs were:
- Head Start ($10.7 billion; eligible applicants: public or private non-profit organizations, including community-based and faith-based organizations, or for-profit agencies);
- Unaccompanied Children Services ($1.3 billion; eligible applicants: private non-profit and for-profit agencies);
- Preschool Development Grants ($275 million; eligible applicants: states);
- Healthy Marriage Promotion and Responsible Fatherhood Grants ($148.8 million; eligible applicants: states, local governments, tribal entities, and community-based organizations, both for-profit and non-profit, including faith-based organizations); and
- Transitional Living Program Runaway and Homeless Youth ($116.8 million; eligible applicants: community-based public and private organizations).
8.2 Did the agency use evidence of effectiveness to allocate funds in the five largest competitive grant programs? (e.g., Were evidence-based interventions/practices required or suggested? Was evidence a significant requirement?)
- ACF reviewed performance data from the 2015 cohort of Healthy Marriage and Responsible Fatherhood (HMRF) grantees (using the nFORM system) to set priorities, interests, and expectations for HMRF grants awarded in 2020. For example, because nFORM data indicated that organizations were more likely to meet enrollment targets and engage participants when they focused on implementing one program model, ACF’s 2020 FOA, which led to 113 HMRF grant awards in September 2020, mentioned specific interest in grantee projects “that implement only one specific program model designed for one specific youth service population” (p. 12).
- In its award decisions, ACF gave “preference to those applicants that were awarded a Healthy Marriage or Responsible Fatherhood grant between 2015 and 2019, and that (a) [were] confirmed by ACF to have met all qualification requirements under Section IV.2, The Project Description, Approach, Organizational Capacity of this FOA; and (b) [were] confirmed by ACF to have received an acceptable rating on their semi-annual grant monitoring statements during years three and four of the project period. [ACF gave] particular consideration to applicants that: (1) designed and successfully implemented, through to end of 2019, an impact evaluation of their program model, and that the impact evaluation was a fair impact test of their program model and that was not terminated prior to analysis; or (2) successfully participated in a federally-led impact evaluation” (p. 17).
- ACF also evaluated HMRF grant applicants based upon their capacity to conduct a local impact evaluation and their proposed approach (for applicants required or electing to conduct local evaluations); their ability to provide a reasonable rationale and/or research base for the proposed program model(s) and curriculum(a); and their inclusion of a Continuous Quality Improvement Plan clearly describing the organizational commitment to data-driven approaches for identifying areas for program improvement, testing potential improvements, and cultivating a culture and environment of learning and improvement, among other things. Further, the Compliance and Performance (CAPstone) reviews entail a thorough review of each grantee’s performance. The Office of Family Assistance (OFA) sends a formal set of questions about grantee performance that the grant program specialists and TA providers answer ahead of time; OFA, OPRE, and the TA provider then convene meetings where each grantee’s performance is discussed at length using nFORM data and the answers to those questions.
- The Head Start Designation Renewal System (DRS) determines whether Head Start/Early Head Start grantees are delivering high-quality comprehensive services to the children and families they serve. These determinations are based on seven conditions, one of which looks at how Head Start classrooms within programs perform on the Classroom Assessment Scoring System (CLASS), an observation-based measure of the quality of teacher-child interactions. When the DRS deems grantees to be underperforming, grantees are denied automatic renewal of their grant and must apply for funding renewal through a standard open competition. Under the most recent funding opportunity language, grantees re-competing for Head Start funds must include a description of any violations, such as deficiencies, areas of non-compliance, and/or audit findings, in their record of past performance (p. 28). Applicants may describe the actions they have taken to address these violations. According to Head Start policy, in competitions to replace or potentially replace a current grantee, the responsible HHS official will give priority to applicants that have demonstrated capacity in providing effective, comprehensive, and well-coordinated early childhood education and development services and programs (see section 1304.20: Selection among applicants).
- ACF manages the Runaway and Homeless Youth Training and Technical Assistance Center (RHYTTAC), the national training and technical assistance entity that provides resources and direct assistance to the Runaway and Homeless Youth (RHY) grantees and other youth serving organizations eligible to receive RHY funds. RHYTTAC disseminates information about and supports grantee implementation of high-quality, evidence-informed, and evidence-based practices. In the most recent RHYTTAC grant award, applicants were evaluated based on their strategy for tracking RHY grantee uptake and implementation of evidence-based or evidence-informed strategies. Additionally, as described in the FY21 Transitional Living Program funding opportunity announcement, successful applicants must train all staff and volunteers on evidence-informed practices and provide case management services that include the development of service and treatment plans employing evidence-informed strategies (p. 4 & 47).
- ACF also evaluates Unaccompanied Children Services, Preschool Development Grants, and Runaway and Homeless Youth grant applicants based upon: their proposed program performance evaluation plan; how their data will contribute to continuous quality improvement; and their demonstrated experience with comparable program evaluation, among other factors.
8.3 Did the agency use its five largest competitive grant programs to build evidence? (e.g., requiring grantees to participate in evaluations)
- ACF’s template (see p. 14 in Attachment C) for competitive grant announcements includes standard language that funding opportunity announcement drafters may select to require grantees to either (1) collect performance management data that contributes to continuous quality improvement and is tied to the project’s logic model, or (2) conduct a rigorous evaluation for which applicants must propose an appropriate design specifying research questions, measurement, and analysis.
- As a condition of award, Head Start grantees are required to participate fully in ACF-sponsored evaluations, if selected to do so. As such, ACF has an ongoing research portfolio that is building evidence in Head Start. Research sponsored through Head Start funding over the past decade has provided valuable information not only to guide program improvement in Head Start itself, but also to guide the field of early childhood programming and early childhood development. Dozens of Head Start programs have collaborated with researchers in making significant contributions in terms of program innovation and evaluation, as well as the use of systematic data collection, analysis and interpretation in program operations.
- ACF’s 2020 Healthy Marriage and Responsible Fatherhood (HMRF) grants established required evidence activities by scope of grantee services (p. 4). For example, grantees providing large-scope services (requesting funding between $1M and $1.5M) “must propose a rigorous impact evaluation (i.e., randomized-controlled trial (RCT) or high-quality, quasi-experimental design (QED) study)…and must allocate at least 15%, but no more than 20%, of their total annual funding for evaluation” (p. 19); a brief illustrative check of this set-aside follows this list. Regardless of their scope of services, all 2020 HMRF grantees must plan for and carry out continuous quality improvement activities (p. 18) and conduct a local evaluation (p. 18) or participate in a federally led evaluation or research effort (p. 22). ACF has an ongoing research portfolio building evidence related to Strengthening Families, Healthy Marriage, and Responsible Fatherhood, and has conducted randomized controlled trials with grantees in each funding round of these grants.
- The 2003 Reauthorization of the Runaway and Homeless Youth Act called for a study of long-term outcomes for youth who are served through the Transitional Living Program (TLP). In response, ACF is sponsoring a study that will capture data from youth at program entry and at intermediate- and longer-term follow-up points after program exit and will assess outcomes related to housing, education, and employment. ACF is also sponsoring a process evaluation of the 2016 Transitional Living Program Special Population Demonstration Project.
- Additionally, Unaccompanied Children Services (p. 33), Preschool Development Grants (p. 30), and Runaway and Homeless Youth (p. 24) grantees are required to develop a program performance evaluation plan.
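- As a hedged arithmetic sketch of the HMRF evaluation set-aside described above (15-20% of total annual funding for large-scope grantees), the check below is illustrative only; the function name and example figures are assumptions, not ACF tooling.

```python
# Hypothetical check of the 15-20% HMRF evaluation set-aside described above.

def evaluation_budget_ok(total_annual_usd: float, eval_usd: float) -> bool:
    """True if the evaluation budget falls within the required 15-20% band."""
    share = eval_usd / total_annual_usd
    return 0.15 <= share <= 0.20

# Example: a $1.2M large-scope grantee budgeting $210,000 (17.5%) complies.
print(evaluation_budget_ok(1_200_000, 210_000))  # True
```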
8.4 Did the agency use evidence of effectiveness to allocate funds in any other competitive grant programs (besides its five largest grant programs)?
- ACF’s Personal Responsibility Education Program includes three individual discretionary grant programs that fund programs exhibiting evidence of effectiveness, innovative adaptations of evidence-based programs, and promising practices that teach youth about abstinence and contraception to prevent pregnancy and sexually transmitted infections.
- To receive funding through ACF’s Sexual Risk Avoidance Education (SRAE) program, applicants must cite evidence published in a peer-reviewed journal and/or from a randomized controlled trial or quasi-experimental design study to support their chosen interventions or models.
8.5 What are the agency’s 1-2 strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?
- As mentioned above, ACF is conducting a multi-pronged evaluation of the Health Profession Opportunity Grants (HPOG) Program. Findings from the first cohort of HPOG grants influenced the funding opportunity announcement for the second round of HPOG (HPOG 2.0) funding. ACF used findings from the impact evaluation of the first cohort of HPOG grants to provide insights to the field about which HPOG program components are associated with stronger participant outcomes. For example, based on the finding that many participants engaged in short-term training for low-wage, entry-level jobs, the HPOG 2.0 FOA more carefully defined the career pathways framework, described specific strategies for helping participants progress along a career pathway, and identified and defined key HPOG education and training components. Applicants were required to more clearly describe how their program would support career pathways for participants. Based on analysis indicating limited collaboration with healthcare employers, the HPOG 2.0 FOA required applicants to demonstrate the use of labor market information, consult with local employers, and describe their plans for employer engagement. The HPOG 2.0 FOA also placed more emphasis on the importance of providing basic skills education and assessing barriers to make the programs accessible to clients most prepared to benefit, based on the finding that many programs were screening out applicants with low levels of basic literacy, reading, and numeracy skills.
- ACF’s Personal Responsibility Education Innovative Strategies Program (PREIS) grantees must conduct independent evaluations of their innovative strategies for the prevention of teen pregnancy, births, and STIs, supported by ACF training and technical assistance. These rigorous evaluations are designed to meet the HHS Teen Pregnancy Prevention Evidence-Based Standards and are expected to generate lessons learned so that others can benefit from these strategies and innovative approaches.
8.6 Did the agency provide guidance which makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity-building efforts?
- ACF’s template (see p. 14 in Attachment C) for competitive grant announcements includes standard language instructing grantees to conduct evaluation efforts. Program offices may use this template to require grantees to collect performance data or conduct a rigorous evaluation. Applicants are instructed to include third-party evaluation contracts in their proposed budget justifications.
- As noted in 8.3, ACF’s 2020 Healthy Marriage and Responsible Fatherhood (HMRF) grants established required evidence activities by scope of grantee services (p. 4), including the requirement that large-scope grantees allocate 15-20% of their total annual funding for evaluation (p. 19), and that all grantees carry out continuous quality improvement activities (p. 18) and conduct a local evaluation (p. 18) or participate in a federally led evaluation or research effort (p. 22).
- ACF’s 2018 Preschool Development Grants funding announcement notes that “it is intended that States or territories will use a percentage of the total amount of their [renewal] grant award during years two through four to conduct the proposed process, cost, and outcome evaluations, and to implement a data collection system that will allow them to collect, house, and use data on the populations served, the implementation of services, the cost of providing services, and coordination across service partners.”
- ACF’s rules (section 1351.15) allow Runaway and Homeless Youth grant awards to be used for “data collection and analysis.”
- Regional Partnership Grants (RPG) (p. 1) require a minimum of 20% of grant funds to be spent on evaluation elements. ACF has supported the evaluation capacity of RPG grantees by providing technical assistance for data collection, performance measurement, and continuous quality improvement; implementation of the cross-site evaluation; support for knowledge dissemination; and provision of group TA via webinars and presentations.
- Community Collaboratives to Strengthen and Preserve Families (CCSPF) grants (p. 7) require a minimum of 10% of grant funds to be used on data collection and evaluation activities. ACF has supported the evaluation capacity of CCSPF grantees by providing technical assistance for developing research questions, methodologies, and process and outcome measures; implementing grantee-designed evaluations and continuous quality improvement activities; analyzing evaluation data; disseminating findings; and supporting data use in project and organizational decision-making processes.
- ACF also provides evaluation technical assistance to:
- Support grantees participating in federal evaluations (e.g., projects supporting grantees from Health Profession Opportunity Grants 2.0 and Tribal Health Profession Opportunity Grants 2.0); and
- Support grantees who are conducting their own local evaluations (e.g., projects supporting Healthy Marriage and Responsible Fatherhood grantees, Personal Responsibility Education Program grantees, and YARH grantees).
Score
13
AmeriCorps
8.1 What were the agency’s five largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
- In FY21, AmeriCorps’s two competitive grant programs were:
- AmeriCorps State and National program (excluding State formula grant funds) ($253,704,774; eligible grantees: nonprofit organizations, state governments, tribal governments, local governments, institutions of higher education);
- Senior Corps RSVP program ($51,355,000; eligible grantees: nonprofit organizations, local governments).
- The Social Innovation Fund (SIF) grants were integrated into the Office of Research and Evaluation in FY19.
8.2 Did the agency use evidence of effectiveness to allocate funds in the five largest competitive grant programs? (e.g., Were evidence-based interventions/practices required or suggested? Was evidence a significant requirement?)
- The AmeriCorps State and National grants program (excluding State formula grant funds) allocated up to 44 out of 100 points in FY21 to organizations that submitted applications supported by performance and evaluation data. Specifically, up to 24 points can be assigned to applications with theories of change supported by relevant research literature, program performance data, or program evaluation data, and up to 20 points can be assigned for an applicant’s incoming level of evidence and the quality of that evidence (a hypothetical sketch of this point structure follows this list). Further, in 2020 AmeriCorps prioritized the funding of specific education, economic opportunity, and health interventions with moderate or strong levels of evidence.
- Since AmeriCorps implemented a scoring process that assigns specific points for level of evidence, the percentage of grant dollars allocated to the strong, moderate, preliminary, and no-evidence categories has shifted over time (see chart below): more FY20 grant dollars were awarded to applicants with strong and moderate levels of evidence for proposed interventions, and fewer grant dollars were awarded to applicants with little to no evidence of effectiveness. Notably, 68% of FY21 grant dollars, versus 51% of FY20 grant dollars, were invested in interventions with a strong or moderate evidence base.
- In FY18, Senior Corps RSVP embedded evidence into its grant renewal processes by offering supplemental funding, “augmentation grants,” to grantees interested in deploying volunteers to serve in evidence-based programs. More than $3.3 million of Senior Corps program dollars were allocated over three years toward new evidence-based programming augmentations. Grantees will be operating with their augmentations through fiscal year 2021.
- In a survey completed in FY20, Senior Corps grantees reported that 4,043 volunteer stations and 20,320 volunteers (10% of all volunteers) were engaged in evidence-based programming.
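- The sketch below illustrates the point structure described above: up to 24 points for a supported theory of change plus up to 20 points for incoming evidence level and quality. The 24-, 20-, and 44-point maxima come from the text; how the 20 evidence points break down by tier is an assumption for illustration, not AmeriCorps's actual rubric.

```python
# Hypothetical sketch of the FY21 AmeriCorps State and National evidence
# scoring. The 24/20/44-point maxima are from the text above; the per-tier
# split of the 20-point evidence band is assumed for illustration only.

MAX_THEORY_OF_CHANGE_POINTS = 24  # theory of change supported by research/data
ASSUMED_EVIDENCE_TIER_POINTS = {  # assumed split of the 20-point evidence band
    "strong": 20,
    "moderate": 15,
    "preliminary": 8,
    "none": 0,
}

def evidence_score(theory_points: int, tier: str) -> int:
    """Combine both evidence components, capped at the 44-point maximum."""
    if not 0 <= theory_points <= MAX_THEORY_OF_CHANGE_POINTS:
        raise ValueError("theory_points must be between 0 and 24")
    return min(theory_points + ASSUMED_EVIDENCE_TIER_POINTS[tier], 44)

# Example: a well-supported theory of change (22 points) plus moderate
# incoming evidence (15 points) yields 37 of the 44 available points.
print(evidence_score(22, "moderate"))  # 37
```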
8.3 Did the agency use its five largest competitive grant programs to build evidence? (e.g., requiring grantees to participate in evaluations)
- AmeriCorps State and National grantees are required to evaluate their programs as part of the grant’s terms and conditions. Grantees receiving more than $500,000 are required to conduct an independent, external evaluation (see p. 23 of the FY21 notice of funding for a description of these requirements).
8.4 Did the agency use evidence of effectiveness to allocate funds in any other competitive grant programs (besides its five largest grant programs)?
- AmeriCorps administers only two competitive grant programs, described above.
8.5 What are the agency’s 1-2 strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?
- AmeriCorps has summarized the accomplishments of its competitive grant programs in a series of research briefs that describe the core components of effective interventions in the areas of education, economic opportunity, and health. The education brief was used to justify the FY19 funding priority for evidence-based interventions in the AmeriCorps State and National competition. All interventions described in these briefs illustrate how AmeriCorps competitive grant recipients have achieved better outcomes and built knowledge about what works. The agency released four return-on-investment studies, all of which had positive findings.
- College Possible’s College Access Program (Return on Investment Study: College Possible’s College Access Program | AmeriCorps)
- Community Technology Empowerment Project (CTEP) (Return on Investment Study: Community Technology Empowerment Project | AmeriCorps)
- Minnesota Reading Corps – Kindergarten (MRC) (Return on Investment Study: Minnesota Reading Corps – Kindergarten | AmeriCorps)
- AmeriCorps Seniors Foster Grandparent Program and Senior Companion Program (Return on Investment Study: AmeriCorps Seniors Foster Grandparent Program and Senior Companion Program | AmeriCorps)
8.6 Did the agency provide guidance which makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity-building efforts?
- AmeriCorps State and National grantees, including city, county, tribal, and state governments, are required to use their AmeriCorps funds to evaluate their programs. In FY21, AmeriCorps awarded $8.5 million through the Commission Investment Fund, which supports State Commissions that are typically housed within state government; approximately one third of these grants will focus on building the capacity of State Commissions and their grantees to collect and use performance and evaluation data.
- AmeriCorps’s Evidence Exchange includes a suite of scaling products to help grantees replicate evidence-based interventions.
Score
8
U.S. Department of Labor
8.1 What were the agency’s five largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
- In FY21, the five largest competitive grant programs were:
- Senior Community Service Employment Program (Approximately $405 million in continuation funds: eligible applicants are non-profit organizations, federal agencies, and tribal organizations)
- State Apprenticeship Expansion, Equity and Innovation Grants (Approximately $99 million: eligible applicants are states)
- National Farmworker Jobs Program (Approximately $94 million in continuation funds: eligible applicants are entities with an understanding of the problems of eligible migrant and seasonal farmworkers; a familiarity with the agricultural industries and the labor market needs of the proposed service area; and the ability to demonstrate a capacity to administer and deliver a diversified program of workforce investment activities)
- YouthBuild (Approximately $89 million: eligible applicants are public and private non-profit agencies)
- Pathway Home (Approximately $61 million: eligible applicants are non-profit organizations with 501(c)(3) status; public institutions of higher education; nonprofit postsecondary education institutions; state or local governments; any Indian or Native American entity eligible for grants under section 166 of WIOA; and for-profit businesses and business-related nonprofit organizations)
- During the summer of 2021, ETA held a series of stakeholder listening sessions focused on grant equity in an effort to establish a baseline understanding of potential barriers to greater equity in the mix of grant applicants, peer reviewers, awardees, and communities served. This information will help inform future grantmaking decisions.
8.2 Did the agency use evidence of effectiveness to allocate funds in the five largest competitive grant programs? (e.g., Were evidence-based interventions/practices required or suggested? Was evidence a significant requirement?)
- The Employment and Training Administration (ETA) awards YouthBuild applicants points based on past performance, viewing these metrics as important indicators of successful career outcomes for youth. As a pre-apprenticeship program that prepares young people for the construction industry and other in-demand industries, YouthBuild supports the evidence-based national strategy of apprenticeship. Other competitive grant programs that score applications on past performance and the use of evidence-informed strategies are the Senior Community Service Employment Program and the National Farmworker Jobs Program.
8.3 Did the agency use its five largest competitive grant programs to build evidence? (e.g., requiring grantees to participate in evaluations)
- All five of DOL’s largest grant programs may be involved in evaluations designed by the Chief Evaluation Office (CEO) and the relevant DOL agencies. In each case DOL required or encouraged (through language in the funding announcement and proposal review criteria) grantees to use evidence-based models or strategies in grant interventions and/or to participate in an evaluation, especially to test new interventions that theory or research suggest are promising.
- For example, DOL is conducting an evaluation of the Pathway Home grant program. This evaluation will build knowledge about the grant models and include the development of a feasibility and design options paper for implementation and impact evaluations. Additionally, DOL has recently launched a multi-year implementation study of the Senior Community Service Employment Program, as well as other workforce programs for older workers, to build the evidence base on these programs and identify future research options. The contract includes options for more rigorous evaluations as appropriate.
8.4 Did the agency use evidence of effectiveness to allocate funds in any other competitive grant programs (besides its five largest grant programs)?
- DOL requires demonstrated effectiveness when allocating funds as well as a commitment to building new evidence as a condition of receiving funds; both carry equal weight because many DOL-funded program areas lack a body of evidence sufficient to support only approaches that are already evidence-based. For example, among recent Employment and Training Administration (ETA) competitive grant programs, this has involved requiring: (1) a demonstration that an approach is evidence-based or promising for receipt of funds (i.e., the Reentry Funding Opportunity Announcement) or for the potential to receive additional funds (i.e., TechHire); (2) an independent third-party local or grantee evaluation with priority incentives for rigorous designs (e.g., tiered funding, scoring priorities, bonus scoring for evidence-based interventions or multi-site rigorous tests); or (3) full participation in an evaluation as well as rigorous grantee (or local) evaluations (i.e., the American Apprenticeship Initiative and the Strengthening Community College Training Grants). Additionally, applicants for the Bureau of International Labor Affairs’ (ILAB) competitive funding opportunities are required to conduct and/or participate in evaluations as a condition of award.
8.5 What are the agency’s 1-2 strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?
- In 2015, DOL funded an evaluation of the 36-month Linking Employment Activities Pre-Release (LEAP) program, which included an implementation study of LEAP pilot programs that provided jail-based American Job Centers (AJCs) for individuals preparing to re-enter society after time in jail. The findings of the evaluation identified many promising practices for offering both pre- and post-release services and were published in 2018 (see the Final Report and Issue Brief Compendium). In 2020, DOL funded a 42-month Pathway Home pilot project and accompanying evaluation that builds on lessons learned from the LEAP program by providing workforce services to incarcerated individuals pre- and post-release. For example, the Pathway Home grant requirement that participants maintain the same caseworker pre- and post-release was suggested as a promising practice in the LEAP implementation study.
- DOL funded a national evaluation of the Trade Adjustment Assistance Community College and Career Training (TAACCCT) grant program, a $1.9 billion initiative consisting of four rounds of grants from 2011 to 2018. The grants were awarded to institutions of higher education (mainly community colleges) to build their capacity to provide workforce education and training programs. The implementation study assessed the grantees’ implementation of strategies to better connect and integrate education and workforce systems, address employer needs, and transform training programs and services for adult learners. A synthesis study identified key implementation and impact findings based on a review of evaluation reports completed by grantees’ third-party evaluators. The outcomes study examined the training, employment, earnings, and self-sufficiency outcomes of nearly 2,800 participants from nine grants in Round 4. Findings from these studies provide evidence-based practices and insights that are being applied to the new Strengthening Community College Training Grants funding opportunity announcement, as well as future DOL investments.
8.6 Did the agency provide guidance which makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity-building efforts?
- DOL has a formal Evaluation Policy. Guidance on using funds to conduct and/or participate in program evaluations and/or to strengthen evaluation capacity-building efforts can be found in each grant funding opportunity and is a condition of many grants. The “Special Program Requirements” section of the respective grant funding opportunity notifies grantees of this responsibility. Generally, this section states: “As a condition of grant award, grantees are required to participate in an evaluation, if undertaken by DOL. The evaluation may include an implementation assessment across grantees, an impact and/or outcomes analysis of all or selected sites within or across grantees, and a benefit/cost analysis or assessment of return on investment. Conducting an impact analysis could involve random assignment (which involves random assignment of eligible participants into a treatment group that would receive program services or enhanced program services, or into control group(s) that would receive no program services or program services that are not enhanced).”
- DOL may require applicants to collect data elements to aid the evaluation. As part of the evaluation, and as a condition of award, grantees must agree to: (1) make records available to the evaluation contractor on participants, employers, and funding; (2) provide access to program operating personnel, participants, operational and financial records, and any other pertinent documents needed to calculate program costs and benefits; (3) in the case of an impact analysis, facilitate the assignment by lottery of participants to program services (including the possible increased recruitment of potential participants); and (4) follow evaluation procedures as specified by the evaluation contractor under the direction of DOL, including after the period of operation. After award, grantees will receive detailed guidance on ETA’s evaluation methodology, including requirements for data collection. Grantees will receive technical assistance to support their participation in these activities.
Score
9
9
U.S. Dept. of Housing & Urban Development
8.1 What were the agency’s five largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
- In FY21, the five largest competitive grant programs were:
- Continuum of Care ($2.656 billion; eligible grantees: state and local governments and coalitions)
- Lead-Hazard Reduction ($357 million; eligible grantees: local governments)
- Choice Neighborhoods Implementation ($182 million; eligible grantees: state and local governments; FY20 NOFA with funding allocated in FY21)
- Indian Housing Block Grant – Competitive Grant Program ($95 million; eligible grantees: Native American tribal governments and tribal organizations)
- Resident Opportunity and Self-Sufficiency Service Coordinator Program ($35 million; eligible grantees: Native American tribal governments and tribal organizations; public housing authorities/Indian housing authorities; nonprofits; resident associations)
8.2 Did the agency use evidence of effectiveness to allocate funds in the five largest competitive grant programs? (e.g., Were evidence-based interventions/practices required or suggested? Was evidence a significant requirement?)
- The Continuum of Care program (CoC) provides homelessness assistance awards on the basis of system performance measures focused on outcomes and evidence of effectiveness. This includes up to 56 points (out of 200) for past “performance related to reducing homelessness” and four points for “reallocat[ing] lower performing projects to create new higher performing projects that are based on performance review of existing projects.” Additionally, a precondition for Continuum of Care applicants to be awarded FY19 expansion bonus funding was that they rank homeless assistance projects on the basis of how they improve system performance (p. 34).
- The Lead Hazard Reduction Grant Program is designed to maximize the number of children under the age of six protected from lead poisoning by strategically targeting lead reduction efforts to neighborhoods where children are at greatest risk. The FY21 grants require grantees to use evidence-based lead hazard control methods, meet cost-savings, effectiveness, and grant compliance benchmarks, and gather pre- and post-treatment data to support and validate their investments. The application assigns 40 points (out of 102) based on grantees’ organizational capacity and relevant experience. Past research showing large returns on investment supported HUD’s decision to request a 24 percent increase in program funding for FY21, and HUD is funding studies using an implementation science framework to continue improving efficiency and efficacy of lead interventions.
- The Resident Opportunity & Self-Sufficiency Service Coordinator (ROSS-SC) grant program is designed to help residents of Public and Indian Housing make progress toward economic and housing self-sufficiency by removing the educational, professional, and health barriers they face. For grantees applying for renewal funding, the application assigns up to 25 points (out of 45) for past performance, including the number of residents served and the grantee’s effectiveness in spending down past funds. The application also assigns 20 points for soundness of approach, which includes the past performance of any subcontractors and the grantee’s plans to track residents’ progress. New applicants are also assessed on their relevant experience, capacity, and soundness of approach.
- The Indian Housing competitive grant program was established to address issues of overcrowded and physically inadequate housing identified by a PD&R needs assessment completed in 2017, Housing Needs of American Indians and Alaska Natives in Tribal Areas. The FY21 grant application assigns 20 points (out of 102) based on grantees’ organizational capacity and relevant experience. It also assigns points for data supporting identified needs and for past efforts to address those needs.
8.3 Did the agency use its five largest competitive grant programs to build evidence? (e.g., requiring grantees to participate in evaluations)
- As a condition of grant award, all HUD competitive grantees are required to cooperate (p. 5) in any HUD-sponsored research or evaluation studies.
- The Continuum of Care program is supported by the National Homeless Data Analysis Project, which provides communities with resources to improve data collection and consistent reporting about individuals experiencing homelessness to support national Annual Homeless Assessment Reports.
- HUD Lead Paint grantees are required to integrate evidence into their work by conducting clearance testing of all housing units treated. Technical studies provide evidence to improve lead hazard detection, evaluation, and control technologies, as well as implementation, and rigorous evaluation has demonstrated the large return on investment related to children’s health from controlling lead hazards.
8.4 Did the agency use evidence of effectiveness to allocate funds in any other competitive grant programs (besides its five largest grant programs)?
- HUD’s Housing Counseling Grant Program ($37 million in FY20, as well as $12.5 million in supplemental grants) provides counseling services to tenants and homeowners. One of the program’s main objectives is to “Distribute federal financial support to housing counseling agencies based on past performance.” As such, the program allocates seven points (out of 100) for past performance based on “the positive impacts that an Applicant’s housing counseling services had on clients.” HUD scores this item based on its own performance records.
- HUD continues to extend the Standards for Success reporting framework to additional competitive grant programs. The framework establishes strategically aligned performance metrics that are standardized and sufficiently granular to provide information on relative effectiveness, allowing it both to drive performance and to inform the selection of future funding recipients.
8.5 What are the agency’s 1-2 strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?
- The Continuum of Care (CoC) program is HUD’s largest program targeted to adults and children experiencing homelessness. HUD awards CoC funding to over 6,500 projects through a national competition. Policy priorities for the CoC program have focused on preventing and ending homelessness through access to permanent housing, including ending homelessness for veterans, families, youth, and people experiencing chronic homelessness. Over more than a decade, increased CoC effectiveness has been supported by Homeless Management Information Systems and evidence-based funding of increased permanent supportive housing. Between 2011 and 2020, the estimated number of people experiencing homelessness in families with children declined by 27%. After a steady decline in the first half of the last decade, the number of people experiencing chronic homelessness increased by 42% from 2016 to 2020 and is back up to its highest level since 2008. At the same time, however, the number of veterans experiencing homelessness declined by 43%. Following federal criteria, 78 communities and three states have effectively ended veteran homelessness.
- HUD has taken a proactive role in addressing racial disparities in rates of homelessness by publishing resources and providing technical assistance to grantees. For example, in 2019 HUD created the CoC Racial Equity Analysis Tool to help communities understand who is accessing their homeless service system and what outcomes those families and individuals are realizing. In 2020, HUD published a guide to Increasing Equity in the Homeless Response System Through Expanding Procurement, which provides communities with recommendations for allocating CARES Act funds to address racial and ethnic disparities in the homeless response system. More recently, responding to the finding in the 2019 Annual Homeless Assessment Report (AHAR) that African Americans have remained considerably overrepresented among the homeless population compared to the U.S. population, HUD published a rich set of racial equity resources, data toolkits, and research reports related to identifying disparities and implementing responses to address the overrepresentation of people of color in the homeless system.
8.6 Did the agency provide guidance which makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity-building efforts?
- HUD operates a centralized evaluation program under the guidance of the evaluation officer. As a condition of grant award, all HUD competitive grantees are required to cooperate in any HUD-sponsored research or evaluation studies and to provide program monitoring data. A number of program statutes do not authorize formal evaluation as an eligible use of program funds. HUD also provides technical assistance to strengthen grantees’ evaluation and performance management capacity.
- The Continuum of Care FY19 homelessness assistance program NOFA offers one point to applicants who propose to use requested funds to improve their ability to evaluate the outcomes of projects funded by the CoC Program and the Emergency Solutions Grant program (p. 39). There was no FY20 CoC Program Competition; HUD renewed all awards in recognition of the fact that communities have been, and will continue to be, consumed with COVID-19 response and have limited capacity to participate in the traditional CoC competition.
- HUD intends to incorporate and disseminate best practices regarding racial equity identified in the ongoing equity assessment to external stakeholders as part of the agency’s long-term equity transformation. HUD has already begun this process by publishing racial equity resources, data toolkits, and research reports related to identifying disparities and implementing responses to address the overrepresentation of people of color in the homeless system. One of these resources is a CoC Racial Equity Analysis Tool, which helps CoCs identify racial disparities in their system by presenting data on poverty rates by race and ethnicity, age, and veteran status at the CoC level of geography.
Score
7
7
Administration for Community Living (HHS)
8.1 What were the agency’s five largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
- In FY21, the five largest competitive grant programs were:
- Centers for Independent Living ($116.2 million; eligible applicants: Nonprofits; Public and State controlled institutions of higher education)
- One of the largest competitive grants under this program was the Centers for Independent Living Training and Technical Assistance Grant.
- National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) ($113.0 million; eligible applicants: State, local, and tribal governments and nonprofits, public and State controlled institutions of higher education)
- State Health Insurance Assistance Program ($52.1 million; eligible applicants: Unrestricted)
- One relevant NOFA is the 2020 State Health Insurance Assistance Program (SHIP) Base Grant.
- Medicare Improvements for Patients and Providers Act Programs (MIPPA) ($50 million; eligible applicants: Nonprofits; City or township governments; Public and State controlled institutions of higher education; Native American tribal governments; Public housing authorities/Indian housing authorities; Private institutions of higher education; Native American tribal organizations; Special district governments; County governments; State governments; and Independent school districts)
- University Centers for Excellence in Developmental Disabilities Education, Research and Service ($42.1 million; eligible applicants: entities in each State designated as UCEDDs to carry out the four core functions of interdisciplinary pre-service preparation and continuing education, community services, research, and information dissemination)
8.2 Did the agency use evidence of effectiveness to allocate funds in the five largest competitive grant programs? (e.g., Were evidence-based interventions/practices required or suggested? Was evidence a significant requirement?)
- Independent Living (IL) NOFAs describe evaluation criteria, including plans for technical assistance to enhance grant effectiveness and the provision of information developed about best practices (full announcement, p. 21). To continue receiving CIL program funding, eligible centers must provide evidence of their prior impact on the goals and objectives of this funding.
- Based on a strict interpretation of the phrase “evidence of prior effectiveness to make grant awards,” NIDILRR currently does not use evidence of prior effectiveness to make grant awards. Instead, ACL makes these awards largely by relying on the expert evaluative judgments of ACL peer reviewers. Making grant awards through peer review is a standard, widely accepted, evidence-based practice. For example, see pages 7 and 19 of the DPCP full announcement.
- SHIP NOFAs describe evaluation criteria including plans to improve alignment of policies, processes, and procedures to program goals and increased accountability to program expectations at all levels (full announcement, p. 25).
- MIPPA funds are awarded to State grantees and to the National Center for Benefits Outreach and Enrollment. To continue funding without restrictions, State grantees are required to submit state plans that ACL staff review for the specific strategies that grantees will employ to enhance efforts through statewide and local coalition building. The National Center applicants must describe the rationale for using the particular intervention, including factors such as evidence of intervention effectiveness. In 2019, the Center was awarded additional funding based on prior performance–specifically, assisting over 7.6 million individuals to identify over $29.6 billion in potential annual benefits.
- University Centers for Excellence in Developmental Disabilities Education, Research & Service (UCEDDs) are a nationwide network of independent but interlinked centers, representing an expansive national resource for addressing issues, finding solutions, and advancing research related to the needs of individuals with developmental disabilities and their families. Applications are also reviewed based on their description of current or previous evidence of relevant experience (p. 30).
8.3 Did the agency use its five largest competitive grant programs to build evidence? (e.g., requiring grantees to participate in evaluations)
- Independent Living/Centers for Independent Living grantees are required to show that they are working to “improve performance, outcomes, operations, and governance of CILs” (Full Announcement, p. 2). Required reporting covers “issues, goals, outcome measures, performance and compliance” (p. 6).
- NIDILRR and its grantees are in the disability and rehabilitation evidence-building business. NIDILRR grantees generate new knowledge on particular disability topics or develop new disability products that eventually become part of a larger evidence base. To generate this new knowledge, NIDILRR grantees must conduct a series of research and development activities that produce important outputs. These activities are guided by two frameworks: the NIDILRR Stages of Research Framework, published in 45 CFR 1330.4, and the NIDILRR Stages of Development Framework, published in 45 CFR 1330.5.
- SHIP grantees are required to build and disseminate evidence of what works through documenting and promoting “knowledge, successes, and lessons learned within the SHIP network. This includes sharing ideas, products, and materials with other SHIP grantees, ACL, and the SHIP Technical Assistance Center” (Full Announcement, p. 5). They are required to report on specified performance measures, but also encouraged to provide additional evidence and data, such as data related to the cost changes as a result of enrollment in Medicare Part D and Medicare Advantage plans (PDP/MA-PD) (p. 7).
- MIPPA grant funds support the identification and dissemination of best practices (i.e., practices built upon evidence of effectiveness) for improving benefits outreach and enrollment.
- A central purpose of UCEDD grants is the building and dissemination of evidence of what works. The UCEDD Annual Report requires grantees to submit information on progress made in the previous year toward achieving the projected goals (Full Announcement, p. 35). Grantees are also specifically asked to describe how innovative designs and methods are “based on evidence and can be replicated” (p. 28).
8.4 Did the agency use evidence of effectiveness to allocate funds in any other competitive grant programs (besides its five largest grant programs)?
- ACL requires that evidence of effectiveness be used in all award decisions. Grant officers attend training on ways to include information about evidence-building in funding opportunity announcements, including text that can be included in funding announcements: (1) describing requirements for developing measurable outcomes; (2) explaining how the inclusion of evidence and evidence-building plans can be used to score grant applications; and (3) instructing grant reviewers on rating applicants’ presentation of evidence and evidence-building plans. The training was recorded and is available to all staff.
- ACL’s Alzheimer’s Disease Programs Initiative (ADPI) translates and implements evidence-based supportive services for persons with ADRD and their caregivers at the community level. Award criteria include the extent to which applicants “describe partnerships, collaborations and innovative activities that will be implemented in support of goal/objective achievement, including the dementia specific evidence-based/evidence informed intervention(s) to be implemented in the project” (Full Announcement, p. 24).
- The review criteria for the Lifespan Respite Care Program: State Program Enhancement Grants includes the applicant’s description of “how the proposed project will build upon the accomplishments made in previous Lifespan Respite Care Program grants” (Full Announcement, p. 23).
- The award for the National Paralysis Research Center requires successful applicants to provide evidence that individuals with paralysis and other disabilities will be actively and meaningfully engaged, and demonstrate experience and expertise in carrying out the kinds of activities required (Full Announcement, pp. 4-5).
- As selection criteria for the National Technical Assistance Center on Kinship and Grandfamilies, points were awarded for demonstrating that proposed activities were based on “the most recent, relevant, and available information and knowledge” (p. 20) and that staff, consultants, and partners possess the appropriate experience and expertise (Full Announcement, p. 22).
8.5 What are the agency’s 1-2 strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?
- Prior to the development of visual scene displays by the NIDILRR-funded Augmentative and Alternative Communication Rehabilitation Engineering Research Center (AAC-RERC), the only Augmentative and Alternative Communication (AAC) option was traditional grid displays with isolated symbols presented in rows and columns. These traditional grid displays were difficult to use for many adults with acquired conditions resulting in significant language and cognitive limitations. Visual Scene Displays (VSDs) offer an easier alternative. They go beyond standard pictures and symbols organized in rows and columns by providing information on the situation or context. Put more simply, VSDs are photos or pictures that people can use to communicate messages to others. These photos depict familiar scenes, objects, or people, and users can touch “hot spots” on the photo to speak messages that relate to the pictured scene or object. For example, a person with aphasia might touch a hotspot on a picture of a sibling and say “this is my sister.” This additional information on the situation and context makes it easier for persons with complex communication needs to express their wants and needs, and therefore enhances their ability to interact and participate with others in the community. Research from the AAC-RERC and external researchers demonstrates the effectiveness of VSDs with adults with severe chronic aphasia, primary progressive aphasia, dementia, and related conditions. As a result of the continued efforts of the AAC-RERC and their partners, this VSD technology has been successfully transferred to all of the major AAC manufacturers and app developers.
- NIDILRR-funded grant activities regularly produce publications that use evidence to build knowledge and promote diversity and inclusion. These include recommendations for reducing the barriers to healthcare access that face coverings pose, particularly for people who are deaf or hard of hearing, as well as a mixed-methods study that identified barriers to healthcare access faced by individuals with disabilities and aspects of the Affordable Care Act that have improved enforcement of laws prohibiting discrimination on the basis of disability.
- ACL’s Alzheimer’s Disease Supportive Services Program (ADSSP) encourages the translation of dementia-specific interventions for use in communities. Examples include: the Savvy Caregiver (evidence-based), a psychoeducational intervention that trains family caregivers in the basic knowledge, skills, and attitudes needed to handle the challenges of caring for a family member with Alzheimer’s disease and to be an effective caregiver; Cuidando con Respeto (evidence-informed), a Spanish version of the original Savvy Caregiver Program; and Savvy Caregiver Express (evidence-informed), a condensed version of the original Savvy Caregiver Program. ACL’s requirement for the inclusion of dementia-specific evidence-based interventions is demonstrated in the 2018 funding opportunity announcement entitled Alzheimer’s Disease Programs to States and Communities.
8.6 Did the agency provide guidance which makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity-building efforts?
- Funding opportunity announcements and grant reviews stress the need for strong performance measurement and evaluation. ACL’s technical assistance centers–the National Resource Center on Nutrition and Aging (NRC), the Alzheimer’s Disease Supportive Services Program (ADSSP), and the University Centers for Excellence in Developmental Disabilities Education, Research, and Service–promote the use and generation of evidence with ACL grantees. Grantee manuals also include information about the importance of and requirements for evaluation (see the Administration on Aging: Title VI Resource Manual). Staff of ACL’s Office of Performance and Evaluation present on the importance of evidence to regional staff who are in frequent contact with State grantees and at grantee conferences (see ACL Track: The ACL Older Americans Act (OAA) Performance System–Crossing the Finish Line and ACL/CMS Track: Raising the Bar in Medicaid HCBS & Community Inclusion–Showcasing Transformation, presented at the 2019 home- and community-based services (HCBS) conference; and ACL Track: Assuring the Health & Welfare of Medicaid HCBS Beneficiaries: Federal Findings, Investments, & Promising Practices in Systems Change and ACL Track: Innovative Housing & Health & Human Services Collaborations: A Game-Changer in Supportive Housing & Community Living, presented at the 2018 HCBS conference).
Score
8
8
Substance Abuse and Mental Health Services Administration (HHS)
8.1 What were the agency’s five largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
- In FY21, the five largest competitive grant programs were:
- State Opioid Response (SOR) Grant ($1.5 billion; eligible applicants: States, existing USDA Cooperative Extension grantees, and U.S. Territories; Tribes and tribal organizations are eligible to apply for set-aside funds only)
- Children’s Mental Health Services ($125 million; States, Tribes, Communities, Territories)
- Strategic Prevention Framework (SPF) ($109 million; States, Tribes, and Territories)
- Targeted Capacity Expansion-Special Projects ($102 million; Domestic Public and Private Non-Profit Entities, States, Opioid Medication-Assisted SPF Rx Treatment Service Providers, Outpatient Substance Abuse Providers, Community Mental Health Centers, Federally Qualified Health Centers)
- Certified Community Behavioral Health Clinic (CCBHC) ($250 million; Certified Community Behavioral Health Clinics, Community-Based Behavioral Health Clinics)
8.2 Did the agency use evidence of effectiveness to allocate funds in the five largest competitive grant programs? (e.g., Were evidence-based interventions/practices required or suggested? Was evidence a significant requirement?)
- As with all SAMHSA grants, the five largest competitive grant programs require applicants to include in their proposals evidence-based practices and activities that are backed by science. The allocation of funds is based on an application that includes a request for evidence of effective work in the area of reducing substance use and mental health disorders.
- The State Opioid Response (SOR) Grant requires grantees to describe their evidence-based services and practices. The grantee must describe how the EBP meets the needs of the population(s) served and the outcomes to be achieved. The grantee must also indicate how their practice might be modified and the reasons for such modifications.
- Children’s Mental Health Services requires grantees to describe the evidence-based and culturally competent mental health services provided to children with serious emotional disturbance (SED).
- Strategic Prevention Framework requires grantees to report on the number and percent of evidence-based programs, policies, and/or practices that are implemented and to describe the types of evidence-based interventions implemented at the community level. Additionally, grantees must coordinate with the Evidence-Based Practices Workgroups.
- Targeted Capacity Expansion–Special Projects requires applicants to describe their proposed evidence-based service/practice. The grantee must describe how the EBP meets the needs of the population(s) served and the outcomes to be achieved. The grantee must also indicate how their practice might be modified and the reasons for such modifications.
- Certified Community Behavioral Health Clinic Expansion Grants require applicants to describe their proposed evidence-based service/practice. The grantee must describe how the EBP meets the needs of the population(s) served and the outcomes to be achieved. The grantee must also indicate how their practice might be modified and the reasons for such modifications.
8.3 Did the agency use its five largest competitive grant programs to build evidence? (e.g., requiring grantees to participate in evaluations)
- SAMHSA’s evaluation plan includes evaluations of some of its largest competitive grant programs. These evaluations will inform and enable SAMHSA to build evidence. One mechanism for this is the grantmaking process: in some grants, SAMHSA includes additional terms and conditions stating that, depending on the funding opportunity and grant application, a grantee may be asked to participate in a cross-site evaluation.
- All grant programs at SAMHSA are required to submit data on race, ethnicity, gender, and sexual orientation (among other demographic data). In addition, SAMHSA’s surveys collect national data in these areas, allowing SAMHSA’s Office of Behavioral Health Equity to utilize federal and community data to identify, monitor, and respond to behavioral health disparities.
8.4 Did the agency use evidence of effectiveness to allocate funds in any other competitive grant programs (besides its five largest grant programs)?
- All grant programs at SAMHSA are required to submit data on race, ethnicity, gender, and sexual orientation (among other demographic data). In addition, SAMHSA’s surveys collect national data in these areas, allowing SAMHSA’s Office of Behavioral Health Equity to utilize federal and community data to identify, monitor, and respond to behavioral health disparities.
8.5 What are the agency’s 1-2 strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?
- Competitive grant programs are required to consider evidence-based practices in their applications and are referred to the SAMHSA Evidence-Based Practices Resource Center for the tools they need to achieve better outcomes based on what works. An additional example can be found in SAMHSA’s trauma and justice portfolio, which provided a comprehensive public health approach to addressing trauma and establishing a trauma-informed approach in health, behavioral health, human services, and related systems.
- The intent of this initiative was to reduce both the observable and less visible harmful effects of trauma and violence on children and youth, adults, families, and communities. As part of this initiative, the SPARS team presented the video series, A Trauma-Informed Approach to Data Collection, with commentary from subject matter experts and clientele from the People Encouraging People (PEP) program in Baltimore, MD. This series advised grantees and GPOs about using a trauma-informed approach to collecting client-level data.
8.6 Did the agency provide guidance which makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity-building efforts?
- Grantees are encouraged to allocate grant funds for data collection, data analysis, and program evaluation. Some grantees use grant funds to hire external evaluators to assist them in the evaluation process. For example, one funding announcement states, “Provide specific information about how you will collect the required data for this program and how the data will be utilized to manage, monitor and enhance the program.” In addition, up to 20% of the total grant award for the budget period may be used for data collection, performance measurement, and performance assessment expenses.