2021 Federal Standard of Excellence


Millennium Challenge Corporation

Overall Score: 84

Leadership
Score: 9

Did the agency have senior staff members with the authority, staff, and budget to build and use evidence to inform the agency’s major policy and program decisions in FY21?

1.1 Did the agency have a senior leader with the budget and staff to serve as the agency’s Evaluation Officer (or equivalent)? (Example: Evidence Act 313)
  • The Monitoring and Evaluation (M&E) Managing Director serves as the Millennium Challenge Corporation’s (MCC) Evaluation Officer. The Managing Director is a career civil service position with the authority to execute M&E’s budget, an estimated $17.6 million in due diligence funds in FY21, with a staff of 28 people. In accordance with the Foundations for Evidence-Based Policymaking Act, MCC designated an Evaluation Officer. 
1.2 Did the agency have a senior leader with the budget and staff to serve as the agency’s Chief Data Officer (or equivalent)? (Example: Evidence Act 202(e))
  • The Director of Product Management in the Office of the Chief Information Officer is MCC’s Chief Data Officer. The Chief Data Officer manages a staff of six and an estimated FY21 budget of $1 million in administrative funds. In accordance with the Foundations for Evidence-Based Policymaking Act, MCC designated a Chief Data Officer.
1.3 Did the agency have a governance structure to coordinate the activities of its evaluation officer, chief data officer, statistical officer, performance improvement officer, and other related officials in order to support, improve, and evaluate the agency’s major programs?
  • The MCC Evaluation Management Committee (EMC) oversees decision-making, integration, and quality control for the agency’s evaluations and related programmatic decisions, in accordance with the Foundations for Evidence-Based Policymaking Act. The EMC integrates evaluation with program design and implementation to ensure that evaluations are designed and carried out in a manner that increases their utility to MCC, in-country stakeholders, and external stakeholders. The EMC includes the agency’s Evaluation Officer, Chief Data Officer, representatives from M&E, the project lead, sector specialists, the economist, and gender and environmental safeguards staff. For each evaluation, the EMC holds between 11 and 16 meetings or touchpoints, from the evaluation scope of work through final publication. The EMC plays a key role in coordinating MCC’s Evidence Act implementation.
Evaluation & Research
Score: 7

Did the agency have an evaluation policy, evaluation plan, and learning agenda (evidence-building plan), and did it publicly release the findings of all completed program evaluations in FY21?

2.1 Did the agency have an agency-wide evaluation policy? (Example: Evidence Act 313(d))
2.2 Did the agency have an agency-wide evaluation plan? (Example: Evidence Act 312(b))
  • Every MCC investment must adhere to MCC’s rigorous Policy for Monitoring and Evaluation (M&E), which requires each program to have a comprehensive M&E Plan. For each investment MCC makes in a country, the country’s M&E Plan must be published within 90 days of entry into force. The M&E Plan lays out the evaluation strategy and includes two main components. The monitoring component lays out the methodology and process for assessing progress toward the investment’s objectives. The evaluation component identifies and describes the evaluations that will be conducted, the key evaluation questions and methodologies, and the data collection strategies that will be employed. Each country’s M&E Plan represents the evaluation plan and learning agenda for that country’s set of investments.
2.3 Did the agency have a learning agenda (evidence-building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda? (Example: Evidence Act 312)
  • In an effort to advance MCC’s evidence base and respond to the Evidence Act, MCC is implementing a learning agenda around women’s economic empowerment (WEE) with short- and long-term objectives. The agency is focused on expanding the evidence base to answer these key research questions:
    • How do MCC’s WEE activities contribute to MCC’s overarching goal of reducing poverty through economic growth?
    • How does MCC’s WEE work contribute to increased income and assets for households—beyond what those incomes would have been without the gendered/WEE design?
    • How does MCC’s WEE work increase income and assets for women and girls within those households?
    • How does MCC’s WEE work increase women’s empowerment, defined through measures relevant to the WEE intervention and project area?
  • These research questions were developed through extensive consultation within MCC and with external stakeholders. Agency leadership has named inclusion and gender as a key priority. As such, the agency is considering how to expand the WEE learning agenda to include evidence generation and utilization around gender and inclusion (in addition to women’s economic empowerment) in MCC’s programming.
  • MCC is also increasingly enabling learning agendas and strategies with its partner countries. In MCC’s compact with Liberia, a key program focused on institutional reform and strengthening of the Liberia Electricity Corporation. The team won top awards recognizing the learning embedded in its on-the-job training strategies, including awards for advancements in learning strategy creation and for the best learning program supporting a business change transformation. These awards recognize the innovation and excellence in the strategies and design deployed in the program, as well as the results achieved.
2.4 Did the agency publicly release all completed program evaluations?
  • MCC publishes the independent evaluation of every project, underscoring the agency’s commitment to transparency, accountability, learning, and evidence-based decision-making. All independent evaluations and reports are publicly available on the new MCC Evidence Platform. As of August 2021, MCC had contracted, planned, and/or published 209 independent evaluations. All MCC evaluations produce a final report presenting final results, and some also produce an interim report presenting interim results. To date, 117 Final Reports and 36 Interim Reports have been finalized and released to the public.
  • In FY21, MCC also continued producing Evaluation Briefs, an MCC product that distills key findings and lessons learned from MCC’s independent evaluations. MCC will produce Evaluation Briefs for each evaluation moving forward, and is in the process of writing Evaluation Briefs for the backlog of all completed evaluations. MCC expects to have Evaluation Briefs for every published evaluation by the end of 2021. As of October 2021, MCC has published 107 Evaluation Briefs.
2.5 What is the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts? (Example: Evidence Act 3115, subchapter II (c)(3)(9))
  • MCC is currently working on a draft capacity assessment in accordance with the Evidence Act. Additionally, once a compact or threshold program is in implementation, Monitoring and Evaluation (M&E) resources are used to procure evaluation services from external independent evaluators, who directly measure high-level outcomes to assess the attributable impact of all of MCC’s programs. MCC sees its independent evaluation portfolio as an integral tool to remain accountable to stakeholders and the general public, demonstrate programmatic results, and promote internal and external learning. Through the evidence generated by monitoring and evaluation, the M&E Managing Director, Chief Economist, and Vice President for the Department of Policy and Evaluation are able to continuously compare estimates of expected impacts with actual impacts to inform future programmatic and policy decisions. In FY21, MCC began or continued comprehensive, independent evaluations for every compact or threshold project at MCC, a requirement stipulated in Section 7.5.1 of MCC’s Policy for M&E. All evaluation designs, data, reports, and summaries are available on MCC’s Evaluation Catalog.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • MCC employs rigorous, independent evaluation methodologies to measure the impact of its programming, evaluate the efficacy of program implementation, and determine lessons learned to inform future investments. As of August 2021, about 32% of MCC’s evaluation portfolio consists of impact evaluations and 68% consists of performance evaluations. All MCC impact evaluations use random assignment to determine which groups or individuals receive an MCC intervention; this creates a counterfactual, allows results to be attributed to MCC’s project, and enables MCC to measure its impact in a fair and transparent way. Each evaluation is conducted according to the program’s Monitoring and Evaluation (M&E) Plan, in accordance with MCC’s Policy for M&E. A minimal sketch of the analysis that random assignment enables appears below.
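  • Illustrative sketch: under random assignment, the comparison group serves as the counterfactual, so a simple difference in mean outcomes estimates a program’s average effect. The data and effect size below are invented for illustration and are not MCC figures.

```python
import random
import statistics

# Hypothetical illustration of why random assignment supports
# attribution: the difference in mean outcomes between randomly
# assigned treatment and comparison groups is an unbiased estimate
# of the average treatment effect (ATE).

random.seed(42)

households = list(range(1000))
treated = set(random.sample(households, 500))  # random assignment

# Simulated household incomes with an assumed +120 program effect.
incomes = {
    hh: random.gauss(2000, 400) + (120 if hh in treated else 0)
    for hh in households
}

treated_mean = statistics.mean(incomes[hh] for hh in treated)
control_mean = statistics.mean(
    incomes[hh] for hh in households if hh not in treated
)

print(f"Estimated ATE: {treated_mean - control_mean:.1f}")  # close to 120
```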
Resources
Score: 9

Did the agency invest at least 1% of program funds in evaluations in FY21?

3.1 ____ (Name of agency) invested $____ on evaluations, evaluation technical assistance, and evaluation capacity-building, representing __% of the agency’s $___ billion FY21 budget.
  • MCC invested $17.6 million on evaluations, evaluation technical assistance, and evaluation capacity-building, representing 2.2% of the agency’s $800 million FY21 budget (minus staff/salary expenses).
3.2 Did the agency have a budget for evaluation and how much was it? (Were there any changes in this budget from the previous fiscal year?)
  • MCC budgeted $17.6 million for monitoring and evaluation in FY21, an increase of $1.5 million over FY20’s total of $16.1 million.
3.3 Did the agency provide financial and other resources to help city, county, and state governments or other grantees build their evaluation capacity (including technical assistance funds for data and evidence capacity building)?
  • In support of MCC’s emphasis on country ownership, MCC also provides substantial, intensive, and ongoing capacity building to partner country Monitoring and Evaluation staff in every country in which it invests. As a part of this, MCC provides training and ongoing mentorship in the local language. This includes publishing select independent evaluations, Evaluation Briefs, and other documentation in the country’s local language. The dissemination of local language publications helps further MCC’s reach to its partner country’s government and members of civil society, enabling them to fully reference and utilize evidence and learning beyond the program. MCC also includes data strengthening and national statistical capacity as a part of its evidence-building investments. This agency-wide commitment to building and expanding an evidence-based approach with every partner country is a key component of MCC’s investments.
  • As a prime example of this work, MCC continues to implement a first-of-its-kind evaluation partnership in its Morocco investment. MCA-Morocco, the local implementing entity, signed MCC’s first Cooperation Agreement, a funded partnership within a country program, under the new Partnership Navigator Program Partnership Solicitation process. This first MCA-driven partnership agreement brings Nobel Prize-winning economic analysis approaches from MIT and Harvard together with a Moroccan think tank to create an Employment Lab that conducts rigorous research into Moroccan labor market programs and policies. This research is coupled with training and capacity building for key Moroccan policymakers to promote evidence-based decision-making.
Performance Management / Continuous Improvement
Score: 6

Did the agency implement a performance management system with outcome-focused goals and aligned program objectives and measures, and did it frequently collect, analyze, and use data and evidence to improve outcomes, return on investment, and other dimensions of performance in FY21?

4.1 Did the agency have a strategic plan with outcome goals, program objectives (if different), outcome measures, and program measures (if different)?
4.2 Did the agency use data/evidence to improve outcomes and return on investment?
  • MCC is committed to using high-quality data and evidence to drive its strategic planning and program decisions. The Monitoring and Evaluation plans for all programs and tables of key performance indicators for all projects are available online by compact and threshold program and by sector, for use by both partner countries and the general public. Prior to investment, MCC performs a Cost-Benefit Analysis to assess the potential impact of each project and estimates an Economic Rate of Return (ERR). MCC uses a 10% ERR hurdle to prioritize and fund projects with the greatest opportunity for impact (a worked sketch of this calculation appears after this list). MCC then recalculates ERRs at investment closeout, drawing on information from MCC’s monitoring data (among other data and evidence), to test original assumptions and assess the cost-effectiveness of MCC programs. To complete the evidence loop, MCC now includes evaluation-based cost-benefit analysis as part of its independent final evaluation: the evaluators analyze the MCC-produced ERR and associated project assumptions five or more years after investment close to understand if and how the benefits actually accrued. These evaluation-based ERRs add to the evidence base by clarifying the long-term effects and sustainable impact of MCC’s programs.
  • In addition, MCC produces periodic reports that capture the results of MCC’s learning efforts in specific sectors and translate that learning into actionable evidence for future programming. Once MCC has a critical number of evaluations in a given sector, the agency endeavors to draw portfolio-wide learning from that sector in the form of Principles into Practice reports. In FY21, MCC published a new Principles into Practice report on its research related to learning in the water, sanitation, and hygiene sector: Lessons from Evaluations of MCC Water, Sanitation, and Hygiene Programs. MCC is also currently working on forthcoming Principles into Practice reports on its general education and evidence-based scorecard selection process.
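  • Illustrative ERR sketch: the ERR is the discount rate at which the net present value of a project’s benefits minus costs equals zero, and MCC funds projects whose ERR clears the 10% hurdle. The cash flows below are invented for illustration; MCC’s actual economic models are far more detailed.

```python
# Hypothetical sketch of an economic rate of return (ERR) check.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value of year-indexed net benefits (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def err(cash_flows: list[float], lo: float = 0.0, hi: float = 1.0) -> float:
    """Find the rate where NPV crosses zero, by bisection."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Invented example: $10M up-front cost, then $1.5M net benefits for 20 years.
flows = [-10.0] + [1.5] * 20
rate = err(flows)
print(f"ERR = {rate:.1%}; passes 10% hurdle: {rate >= 0.10}")
```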
4.3 Did the agency have a continuous improvement or learning cycle processes to identify promising practices, problem areas, possible causal factors, and opportunities for improvement? (Examples: stat meetings, data analytics, data visualization tools, or other tools that improve performance)
  • MCC continues to implement and expand a new reporting system that enhances MCC’s credibility around results, transparency, learning, and accountability. The Star Report and its associated quarterly business process capture key information to provide a framework for results and improve MCC’s ability to promote and disseminate learning and evidence throughout the compact and threshold program lifecycle. For each compact and threshold program, evidence is collected on performance indicators, evaluation results, partnerships, sustainability efforts, and learning, among other elements. Critically, this information is available in one report after each program ends. Each country’s Star Report is published roughly seven months after program completion.
  • Continual learning and improvement are a key aspect of MCC’s operating model. MCC monitors progress toward compact and threshold program results on a quarterly basis using performance indicators specified in the Monitoring and Evaluation (M&E) Plan for each country’s investments. The M&E Plans specify indicators at all levels (process, output, and outcome) so that progress toward final results can be tracked. Every quarter, each partner country submits an Indicator Tracking Table that shows actual performance on each indicator relative to the baseline established before the activity began and the performance targets established in the M&E Plan (a minimal sketch of this comparison appears after this list). Key performance indicators and their accompanying data by country are updated every quarter and published online. MCC management and the relevant country team review this data in a formal Quarterly Performance Review meeting to assess whether results are being achieved and integrate this information into project management and implementation decisions.
  • In FY21, MCC also launched a new interactive sector-level learning product: Sector Results and Learning pages. These are interactive web pages that promote learning and inform program design by consolidating the latest monitoring data, independent evaluation results, and lessons from the key sectors in which MCC invests. Critically, this information is now publicly available, in one place, for the first time. An interactive learning database allows practitioners to efficiently retrieve past learning to inform new programs. MCC has published Sector Results and Learning pages for the WASH and transportation sectors. Pages that focus on Agriculture and Irrigation, Education, Energy, and Land will become available throughout 2021.
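  • Illustrative tracking sketch: an Indicator Tracking Table compares each indicator’s latest actual value against its baseline and end-of-program target. The indicator names, values, and 50% review threshold below are invented for illustration.

```python
# Hypothetical sketch of a quarterly indicator review: compute each
# indicator's progress from baseline toward target and flag laggards.

indicators = [
    # (name, baseline, target, latest actual)
    ("Households with piped water (%)", 20.0, 60.0, 41.0),
    ("Km of road rehabilitated", 0.0, 32.0, 12.8),
    ("Training center graduates", 0.0, 5000.0, 4100.0),
]

for name, baseline, target, actual in indicators:
    progress = (actual - baseline) / (target - baseline)
    flag = "on track" if progress >= 0.5 else "review"  # illustrative cutoff
    print(f"{name}: {progress:.0%} of target ({flag})")
```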
Data
Score: 7

Did the agency collect, analyze, share, and use high-quality administrative and survey data – consistent with strong privacy protections – to improve (or help other entities improve) outcomes, cost-effectiveness, and/or the performance of federal, state, local, and other service providers’ programs in FY21?

5.1 Did the agency have a strategic data plan, including an open data policy? (Example: Evidence Act 202(c), Strategic Information Resources Plan)
  • In FY21, MCC is continuing to develop a strategic data plan. As detailed on MCC’s Digital Strategy and Open Government pages, MCC promotes transparency to provide people with access to information that facilitates their understanding of MCC’s model, MCC’s decision-making processes, and the results of MCC’s investments. Transparency, and therefore open data, is a core principle for MCC because it is the basis for accountability, provides strong checks against corruption, builds public confidence, and supports informed participation of citizens. 
  • As a testament to MCC’s commitment to transparency and open data, the agency was again the highest-ranked U.S. government agency in the 2020 Publish What You Fund Aid Transparency Index, for the sixth consecutive Index. In addition, the U.S. government is part of the Open Government Partnership, is a signatory to the International Aid Transparency Initiative, and must adhere to the Foreign Aid Transparency and Accountability Act. All of these initiatives require foreign assistance agencies to make it easier to access, use, and understand data, and they create further impetus for MCC’s work in this area by establishing specific goals and timelines for adopting transparent business processes.
  • Additionally, MCC convenes an internal Data Governance Board, an independent group consisting of representatives from departments throughout the agency, to streamline MCC’s approach to data management and advance data-driven decision-making across its investment portfolio. 
5.2 Did the agency have an updated comprehensive data inventory? (Example: Evidence Act 3511)
  • MCC makes extensive program data, including financials and results data, publicly available through its Open Data Catalog, which includes an “enterprise data inventory” of all data resources across the agency for release in open, machine-readable formats. The Department of Policy and Evaluation leads the MCC Disclosure Review Board process for publicly releasing the de-identified microdata that underlie the independent evaluations on the MCC Evidence Platform, following MCC’s Microdata Management Guidelines to balance transparency with protection of human subjects’ confidentiality.
5.3 Did the agency promote data access or data linkage for evaluation, evidence-building, or program improvement? (Examples: Model data-sharing agreements or data-licensing agreements; data tagging and documentation; data standardization; downloadable machine-readable, de-identified tagged data; Evidence Act 3520(c))
  • MCC’s new Evidence Platform offers a first-of-its-kind virtual data enclave for users to access and use public- and restricted-use data. The Platform encourages research, learning, and reproducibility and connects datasets to analytical products across the portfolio. In addition to the Evidence Platform, which links and provides access to all of MCC’s microdata from evaluation packages, MCC’s Data Analytics Program (DAP) enables enterprise data-driven decision-making through the capture, storage, analysis, publishing, and governance of MCC’s core programmatic data. The DAP streamlines the agency’s data lifecycle, increasing efficiency, and promotes agency-wide coordination, learning, and transparency. For example, MCC has developed custom software applications to capture program data, established the infrastructure for consolidated storage and analysis, and connected robust data sources to end-user tools that power up-to-date, dynamic reporting and streamline content maintenance on MCC’s public website. As part of this effort, the Monitoring and Evaluation team has developed an Evaluation Pipeline application that provides up-to-date information on the status, risk, cost, and milestones of the full evaluation portfolio for better performance management.
5.4 Did the agency have policies and procedures to secure data and protect personal, confidential information? (Example: differential privacy; secure, multiparty computation; homomorphic encryption; or developing audit trails)
  • MCC’s Disclosure Review Board ensures that data collected from surveys and other research activities is made public according to relevant laws and ethical standards that protect research participants, while recognizing the potential value of the data to the public. The board is responsible for: reviewing and approving procedures for the release of data products to the public; reviewing and approving data files for disclosure; ensuring de-identification procedures adhere to legal and ethical standards for the protection of research participants; and initiating and coordinating any necessary research related to disclosure risk potential in individual, household, and enterprise-level survey microdata on MCC’s beneficiaries. 
  • The Microdata Evaluation Guidelines inform MCC staff and contractors, as well as other partners, on how to store, manage, and disseminate evaluation-related microdata. This microdata is distinct from other data MCC disseminates because it typically includes personally identifiable information and sensitive data required for the independent evaluations. With this in mind, MCC’s Guidelines govern how to manage three competing objectives: sharing data for verification and replication of the independent evaluations, sharing data to maximize usability and learning, and protecting the privacy and confidentiality of evaluation participants. The Guidelines were established in 2013 and updated in January 2017. Following them, MCC has publicly released 117 de-identified, public-use microdata files for its evaluations and evidence studies. MCC also has 25 Disclosure Review Board-cleared, restricted data packages that it can make accessible on the new MCC Evidence Platform. MCC’s experience developing and implementing this rigorous process for data management and dissemination while protecting human subjects throughout the evaluation life cycle is detailed in Opening Up Evaluation Microdata: Balancing Risks and Benefits of Research Transparency. MCC is committed to ensuring transparent, reproducible, and ethical data and documentation and seeks to further encourage data use through its new MCC Evidence Platform. A minimal sketch of a de-identification step appears below.
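  • Illustrative de-identification sketch: the field names, coarsening rule, and k-anonymity threshold below are hypothetical; actual disclosure review under the Guidelines involves far more than this.

```python
# Hypothetical sketch: drop direct identifiers, coarsen a
# quasi-identifier, then check that no combination of
# quasi-identifiers is too rare (k-anonymity).

from collections import Counter

DIRECT_IDENTIFIERS = {"name", "phone", "gps_lat", "gps_lon"}
K = 5  # minimum group size for any quasi-identifier combination

def deidentify(rows: list[dict]) -> list[dict]:
    out = []
    for row in rows:
        row = {k: v for k, v in row.items() if k not in DIRECT_IDENTIFIERS}
        row["age"] = f"{int(row['age']) // 10 * 10}s"  # coarsen to decades
        out.append(row)
    return out

def k_anonymous(rows: list[dict], quasi: tuple[str, ...]) -> bool:
    counts = Counter(tuple(r[q] for q in quasi) for r in rows)
    return min(counts.values()) >= K

# Invented survey records (identical rows keep the toy example k-anonymous).
rows = [{"name": "A", "phone": "555", "gps_lat": "0", "gps_lon": "0",
         "age": "34", "district": "North", "income": "2100"}] * 6
clean = deidentify(rows)
print(k_anonymous(clean, ("age", "district")))  # True for this toy data
```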
5.5 Did the agency provide assistance to city, county, and/or state governments, and/or other grantees on accessing the agency’s datasets while protecting privacy?
  • Both MCC and its partner in-country teams produce and provide data that is continuously updated and accessed. MCC’s website is routinely updated with the most recent information, and in-country teams are required to do the same on their respective websites. As such, all MCC program data is publicly available on MCC’s website and individual MCA websites for use by MCC country partners, in addition to other stakeholder groups. As a part of each country’s program, MCC provides resources to ensure data and evidence are continually collected, captured, and accessed. In addition, each project’s evaluation has an Evaluation Brief that distills key learning from MCC-commissioned independent evaluations. Select Evaluation Briefs have been posted in local languages, including Mongolian, Georgian, French, and Romanian, to better facilitate use by country partners. 
  • MCC also has a partnership with the President’s Emergency Plan for AIDS Relief (PEPFAR), referred to as Data Collaboratives for Local Impact (DCLI). This partnership is improving the use of data analysis for decision-making within PEPFAR and MCC partner countries by working toward evidence-based programs that address challenges in HIV/AIDS and health, empowerment of women and youth, and sustainable economic growth. Data-driven priority setting and insights from citizen-generated data and community mapping initiatives contribute to improved allocation of resources in target communities to address local priorities, such as job creation, access to services, and reduced gender-based violence. DCLI’s impact is being extended through a new partnership in Côte d’Ivoire, where MCC, Microsoft, and others are partnering to develop a Women’s Data Lab and Network program. The program will empower women-owned or women-led small and medium enterprises and female innovators and entrepreneurs with digital and data skills to effectively participate in the digital economy and grow their businesses.
Common Evidence Standards / What Works Designations
Score: 6

Did the agency use a common evidence framework, guidelines, or standards to inform its research and funding purposes; did that framework prioritize rigorous research and evaluation methods; and did the agency disseminate and promote the use of evidence-based interventions through a user-friendly tool in FY21?

6.1 Did the agency have a common evidence framework for research and evaluation purposes?
  • For each investment, MCC’s Economic Analysis (EA) division undertakes a Constraints Analysis to determine the binding constraints to economic growth in a country. To determine the individual projects in which MCC will invest in a given sector, MCC’s EA division combines root cause analysis with a cost-benefit analysis. The results of these analyses allow MCC to determine which investments will yield the greatest development impact and return on MCC’s investment. Every investment also has its own set of indicators as well as standard, agency-wide sector indicators for monitoring during the lifecycle of the investment and an evaluation plan for determining the results and impact of a given investment. MCC’s Policy for Monitoring and Evaluation details MCC’s evidence-based research and evaluation framework. Per the Policy, each completed evaluation requires a summary of findings, now called the Evaluation Brief, to summarize the key components, results, and lessons learned from the evaluation. Evidence from previous MCC programming is considered during the development of new programs. Per the Policy, “monitoring and evaluation evidence and processes should be of the highest practical quality. They should be as rigorous as practical and affordable. Evidence and practices should be impartial. The expertise and independence of evaluators and monitoring managers should result in credible evidence. Evaluation methods should be selected that best match the evaluation questions to be answered. Indicators should be limited in number to include the most crucial indicators. Both successes and failures must be reported.”
6.2 Did the agency have a common evidence framework for funding decisions?
  • MCC uses a rigorous evidence framework to make every decision along the investment chain, from country partner eligibility to sector selection to project choices. MCC uses evidence-based selection criteria, generated by independent, objective third parties, to select countries for grant awards. To be eligible for selection, World Bank-designated low- and lower-middle-income countries must first pass the MCC scorecard – a collection of 20 independent, third-party indicators that objectively measure a country’s policy performance in the areas of economic freedom, investing in people, and ruling justly. An in-depth description of the country selection procedure can be found in the annual report.
6.3 Did the agency have a user friendly tool that disseminated information on rigorously evaluated, evidence-based solutions (programs, interventions, practices, etc.) including information on what works where, for whom, and under what conditions?
  • MCC is a leader in the production of evidence on the results of its international development programs. As a data-driven agency, MCC invests in evidence-generating activities, such as due diligence surveys, willingness to pay surveys, and independent evaluations. MCC has more room to lead, however, in the accessibility and usability of its evidence. Since 2013, MCC has shared the data, documentation, and analysis underlying its independent evaluations. In terms of accessibility of evaluation materials, users have noted that MCC’s central evaluation and data repository, the Evaluation Catalog, is hard to navigate. 
  • Recognizing that transparency alone is not enough to achieve accountability and learning, MCC has developed the MCC Evidence Platform. The Evidence Platform offers first-of-its-kind access to studies and data and encourages use of MCC’s vast library of evidence. MCC invites researchers – from students to experienced professionals – to use the data and documentation provided there to reproduce and build upon MCC’s evidence base and drive development effectiveness for, and beyond, MCC.
  • The MCC Evidence Platform shares:
    • Studies–Users may search by studies to find all the related data and documentation associated with each study. Study Types include: Independent Evaluations, Monitoring, Constraints Analysis, Willingness to Pay, Due Diligence, Country-led Studies, and Other Studies.
    • Documentation–Users may search by specific documentation associated with MCC-funded studies. This documentation is shared as specific Knowledge Products Types, including: Design Report, Baseline Report, Interim Analysis Report, Final Analysis Report, MCC Learning Document, Evaluation-based Cost-Benefit Analysis, and Questionnaires.
    • Data Packages–Users may search by specific data packages associated with MCC-funded studies. Data Package Types include: Round (Baseline, Interim, Final), Public, and Restricted-Access.
  • The MCC Evidence Platform promotes the use of MCC’s data, documentation, and analysis as global public goods to support mutual accountability for the agency and its country partners and to encourage learning from measured results.
6.4 Did the agency promote the utilization of evidence-based practices in the field to encourage implementation, replication, and application of evaluation findings and other evidence?
  • As described above, the new MCC Evidence Platform was intentionally designed and launched with utilization at its core. The Platform specifically encourages users to take MCC learning and evidence and apply and reproduce it to generate new learning. The Platform will also aim to share new learning based on published MCC evidence. As part of this comprehensive approach, Evaluation Briefs remain a cornerstone of promoting utilization across audience groups. Enhanced utilization of MCC’s vast evidence base and learning was a key impetus behind the creation and expansion of the Evaluation Briefs and Star Reports. A push to ensure sector-level evidence use has led to renewed emphasis on the Principles into Practice series, with recent reports on the transport, education, and water and sanitation sectors.
  • MCC has also enhanced its in-country evaluation dissemination events, adding products in local languages and targeted stakeholder learning dissemination strategies to further results and evidence building.
Innovation
Score: 7

Did the agency have staff, policies, and processes in place that encouraged innovation to improve the impact of its programs in FY21?

7.1 Did the agency have staff dedicated to leading its innovation efforts to improve the impact of its programs?
  • MCC supports the creation of multidisciplinary country teams to manage the development and implementation of each compact and threshold program. Teams meet frequently to gather evidence, discuss progress, make project design decisions, and solve problems. Prior to moving forward with a program investment, teams are encouraged to use the lessons from completed evaluations to inform their work going forward.
  • MCC recently launched its second internal Millennium Efficiency Challenge (MEC), designed to tap the extensive knowledge of MCC’s staff to identify efficiencies and innovative solutions that can shorten the compact and threshold program development timeline while maintaining MCC’s rigorous quality standards and investment criteria. This year, MCC is seeking to implement proposed innovations around making the compact development process more efficient, among other challenge areas.
7.2 Did the agency have policies, processes, structures, or programs to promote innovation to improve the impact of its programs?
  • MCC’s approach to development assistance hinges on its innovative and extensive use of evidence to inform investment decisions, guide program implementation strategies, and assess and learn from its investment experiences. As such, MCC’s Office of Strategic Partnerships offers an Annual Program Statement (APS) opportunity that allows MCC divisions and country teams to tap the most innovative solutions to new development issues. In FY21, the Monitoring and Evaluation division, using MCC’s APS and traditional evaluation firms, continues to pilot partnerships with academics and in-country think tanks to leverage innovative, lower cost data technologies across sectors and regions. These include:
    • using satellite imagery in Sri Lanka to measure visible changes in investment on land, to get early indications if improved land rights are spurring investment;
    • leveraging big data and cell phone applications in Colombo, Sri Lanka to monitor changes in traffic congestion and the use of public transport;
    • independently measuring power outages and voltage fluctuations using cell phones in Ghana, where utility outage data is unreliable and where outage reduction is a critical outcome targeted by the Compact;
    • using pressure loggers on piped water at the network and household levels to get independent readings on access to water in Dar es Salaam, Tanzania; and
    • using remote sensing to measure water supply in water kiosks in Freetown, Sierra Leone.
  • These innovations in evidence generation have been all the more critical in the past year, given the inability to conduct many data collection activities in person. MCC has relied on local data collection and better technology to maintain evidence generation. For example, MCC partnered with a consortium of the University of Colorado and the technology firm SweetSense Inc. to collect high-frequency monitoring data from satellite-connected sensors on water kiosks built by the Sierra Leone Threshold Program’s Water Project, using emerging and cost-effective technologies to understand the state of water service. The partnership provided significant flexibility to collaboratively shape how available technology can suit MCC’s monitoring needs, including in data-challenged environments. It also offered an example of how other MCC water projects can capitalize on similar technology tools to collect more reliable data more frequently.
  • MCC regularly engages in implementing test projects as part of its overall compact programs. A few examples include: (1) in Morocco, an innovative pay-for-results mechanism to replicate or expand proven programs that provide integrated support; (2) a “call-for-ideas” in Benin for information regarding potential projects that would expand access to renewable off-grid electrical power; and (3) a regulatory strengthening project in Sierra Leone that includes funding for a results-based financing system.
7.3 Did the agency evaluate its innovation efforts, including using rigorous methods?
  • Although MCC rigorously evaluates all program efforts, it takes special care to ensure that innovative or untested programs are thoroughly evaluated. In addition to producing final program evaluations, MCC continuously monitors and evaluates all programs throughout the program lifecycle, including innovation efforts, to determine whether mid-program course corrections are necessary. This interim data helps MCC continuously improve its innovation efforts so that they can be as effective and impactful as possible. Although about 32% of MCC’s evaluations use random-assignment methods, all of MCC’s evaluations – both impact and performance – use rigorous methods to achieve the three-part objectives of accountability, learning, and results in the most cost-effective way possible.
Use of Evidence in Competitive Grant Programs
Score: 15

Did the agency use evidence of effectiveness when allocating funds from its competitive grant programs in FY21?

8.1 What were the agency’s five largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
  • MCC awards all of its agency funds through two competitive grant programs: (1) the compact program ($651.0 million in FY21; eligible grantees: developing countries) and (2) the threshold program ($31 million in FY21; eligible grantees: developing countries).
8.2 Did the agency use evidence of effectiveness to allocate funds in the five largest competitive grant programs? (e.g., Were evidence-based interventions/practices required or suggested? Was evidence a significant requirement?)
  • For country partner selection, as part of the compact and threshold competitive programs, MCC uses 20 different indicators within the categories of economic freedom, investing in people, and ruling justly to determine country eligibility for program assistance. These objective indicators of a country’s performance are collected by independent third parties (a minimal sketch of this screening logic appears after this list).
  • When considering a second compact, MCC further considers whether countries have (1) exhibited successful performance on their previous compact; (2) improved Scorecard performance during the partnership; and (3) exhibited a continued commitment to furthering their sector reform efforts in any subsequent partnership. As a result, the MCC Board of Directors applies an even higher standard when selecting countries for subsequent compacts. Per MCC’s Compact Development Guidance (p. 6): “As the results of impact evaluations and other assessments of the previous compact program become available, the partner country must use this data to inform project proposal assessment, project design, and implementation approaches.”
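  • Illustrative screening sketch: the indicator names, peer scores, and pass rule below are simplified assumptions (a country is assumed to pass an indicator by scoring above its income peer group’s median, and to qualify by passing at least half); MCC’s actual selection rules are more detailed.

```python
# Hypothetical sketch of indicator-based eligibility screening.

from statistics import median

def passes(country: dict[str, float],
           peers: dict[str, list[float]]) -> dict[str, bool]:
    """Mark each indicator pass/fail against the peer-group median."""
    return {ind: score > median(peers[ind]) for ind, score in country.items()}

def eligible(results: dict[str, bool]) -> bool:
    """Assumed rule: pass at least half of the indicators."""
    return sum(results.values()) >= len(results) / 2

# Invented data for two of the 20 indicators.
peers = {
    "control_of_corruption": [0.2, 0.4, 0.5, 0.7],
    "girls_primary_completion": [60.0, 72.0, 80.0, 91.0],
}
country = {"control_of_corruption": 0.6, "girls_primary_completion": 85.0}

results = passes(country, peers)
print(results, "eligible:", eligible(results))
```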
8.3 Did the agency use its five largest competitive grant programs to build evidence? (e.g., requiring grantees to participate in evaluations)
  • Per its Policy for Monitoring and Evaluation (M&E), MCC requires independent evaluations of every project to assess progress in achieving outputs and outcomes and program learning based on defined evaluation questions throughout the lifetime of the project and beyond. As described above, MCC publicly releases all these evaluations on its Evidence Platform and uses findings, in collaboration with stakeholders and partner countries, to build evidence in the field so that policymakers in the United States and in partner countries can leverage MCC’s experiences to develop future programming. In line with MCC’s Policy for M&E, MCC projects are required to submit quarterly Indicator Tracking Tables showing progress toward projected targets. 
8.4 Did the agency use evidence of effectiveness to allocate funds in any other competitive grant programs (besides its five largest grant programs)?
  • MCC uses evidence of effectiveness to allocate funds in all its competitive grant programs as noted above. 
8.5 What are the agency’s 1-2 strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?  
  • MCC’s $540 million Senegal Compact (2010-2015) funded the $170 million Irrigation and Water Resources Management Project to improve the productivity of the agricultural sector in certain agriculture-dependent areas of northern Senegal. The project rehabilitated or built 266 km of irrigation and drainage infrastructure, constructed a 450-hectare perimeter, mapped irrigated land, and trained officials to better administer land. The project was based on the theory that improved irrigation and land rights increase agricultural investment, productivity, and ultimately household income. Five years after the completion of the project, the evaluation found:
    • The irrigation infrastructure that the project built and rehabilitated remains in good condition, but routine weed clearance and dredging is not keeping pace with what is needed, which may reduce water available for farming.
  • From the evidence collected for this evaluation, MCC learned that large-scale irrigation projects, especially for smallholder farmers, may have difficulty meeting the 10% economic rate of return (ERR) hurdle rate. However, soft-side interventions, such as farmer trainings, and a strong focus on the market can boost farm incomes and the ERR. MCC is applying this lesson by supporting farmer services in Niger.
8.6 Did the agency provide guidance which makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity-building efforts?
  • As described above, MCC develops a Monitoring & Evaluation (M&E) Plan for every grantee, which describes the independent evaluations that will be conducted, the key evaluation questions and methodologies, and the data collection strategies that will be employed. As such, grantees use program funds for evaluation.
  • MCC’s Policy for Monitoring and Evaluation stipulates that the “primary responsibility for developing the M&E Plan lies with the MCA [grantee] M&E Director with support and input from MCC’s M&E Lead and Economist. MCC and MCA Project/Activity Leads are expected to guide the selection of the indicators at the process and output levels that are particularly useful for management and oversight of activities and projects.” The M&E policy is intended primarily to guide MCC and partner country staff decisions to utilize M&E effectively throughout the entire program life cycle in order to improve outcomes. All MCC investments also include M&E capacity-building for grantees.
Use of Evidence in Non-Competitive Grant Programs
Score: 10

Did the agency use evidence of effectiveness when allocating funds from its non-competitive grant programs in FY21?

  • MCC does not administer non-competitive grant programs.
Repurpose for Results
Score: 8

In FY21, did the agency shift funds away from or within any practice, policy, interventions, or program that consistently failed to achieve desired outcomes?

10.1 Did the agency have policy(ies) for determining when to shift funds away from grantees, practices, policies, interventions, and/or programs that consistently failed to achieve desired outcomes, and did the agency act on that policy?
  • MCC has established a Policy on Suspension and Termination that lays out the reasons for which MCC may suspend or terminate assistance to partner countries, including if a country “engages in a pattern of actions inconsistent with the MCC’s eligibility criteria,” by failing to achieve desired outcomes such as: 
    • A decline in performance on the indicators used to determine eligibility;
    • A decline in performance not yet reflected in the indicators used to determine eligibility; or
    • Actions by the country which are determined to be contrary to sound performance in the areas assessed for eligibility for assistance, and which together evidence an overall decline in the country’s commitment to the eligibility criteria.
  • Of the 62 compact selections made by MCC’s Board of Directors, including regional compacts, 15 have had their partnerships or a portion of their funding ended due to concerns about the country’s commitment to MCC’s eligibility criteria or a failure to adhere to responsibilities under the compact. MCC’s Policy on Suspension and Termination also allows MCC to reinstate eligibility when countries demonstrate a clear policy reversal, a remediation of MCC’s concerns, and an obvious commitment to MCC’s eligibility indicators, including achieving desired results.
  • In a number of cases, MCC has repurposed investments based on real-time evidence. In MCC’s first compact with Lesotho, MCC cancelled the Automated Clearing House Sub-Activity within the Private Sector Development Project after monitoring data showed that it would not accomplish the economic growth and poverty reduction outcomes envisioned during compact development. The remaining $600,000 in the sub-activity was transferred to the Debit Smart Card Sub-Activity, which targeted expanding financial services to people living in remote areas of Lesotho. In Tanzania, the $32 million Non-Revenue Water Activity was re-scoped after final design estimates for two of the activity’s infrastructure investments indicated higher costs that would significantly impact their economic rates of return. As a result, $13.2 million was reallocated to the Lower Ruvu Plant Expansion Activity, $9.6 million to the Morogoro Water Supply Activity, and $400,000 to other environmental and social activities. In all of these examples, funding was either reallocated to activities with continued evidence of results or returned to MCC for investment in future programming.
10.2 Did the agency identify and provide support to agency programs or grantees that failed to achieve desired outcomes?
  • For every investment in implementation, MCC undertakes a Quarterly Performance Review with senior leadership to review, among other items, the quarterly Indicator Tracking Tables. If programs are not meeting evidence-based targets, MCC undertakes mitigation efforts to work with the partner country and program implementers to achieve desired results. These efforts are program- and context-specific but can take the form of increased technical assistance, reallocated funds, and/or new methods of implementation. For example, MCC reallocated funds in its compact with Ghana after the country failed to achieve agreed-upon policy reforms to ensure the sustainability of the investments. Upon program completion, if a program does not meet expected results targets, MCC works to understand and memorialize why and how this occurred, beginning with program design, the theory of change, and program implementation. The results and learning from this inquiry are published through the country’s Star Report.
  • MCC also consistently monitors the progress of compact programs and their evaluations across sectors, using the learning from this evidence to make changes to MCC’s operations. For example, in Côte d’Ivoire, MCC is currently implementing the Abidjan Transport Project that builds on critical learning from 16 completed roads projects. MCC learned that projects must be selected based on a complete road network analysis and that any transport program must address policy and institutional issues in the transport sector up front to ensure sustainability of road investments. As such, the Abidjan Transport Project will focus on the rehabilitation of up to 32 kilometers of critical roadway and adjoining infrastructure in the central corridor of Abidjan, and will invest in educational and training resources for road asset management, develop road asset and safety resources and management tools, and develop mechanisms to support more efficient use of road maintenance funds. 
  • In Morocco, MCC is implementing a Workforce Development Activity that builds on the results and learning from 11 completed technical and vocational education and training (TVET) investments. MCC synthesized learning from past TVET programs and concluded that MCC’s TVET investments should have two primary goals: placing graduates in higher-income jobs and supplying the private sector with in-demand skills. Based on this learning, the Morocco Compact’s Workforce Development Activity aims to increase the quality and relevance of TVET by supporting private-sector driven governance as well as construction/rehabilitation of 15 training centers, together with targeted investments in policy reform of the sector. This activity is also investing in improvements to job placement services through a results-based financing mechanism as well as improvements to the availability and analysis of labor market data.