Three Steps to Improving Evaluation in the Philanthropic Sector

People working in the philanthropic sector are familiar with the problems of monitoring and evaluation in our field. Funders want to ensure that their investments have an “impact,” so they define what that means and then ask their grantees to measure it. Grantee organizations need funding to do their work, so they agree to track these indicators, even when they know the indicators are a poor fit for their work or the communities they serve. Communities, in turn, are repeatedly asked to provide this data, to little or no clear benefit.

Unsurprisingly, this system rarely serves anyone well. The communities where a grantee works may not benefit at all from this evaluative work and rarely have any say over how their data is used or interpreted. Grantee organizations must report on indicators of limited use to them, often at the expense of mission-critical work, such as program implementation or the data collection that would actually serve their learning. Funders, despite dictating these measures, often come away with only a basic, superficial understanding of their funding’s impact. And the cycle continues: sensing the gap between what they think success should look like and the data they are getting, funders may even double down, dictating still more “rigorous” indicators or outcomes for their grantees to track.

We believe there is a better way.

As part of the American Evaluation Association’s 2021 annual conference, we (one member of a foundation’s learning team and staff from two community-based grantee organizations) presented our own experiences and our vision for how philanthropic evaluation can better serve all of us: communities, grantees, and funders.

We believe philanthropic evaluation can be improved through three steps that work together to flip the question of who decides what matters on its head. Let’s look at how to get there.

Funders need to lead the change by creating an enabling environment.

In traditional philanthropic evaluation, funders are the most powerful actors in the system. This means change should start with them.

Grantee organizations should not be forced to advocate for these changes on their own, taking on the work of educating those in positions of greater power while risking the loss of their own funding.

Instead, donors need to educate themselves and reflect on how traditional monitoring and evaluation practices in the philanthropic sector reinforce existing power structures.

At WomenStrong, we have progressively simplified what we ask our grantee partners to share about what they’d like to learn over the course of their grant. At the outset of our current funding model, we asked our grantee partners to complete an indicator table in Excel that followed a standard development format, requiring them to specify outputs, outcomes, data sources and methods, baselines, and targets. Realizing that this template was needlessly complex, we began asking them instead to complete a “Knowledge and Learning Plan” in Word. While simpler, this template still asked them to document in writing the learning questions they wanted to answer, their data sources and methods, and the systems they had in place to institutionalize learning.

Now, we simply ask our grantee partners to share a few learning questions they hope to answer at the start of their grant, and we host periodic group check-in calls. On these calls, partners can reflect on what they’ve learned to date, decide whether their questions are still relevant or need updating, and brainstorm how to collect the information they need to answer their latest questions, all with support from their peers.

Funders then need to deconstruct the traditional measurement expectations of the organizations they support. They should give grantees the flexibility and support they need to measure and report on the information that is most critical to their learning, in the format that best aligns with their chosen approach as well as their resources. This will likely require the elimination of mandatory indicators and a more flexible approach to reporting.

Finally, funders must ensure that they are answerable to their grantees and open to feedback on whether these changes are fulfilling their intended purpose and how funders can continue to provide better support.

Grantee organizations must use this new flexibility to engage the communities they serve more meaningfully in data collection and interpretation.

When presented with opportunities to enact more flexible evaluation approaches, grantees must be willing to think more expansively and inclusively about how to engage their communities in evaluation work. Community leaders and advocates need to be involved in conceptualizing the purpose of monitoring and evaluation activities, the methods used to collect information, and how that information is used and interpreted.

Additionally, nonprofits must ensure that they are answerable to the communities they serve, with meaningful mechanisms for receiving feedback and regular practices of sharing out how that feedback is being used.

Over the last year, Women’s Justice Initiative (WJI), a Guatemala-based organization dedicated to combating gender inequality and ending violence against indigenous girls and women in rural communities, has developed a new curriculum, incorporating an evaluation of WJI’s staff and programs, that challenges and subverts the top-down perspective of most evaluation practices. Previously, WJI evaluated the impact of its programs by prioritizing quantitative information collected through baseline and endline surveys that measured only changes in knowledge. Although quantitative data is important, WJI has learned to value qualitative data collection as well, while remaining alert to the power imbalances that can arise between practitioners and the communities they serve.

The core guiding principle of data collection should be justice, meaning that communities, as those most burdened by the data collection, should also reap the most benefits from the information gained. Justice does not have to be in tension or conflict with traditional conceptualizations of “rigor” in evaluation.

Another example of engaged data collection is the community mappings that Community Advocates (women leaders from communities where WJI has previously worked) and WJI’s field technicians conduct to gather community-level data not collected by national surveys. WJI shares the data with community leaders so that they can use it in their own work. This makes the mappings a collaborative effort and recognizes that Community Advocates and field technicians are experts in navigating communities that share their demographic profile, actively rejecting the idea that only external evaluators can do this type of work.

Communities then have the power to define “success” and hold the organizations serving them accountable.

If the first two steps are enacted, the stage should be set for community members to take a leading role in defining what “success” looks like to them.

This process of defining success must invite people who may not consider themselves evaluators into the learning process and must also honor different ways of knowing.

Projet Jeune Leader (PJL), a youth-led, community-based organization in Madagascar, realized this kind of positive systems change in 2018, when they received funding and flexibility from a donor to co-define and measure “success” with the groups they serve. They shifted from exclusively using donor-defined indicators to community-led indicators of accountability, such as transparency, trust, and responsiveness. PJL found that this shift produced multiple positive outcomes, including an enhanced ability to respond quickly and adapt their work, and a deeper understanding of the multifaceted impacts of their work, which helps drive their scale and sustainability.

Undertaking this kind of systems change requires collaboration from all actors in the philanthropic evaluation sector. But if these three steps are followed, we believe the result will be a healthier system, with benefits for all. Funders will get more nuanced, insightful perspectives on the impact of their funding. Grantee organizations will be able to freely identify learning activities that best fit their resources and objectives. Most importantly, community members will have the power to specify their most pressing challenges, as well as what “success” means to them. Ultimately, these shifts have the power to promote more meaningful learning and achieve better outcomes for the communities we all aim to serve.

###

Co-Authors:

Laura Leeson, Program Director, Projet Jeune Leader, Madagascar

Mara Steinhaus, Senior Research and Learning Specialist, WomenStrong International, USA

Andrea Tock, Monitoring and Evaluation Coordinator, Women’s Justice Initiative, Guatemala

