Impact Measurement Practices for Civil Society Organisations

Reflections on the use of research tools to enhance organisations’ social impact.

Though civil society organisations are increasingly interested in tracking and measuring their social impact, many still struggle to do so effectively. This is a problem because better social results could be achieved at current costs, if only interventions were systematically assessed and resources were reallocated to the initiatives proven to be most impactful. Achieving this would take impact measurement, evidence-based decision-making, and pivoting to more effective interventions when the evidence tells you to - agile thinking, here we go.

These practices, however, are difficult to implement. Evaluations tend to be expensive and time-consuming, and rarely fit most organisations’ budgets. Civil society organisations still struggle to correctly identify impact, and many still see this kind of investment as a waste of resources that would be better channelled into expanding programs and operations.

After eight weeks at OpenUp, I discovered that we are increasingly committed to keeping track of our projects’ impacts. I also identified some of the challenges that seem to be common to most organisations that make such a commitment. Based on those, this article points to some strategies I believe could make impact measurement easier and more common practice in civil society organisations.

  1.  Application of problem diagnosis methods

Let’s start with problem diagnosis. When working with social development, there is a temptation to focus efforts on designing solutions to social problems - which leaves little time to deeply understand the targeted problem. However, if the root causes of the problem are not correctly identified, the proposed solutions may not be well suited to the specific context.

Reallocating some of your time from solution design to problem diagnosis increases your chances of coming up with effective initiatives from the start. This practice will not improve your evaluation directly, but it may reduce your chances of spending resources on solutions that are ineffective - which is exactly what evaluation should prevent.

The Smart Policy Design and Implementation (SPDI) and the Problem Driven Iterative Adaptation (PDIA) methods are good frameworks to support you in developing your own problem diagnosis. Applying them before designing and evaluating solutions can increase your chances of picking the right social intervention.

  2.  Correct identification of impact

When time and capacity permit, civil society organisations tend to mobilise resources to collect data on their beneficiaries, comparing outcomes before and after exposure to their services or products. These before-and-after comparisons are frequently reported as the organisation’s or project’s impact. Although they represent an admirable step towards evaluation, they can be misleading: in a pre-post comparison, the change caused by the intervention is mixed up with changes due to other circumstances that affected beneficiaries’ outcomes over the same period.

Organisations should be able to identify which kinds of comparisons constitute impact measures. To date, the standard procedure to evaluate impact involves comparing two groups that are very similar on average, except for participation in the program. The difference in outcomes between these two groups is expected to reflect the sole difference between them - i.e., access to the program.

These comparable groups are usually achieved through randomisation of access to the intervention, though there are other ways to attain comparability. I will not delve into these concepts in this article, but, if they are new to you, it could be useful to do some quick research on randomised controlled trials. As a final takeaway, keep in mind that other factors may always be at play, beyond your organisation’s social intervention.
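To make the distinction concrete, here is a minimal sketch in Python. All names and numbers are invented for illustration: an external circumstance improves everyone’s outcome, and the intervention adds a further effect only for participants.

```python
import random
from statistics import mean

random.seed(0)

EXTERNAL_TREND = 5   # hypothetical change affecting everyone
TRUE_IMPACT = 3      # hypothetical effect of the intervention itself

# Baseline outcome scores for 1,000 hypothetical people.
people = [random.gauss(50, 10) for _ in range(1000)]

# Randomise access: half receive the intervention, half do not.
random.shuffle(people)
treated, control = people[:500], people[500:]

after_treated = [p + EXTERNAL_TREND + TRUE_IMPACT for p in treated]
after_control = [p + EXTERNAL_TREND for p in control]

# Naive pre-post comparison on participants alone: the external
# trend is attributed to the intervention (estimate is 8, not 3).
pre_post = mean(after_treated) - mean(treated)

# Treatment-control comparison: the external trend affects both
# groups equally, so the difference isolates the intervention
# (estimate is close to 3, up to sampling noise).
impact_estimate = mean(after_treated) - mean(after_control)

print(f"pre-post estimate: {pre_post:.1f}")
print(f"treatment-control estimate: {impact_estimate:.1f}")
```

The design choice here is the whole point: the comparison group absorbs everything that would have happened to beneficiaries anyway, which a pre-post comparison cannot do.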

  3.  Good practices for impact evaluation

Running experiments with comparable groups can be extremely expensive and will not fit the budgets of most organisations. There are, however, practices which could be routinely implemented to make evaluations simpler and cheaper. These practices involve keeping evaluation on the radar when decisions are made, so that organisations are prepared for future impact-measurement partnerships.

They include, but are not limited to:

  • Randomising access or exposure to the intervention/service across interested people or potential beneficiaries whenever possible;
  • Collecting data on both beneficiaries and a group of similar people - for example, people who signed up to participate but could not join due to lack of available slots;
  • Conducting periodic workshops on basic strategies for obtaining comparable groups for evaluations, focused on training the team to spontaneously identify and invest in these ideal contexts;
  • Collecting secondary data (i.e., provided by other sources, not collected by the organisation) available on the intervention’s theme and problem;
  • Developing and researching low-cost methodologies for your own organisation, such as A/B testing for digital initiatives;
  • Setting up digital and automated data collection, which, if configured correctly, can reduce the resources and cost needed to collect impact measurement data.
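The first two practices above can be sketched in a few lines of Python. The function name, the seed, and the applicant list are all hypothetical; the point is that when slots are limited, a random draw produces both your beneficiaries and a natural comparison group at no extra cost.

```python
import random

def randomise_slots(applicants, slots, seed=42):
    """Randomly allocate limited program slots, keeping the
    unsuccessful applicants as a natural comparison group."""
    rng = random.Random(seed)  # fixed seed keeps the draw auditable
    pool = list(applicants)
    rng.shuffle(pool)
    # First `slots` entries get access; the rest form the comparison group.
    return pool[:slots], pool[slots:]

applicants = [f"applicant_{i}" for i in range(120)]
beneficiaries, comparison = randomise_slots(applicants, slots=80)

# Collect the same outcome data for both groups from day one,
# so a future evaluation can compare them directly.
print(len(beneficiaries), len(comparison))
```

Recording the seed (or the full allocation) matters: it lets an outside evaluator verify later that access really was random.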

If these practices are taken seriously, it becomes much easier, for instance, to connect with academics who are looking for projects to evaluate - that is, qualified people who could be willing to conduct assessments at no charge.

  4.  Use of existing evidence to inform decisions

In some cases, it will be impossible to gather evidence from experiments. That is where existing evidence on similar interventions may be helpful. Luckily, such evidence is readily available for many social issues, and it can serve as guidance for decision-making within organisations.

Multiple academic articles are focused on evaluating social interventions. I plan to write a second article with a list of sources that may be useful for organisations.

  5.  Advocacy for more resources dedicated to impact assessment

Last but not least, we need to create a culture that values monitoring and evaluation. We need investors and donors demanding assessments of the projects they fund. What may seem costly today can lead to greater impact from the same investment in the future. We need organisations to take the first step and prioritise these matters. We also need to share our processes and results with our civil society networks, so that others can learn from our mistakes and successes. Join us in this new movement of evidence-based decision-making!

