How to measure the impact of OpenUp’s Civic Tech Tools

At OpenUp, we build tools for change. We don't create tech for tech's sake; instead, we believe tech can be used (and created) to do good, but to do this you have to approach it with intent.

One way to make sure your tech does what you want it to do is to measure the impact it is making. But how might an organisation go about doing this? Here is a quick FAQ introducing our approach to measuring the impact (and change) our tools bring to the world:

Why is impact measurement important?

Measuring the impact of civic tech tools matters to people both inside and outside the organisation. There are three main reasons for doing it. Firstly, to inform long-term internal decisions (e.g. where we should spend our time) by understanding how much impact a tool has and how this compares to other tools. Secondly, to drive improvements in tools by understanding how that impact is generated. Lastly, to communicate the impact of the tools to partners more clearly. While it may not be easy, measuring impact is essential.

What are the challenges?

Impact measurement in the Civic Tech space needs to focus on changes in end outcomes, not just on traditional tech engagement metrics. Traditional short-term “online” measures of success (website hits or sign-ups) do not necessarily align with long-term “offline” measures of impact. Instead, we must understand how tools are being used, and by whom, to understand their impact.

Causality is hard to prove in the environments where Civic Tech projects operate. The connection between a tool’s effect on its users and its effect on society involves a series of steps between participation and the end outcome, and those steps are often unclear and play out over a long period of time. We often care about the impact a tool is having on a whole system, which is inherently complicated to understand.

Organisational resources for impact measurement are typically limited. Funding is often secured for specific projects, and civic tech organisations tend to have little dedicated monitoring and evaluation capacity, which leaves limited opportunity to evaluate the impact of their tools.

A framework for thinking about impact measurement

To understand the impact of a tool you need to be able to measure three things: who is using it (and what for), how many of them are using it, and how effective the tool is for them. Each of these components is essential, and each likely requires a different approach and data source. Any process to measure the impact of a Civic Tech tool will rest on assumptions and be imperfect; however, we believe this framework offers a practical starting point.
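
To make the three components concrete, here is a minimal sketch of the summary you might end up with per user group. We use TypeScript for illustration throughout this post; the shape and field names below are our own invention, not a fixed schema:

```typescript
// One row of an impact summary per user group. Illustrative only:
// each field maps to one of the three components, and the comments
// note the kind of data source each one typically needs.
type SegmentImpact = {
  segment: string;        // who: e.g. "journalist" (from a survey or pop-up)
  purpose: string;        // what for: the task users report doing with the tool
  estimatedUsers: number; // how many: analytics totals x the segment's share
  effectiveness: number;  // how effective: share who achieved their goal (0-1)
};
```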

A process for implementing impact measurement

At OpenUp we have implemented the following steps on some of our projects in an attempt to measure impact:

  1. Define and validate the groups of people using your tools: Defining your user groups (e.g. civil servants, academics) can be based on your theory of change: who the tool is designed for and how they will use it. To validate this we’ve implemented “exit intent” pop-ups on some of our tools (which appear when you are about to leave the site), where we ask users simple questions such as “Who are you?” (a minimal sketch of such a pop-up follows this list). This data lets us check our assumptions about who our users actually are.
  2. Understand how many users of each group you have: Using Google Analytics it is easy to determine the total amount of traffic visiting your site. This is often where impact measurement begins and ends. However, by combining this aggregate data with the data from our pop-ups we can go further and say, for example, “20% of our total users are civil servants”.
  3. Understand the effectiveness of your tool for each user segment: Within our pop-ups we also ask users a few simple questions about whether they were satisfied with the tool and whether they were able to achieve their goal. We use this information as a guide to the effectiveness of our tools for different types of users.
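
As promised above, here is a minimal browser-side sketch of an exit-intent pop-up in TypeScript. The trigger (the cursor leaving through the top of the viewport) is a common heuristic for a user who is about to leave; the question text, answer options and the /api/survey endpoint are hypothetical placeholders rather than the exact code running on our tools:

```typescript
// Minimal exit-intent survey: show a "Who are you?" prompt once,
// when the cursor leaves through the top of the viewport (a common
// proxy for the user being about to close the tab).
const SEGMENTS = ["Civil servant", "Journalist", "Academic", "Other"];
let surveyShown = false;

function recordResponse(segment: string): void {
  // Hypothetical collection endpoint; swap in your own analytics sink.
  navigator.sendBeacon(
    "/api/survey",
    JSON.stringify({ segment, page: location.pathname })
  );
}

function showSurvey(): void {
  if (surveyShown) return; // ask each visitor at most once
  surveyShown = true;
  const box = document.createElement("div");
  box.setAttribute("role", "dialog");
  const prompt = document.createElement("p");
  prompt.textContent = "Before you go: who are you?";
  box.appendChild(prompt);
  for (const segment of SEGMENTS) {
    const btn = document.createElement("button");
    btn.textContent = segment;
    btn.addEventListener("click", () => {
      recordResponse(segment);
      box.remove();
    });
    box.appendChild(btn);
  }
  document.body.appendChild(box);
}

// Fire when the pointer leaves the window through the top edge.
document.addEventListener("mouseout", (e: MouseEvent) => {
  if (e.relatedTarget === null && e.clientY <= 0) showSurvey();
});
```

Touch devices have no cursor to track, so on mobile a timer or scroll-depth trigger is a common substitute; that choice is an assumption of this sketch, not something our process prescribes.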

What have we found?

As we’ve experimented with this approach to impact measurement, we’ve begun to learn about both the impact our tools generate and the process through which we can measure it.

What we’ve learnt about impact:

  • Our Wazimap tool has a wide range of users, including civil servants, private sector employees, citizens, academics, civil society employees, journalists, students and elected officials
  • The tool is effective (users report achieving their goal) for about ⅔ of users overall, but for only about ⅓ of journalists (see the sketch after this list for how figures like these are derived)
  • The tool is significantly more effective on desktop compared to mobile devices
  • We have also explored additional questions, such as which features are the most useful
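
Figures like those above come from combining the pop-up responses with an analytics total, as in steps 2 and 3 of our process. Here is a hedged sketch of that arithmetic, producing rows like the SegmentImpact shape sketched earlier; the sample data, numbers and helper names are invented for illustration, and the calculation assumes pop-up respondents are roughly representative of all visitors:

```typescript
// Estimate segment sizes and per-segment effectiveness by combining
// survey responses with a visitor total from analytics.
type SurveyResponse = { segment: string; achievedGoal: boolean };

function summarise(responses: SurveyResponse[], totalVisitors: number) {
  const bySegment = new Map<string, { n: number; achieved: number }>();
  for (const r of responses) {
    const s = bySegment.get(r.segment) ?? { n: 0, achieved: 0 };
    s.n += 1;
    if (r.achievedGoal) s.achieved += 1;
    bySegment.set(r.segment, s);
  }
  return [...bySegment.entries()].map(([segment, s]) => ({
    segment,
    // e.g. "20% of our total users are civil servants"
    shareOfUsers: s.n / responses.length,
    estimatedUsers: Math.round((s.n / responses.length) * totalVisitors),
    // e.g. "effective for about 1/3 of journalists"
    effectiveness: s.achieved / s.n,
  }));
}

// Invented demo data; totalVisitors would come from Google Analytics.
const demo: SurveyResponse[] = [
  { segment: "Civil servant", achievedGoal: true },
  { segment: "Journalist", achievedGoal: false },
  { segment: "Journalist", achievedGoal: true },
];
console.log(summarise(demo, 50_000));
```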

What we’ve learnt about process:

  • People do respond to quick pop-ups: we have seen response rates of between 5% and 15%, depending on the tool and device.
  • Periodic (say, quarterly) impact measurement through pop-ups is likely sufficient; continuous assessment isn’t necessary.
  • It takes time to embed these impact measurement processes and findings into teams’ ways of working. Building a culture of measurement and learning takes more than just the data!

What’s next?

  • We plan to roll out this process across more tools.
  • We want to continue to embed these ways of working into our teams.
  • We want to fold impact measurement into a broader assessment of the performance of our tools.