Whenever we discuss innovation in the Sub-Saharan context, people raise the lack of data and digital skills as an inhibitor to a “Bright Digital Future”. This framing often hides the complexity of the challenge behind broad platitudes, but it holds true across sectors even at a high level. In South Africa, the public sector has been shown to lack the digital skills necessary to perform even its current functions. Almost 70% of South African businesses surveyed said they have had to offshore digital work because they were unable to source sufficient digital talent locally. And South Africa’s weak scores in digital skills and education are cited as a direct reason the majority of people are impeded from benefiting from the “digital economy”. This is a massive challenge and a massive gap - and so when every conversation ends in “We should promote education and capacity-building in digital skills”, we are compelled to ask back: “But, how?”. How do we educate at the scale necessary to make an impact?
OpenUp are in the process of conducting a second round of training, in collaboration with the Global Partnership for Sustainable Development Data, that seeks to enhance data science skills across Africa, particularly in the fields of climate and health science. And we’re having a blast. But we’d suggest that any social impact group hoping to tackle the digital divide seriously must build real evidence on how to scale where possible - and provide real guidance on the limits of that scale, too.
Our project is already building a massive classroom. In our first cohort, we averaged about 160 attendees per session over an intense 26 sessions. And in our second cohort? We have seen over 450 participants based in 40 countries, with over 98% of them currently based in Africa. This means we’re having to push back the walls of our digital classroom, with over 200 active participants in sessions.
The simplest relationship between digitalisation and scale is that the technology itself is what allows this scale. As a comparison, when OpenUp host workshops we tend to allocate two people (a facilitator and an assistant) for groups of 10-15. Anything above that number probably needs around an additional OpenUp resource for every ten participants (and of course, depending on the complexity of the topic, we may have as many as five facilitators, as we use in our discovery workshops with a maximum of 20 participants). On simple maths, that would require about 19 OpenUppers to help us manage one CAN session… naturally, digital tools allow us to facilitate at scale more cost-effectively, and we have historically used digital training to help deliver much of our programming.
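The back-of-the-envelope staffing maths above can be sketched as a quick calculation. This is a minimal sketch under our stated workshop model (a base group of up to 15 with two staff, then one extra resource per additional ten participants); the function name and exact figures are illustrative:

```python
import math

def facilitators_needed(participants, base_group=15, base_staff=2, per_extra=10):
    """Rough staffing estimate: a facilitator and an assistant for the first
    `base_group` participants, plus one extra person per `per_extra` beyond that."""
    if participants <= base_group:
        return base_staff
    extra = math.ceil((participants - base_group) / per_extra)
    return base_staff + extra

# For a 200-person CAN session, the additional staff alone come to
# math.ceil((200 - 15) / 10) = 19 - hence roughly "19 OpenUppers".
print(facilitators_needed(200))  # 21 in total under these assumptions
```

The exact total depends on where you draw the base-group boundary, which is why the estimates above stay deliberately rough.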
The Science of Scaling
But what more substantive lessons exist on achieving scale, besides the acknowledgement that the digital acts as an important foundation? There is in fact a science to scaling. And whilst the traditional tech sector loves to talk about scale for earnings, looking at the science of scaling from the policy environment is far more useful to social impact workers trying to grow projects and programmes in contexts of incredible complexity.
Step 1? Approaching scaling as a particular site for study in your impact research.
Scaling science provides the social impact community with lessons. One of these is that - if the ultimate goal is to scale - technology should be used to promote standardisation. In the context of our CAN training, this means having readily accessible documentation, in formats that respond to the cohorts’ needs, and promoting digital documentation to standardise the actual implementation of the training as well. This is why OpenUp is exploring a consolidated digital training facility. Too much variety in implementation impedes our ability to scale to a greater number of diverse partner implementers across the region.
A further suggestion is to “...block on situations when doing experiments, just like we commonly block on individual characteristics in modern experimentation”. Essentially, we need to account systematically not just for the details of individuals, but also for situational elements (this desire to categorise the context and conditions of participants aligns with our own work on contextualising for beneficiaries). Our cohorts already possess data skills, with the majority noting they “...have been educated in relevant topics but am new to this particular topic”. By initiating our training with comprehensive user surveys, we can ensure that the implementation of our existing content will meet their needs.
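“Blocking on situations” can be illustrated with a minimal sketch: rather than pooling all survey responses, we group them by a situational variable before comparing outcomes. The data below is entirely hypothetical, and “connectivity” is just one illustrative example of a situational attribute a pre-training survey might capture:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical pre-training survey responses: each record carries a
# situational attribute alongside the individual's self-rated data skill.
responses = [
    {"connectivity": "high", "skill": 4},
    {"connectivity": "high", "skill": 5},
    {"connectivity": "low",  "skill": 2},
    {"connectivity": "low",  "skill": 3},
]

# Block on the situation: aggregate within each connectivity group,
# so situational differences are not averaged away across the cohort.
blocks = defaultdict(list)
for r in responses:
    blocks[r["connectivity"]].append(r["skill"])

block_means = {situation: mean(skills) for situation, skills in blocks.items()}
print(block_means)  # {'high': 4.5, 'low': 2.5}
```

Comparing within blocks like this keeps situational effects visible, which is the point of blocking in experimental design.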
And once scaling begins, continuous measurement is vital. Whilst OpenUp obviously collates and analyses data from a variety of sources (user surveys, platform analytics, workshop records, etc.), we are also trying to design “LEAN” data methods for our impact monitoring across all our projects. By sustaining a realistic but cohesive data practice across our organisation generally, we ultimately improve our capacity to develop scalable social impact projects in the longer term.
This is important. Looking at our participants’ own surveys, they have all identified machine learning and predictive analytics as important skills they hope to learn for their positions. Yet (and we see this through analysis of the qualitative data available in the records), it is access to basic, quality data that is continually the most significant factor impeding the implementation of their group project work. Open, quality data continues to be a chief stumbling block for our cohorts, and is also a considerable impediment to the “Bright Digital Future” we hope for. Even as we scale skills, we will simultaneously need to improve the regional data ecosystem as a whole.
Whilst we continue to monitor the potential for scale of these training interventions in the longer term, we can also look to our substantive lessons on why we should scale activities like this. A worry that always exists for us in conducting virtual training at scale is the loss of personal contact, which we feel impacts both the quality of the experience for participants and, potentially, the efficacy of the training itself. We use group work to build engagement across the cohort, but as the cohorts have grown it has become harder to keep these activities on track in partnership with our participants. However, we also provide intensive one-on-one engagement through Fellowship under the Programme. This requires resourcing, but has great results. And ensuring we have the data to monitor progress means we can meet our real goal - which is not just to scale our impacts, but also to improve the outcome of these impacts for the participants and their communities.
General Lessons
As our partners also explore their own data science training, we have some general lessons that may be of interest (though we would encourage you to get in touch so we can partner to expand these all together!):
- Make it easier for participants to participate. One way to do this with our cohort is to ensure the content is strongly tied to their work and career needs. This applies in our programmes for journalism data skills, but is of course less relevant to our youth data training or our data training with community-based organisations. However, when you lead your programme implementation with empathy and ensure you “block on situations”, you can orchestrate the best incentive structure for your digital skills community!
- Whilst everyone wants to focus on Artificial Intelligence and Machine Learning, both skills and policy conversations should lead from the growth of open, quality data - and the reinforcement of skills to work that data - as necessary foundations.
- Scale only matters if you have something of value to your community at large. This requires you to be honest about the quality of what you are delivering, supported by the data that demonstrates it. This means you need to ensure your own good data governance practice organisationally.
- In the longer term, the potential for real scaling is seriously impacted by the relationships between all the implementing partners. We believe the real progress we are seeing is significantly driven by the wonderful working relationship we have with the Global Partnership for Sustainable Development Data teams, and it is important to value those relationships.
Reach out to us through the TrainUp email to find data and digital training that can contribute to your communities.