The specific vulnerabilities of AI in democracy


And how to try and manage them.

How do we design for an ethical AI future in the Global South? The global conversation on AI frequently raises the potential of “ethics”. And whilst there are some critiques about the validity of ethics frameworks (mostly around whether non-obligatory ethics are sufficient to mitigate the true risks and harms of AI), it is clear to many in the social technology space that development must be driven by harm-conscious values. Yet as calls grow in policy circles to implement ethics through privacy and other assessments during the development process, there are few frameworks that show technologists how to actually do this kind of development in practice.

OpenUp recently designed and applied a risks and harms framework to an AI chatbot being considered for use in public participation processes with the South African government.

In partnership with Stellenbosch University, OpenUp recently designed and applied a risks and harms framework to an AI chatbot being considered for use in public participation processes with the South African government. Building on work I have done previously overseeing the ethics of implementing AI projects in the development space, we explored an assessment targeted specifically at AI in political contexts. Here’s what we learned.

Start with Where 

The first step is to emphasise context, as it allows a more considered perspective on potential risks and harms. In South Africa, for instance, we have stark digital inequality and significant gaps in digital capacity, capped by the particular reality that fewer than 20% of South African Internet users report using e-government services. This context must be understood to gain a realistic perspective both on potential blockers to implementation and on how AI will be deployed in practice.

...context must be understood to gain a realistic perspective both on potential blockers to implementation and on how AI will be deployed in practice.

Move to What If

There are also particular AI risks and harms you should foreground when considering your design and implementation. The difficulty is that potential risks cannot be exhaustively listed, but here are some of the more realised risks we have seen in AI projects in political contexts:

  • Algorithmic bias: biases in data and algorithm design mean biased outcomes in the implementation of AI. As the old adage in software development goes, “Garbage In, Garbage Out”: the chronic under-representation of certain groups in datasets perpetuates inequalities in AI outputs.
  • Hallucinations: AI hallucinations occur when large language models generate information that appears plausible but is factually incorrect, fabricated, or unsupported by reliable evidence, due to limitations in training data, probabilistic prediction methods, or a lack of external verification mechanisms. This is a significant challenge for chatbots in particular, which perform an agent role. Systematic testing of chatbots built on both ChatGPT and Llama in a clinical setting has shown alarming rates of hallucination.
  • Exclusion: Sometimes a result of bias, but also of the processes around the implementation of AI, embedding such technologies might exclude vulnerable groups from essential (or beneficial) services.
  • Personal data protection: AI participation platforms may have implications for data protection, as they will inevitably engage with personal information from citizens both at the point of registration and during exchanges with chatbots. This is exacerbated by the fact that data input to AI-as-a-service offerings typically becomes part of the model’s training data.
  • Transparency and accountability: the use of proprietary algorithms has ramifications for transparency, as they are ordinarily not developed within an open-source framework. A key difference between rule-based automated decision-making and AI is that rule-based systems make for greater transparency; with AI, a comprehensive view at the level of the individual “decision” is even more difficult to obtain.
  • Accountability and recourse: Clarity on who is responsible for outputs is made even more complicated in AI deployment contexts where people may be implementing on “behalf” of the government.
  • Surveillance: Surveillance is a significant risk area with the utilisation of AI in vulnerable political contexts. There is the risk of ‘dataveillance’ from the state, but also the increased exploitation of data from the private sector in the pursuit of ‘surveillance capitalism’. Both dimensions of surveillance lead to risks for the data subject and civic technology participant. 
  • Technology reliance: Introducing technologies into previously in-person contexts in the Global South can introduce unreliability into important systems and processes. This can stem from infrastructure challenges (unreliable electricity and non-uniform Internet access) that are not buttressed by “offline” processes. This reliance may also lead to political-economic challenges, with public sector dependence on proprietary technologies leading to ‘vendor lock-in’ that undermines economic efficiencies in public service delivery.

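None of these risks can be assessed in the abstract; they need to be written down against the specific deployment. As a purely illustrative sketch (this is not part of the OpenUp framework; the category names, scoring scale, and example entry below are assumptions invented for the illustration), a team might capture the categories above in a simple risk register:

```python
from dataclasses import dataclass, field

# Illustrative risk categories, drawn from the list above.
CATEGORIES = [
    "algorithmic_bias", "hallucinations", "exclusion",
    "personal_data_protection", "transparency", "accountability",
    "surveillance", "technology_reliance",
]

@dataclass
class RiskEntry:
    category: str            # one of CATEGORIES
    description: str         # how the risk shows up in this deployment
    likelihood: int          # 1 (rare) to 5 (almost certain) -- assumed scale
    impact: int              # 1 (negligible) to 5 (severe) -- assumed scale
    mitigations: list = field(default_factory=list)

    @property
    def score(self) -> int:
        # Simple likelihood x impact score; a real framework may weight differently.
        return self.likelihood * self.impact

register = [
    RiskEntry(
        category="personal_data_protection",
        description="Citizen messages to the chatbot may contain personal information",
        likelihood=4,
        impact=4,
        mitigations=["redact identifiers before any third-party model call",
                     "POPIA impact assessment"],
    ),
]

# Review the highest-scoring risks first.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"{entry.category}: score {entry.score} - {entry.description}")
```

The scoring itself matters less than the discipline it imposes: for each category above, record how the risk shows up in this particular deployment and which mitigation answers it.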
Any good organisational review also needs to understand the risks that come with compliance. Any deployed project will need to fully understand its legal and regulatory environment: in South Africa, for instance, what are the impacts of the Protection of Personal Information Act?

Who is going to maintain this tool in the future, and how might this change the risk profile of the tool...

This is also forward-looking. It is important that you consider the true “risks” of your potential deployment with a proper understanding of its sustainability, and what this means for the tool’s ultimate business model. Who is going to maintain this tool in the future, and how might its risk profile change when that happens?

Technical Issues Should not Obfuscate

When considering risks and harms, you need to ensure you have sufficient technical capacity to review objectively (and realistically) the technology’s processes, architecture and efficiencies. You need to know at which point a “human in the loop” will maximise the protections available to users. This includes understanding enough to weigh the technical and risk differences between the various AI-as-a-service and self-hosted AI options. A proper technical understanding ensures that technical complexity does not obscure where the real risks lie.

You need to know at which point a “human in the loop” will maximise the protections available to users.
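As a minimal sketch of what specifying that point might look like, the example below is an assumption-laden illustration rather than a description of any real deployment: the topic list, confidence threshold, and function names are all invented for the purpose.

```python
# Illustrative "human in the loop" gate for a participation chatbot.
# SENSITIVE_TOPICS, CONFIDENCE_THRESHOLD and all function names are assumptions.

SENSITIVE_TOPICS = {"legal advice", "personal data request", "complaint escalation"}
CONFIDENCE_THRESHOLD = 0.75

def needs_human_review(topic: str, model_confidence: float) -> bool:
    """Decide whether a drafted chatbot reply must be held for a human official."""
    return topic in SENSITIVE_TOPICS or model_confidence < CONFIDENCE_THRESHOLD

def queue_for_official(message: str, draft: str) -> str:
    # In a real system this would write to a case-management queue;
    # here we simply acknowledge that a person will respond.
    return "Thank you - your message has been passed to an official for a response."

def handle_citizen_message(message: str, generate_reply) -> str:
    # generate_reply is a placeholder for whatever model call the project uses;
    # it is assumed to return a draft reply, a topic label, and a confidence score.
    draft, topic, confidence = generate_reply(message)
    if needs_human_review(topic, confidence):
        return queue_for_official(message, draft)
    return draft
```

The design point worth noting is that the gate is explicit and auditable: the conditions under which a human takes over are written down, rather than left implicit in the model’s behaviour.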

Proportionality

In our experience, one of the most under-explored aspects of AI risk assessments is “proportionality”. This is an idea developed from law: an evaluation of whether deploying an AI system is justified, weighing its costs, benefits, and risks as a whole to ensure the solution is appropriate for the problem it aims to solve. The test involves examining whether the scale, scope, and invasiveness of the AI deployment is balanced relative to the issue being addressed, considering direct costs (such as environmental impacts from compute power, and monetary expenses), efficiency in meeting stated objectives, and whether less restrictive or less risky alternatives could achieve the same results. Essentially, it asks: "Are we using a machete for a filet mignon?" It requires demonstrating that the AI solution's benefits outweigh its potential harms and that no less risky method could accomplish the same goals effectively. The most important thing to centre is that, all things being equal, there is a significant environmental cost to deploying AI that must always be considered whenever AI is being developed or deployed.

The test involves examining whether the scale, scope, and invasiveness of the AI deployment is balanced relative to the issue being addressed...

What proportionality also requires of responsible technologists, then, is a very clear understanding of the purpose of their intervention, and a realistic measurement of its implementation against a baseline. We need to be able to determine whether what we are doing can be effective, and whether it is in fact effective; if this is not measured throughout the development process, proportionality cannot be established!
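As a purely hypothetical sketch of what that measurement might look like (the metric names and figures below are placeholders, not project data), the comparison against a baseline can be made very plainly:

```python
# Compare observed results against the pre-deployment baseline.
# All names and numbers here are invented placeholders.

baseline = {
    "submissions_per_month": 120,     # participation before the chatbot
    "median_response_days": 14,
    "cost_per_submission_zar": 85.0,
}

observed = {
    "submissions_per_month": 150,     # measured during the pilot
    "median_response_days": 9,
    "cost_per_submission_zar": 70.0,  # should include compute and energy costs
}

def relative_change(before: float, after: float) -> float:
    """Fractional change against the baseline (positive = increase)."""
    return (after - before) / before

for metric in baseline:
    change = relative_change(baseline[metric], observed[metric])
    print(f"{metric}: {change:+.0%} vs baseline")
```

If the gains are marginal while the risks and costs (including the environmental costs) are substantial, the proportionality test argues against deployment.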

Lessons Learned

  • Risks and harms (arising from AI, but also from other technologies) differ based on context: both the thematic context and the country context. This is why development by those deeply familiar with the communities in which a tool will be deployed is ultimately advantageous to outcomes.
  • AI should be developed with a full plan in place for its ultimate sustainability, because this impacts risks and harms. The absence of a clear plan should be treated with immense caution.
  • In most contexts currently, AI will not replace the importance, or even efficacy, of human action. Project plans should therefore clearly define the point at which AI and human roles diverge, in order to prevent systematic reliance on AI.
  • Measure the efficacy of an AI deployment by the gains it provides, which can only be established with a clear understanding of its purpose.
  • Proportionality highlights how the deployment of ethical AI requires a considered and nuanced approach to reflecting on its development.

This project was implemented by OpenUp, on behalf of the GIZ project “Data2Policy” commissioned by the Federal Ministry for Economic Cooperation and Development (BMZ).
