OpenUp is currently engaged in a few projects involving generative Artificial Intelligence (AI). I have spent a few years working in AI ethics and policy, and have always been frustrated by how discussions centre on what good AI development should look like, without a realistic view of how AI development actually happens in practice. As leaders in responsible African tech, OpenUp is trying to provide inputs and frameworks that can change the actual practice of how technology is built, for the better.
When we talk about AI ethics, we are generally referring to the moral principles and guidelines that govern the development and use of AI to ensure it benefits society (or the “furtherance of human wellbeing”) while minimising potential harms. Traditionally, this means considering how fairness, transparency, privacy, accountability and inclusivity can be advanced in the development of AI, and in its outcomes and impacts too.
Yet, as we have been engaging with multiple stakeholders in our current genAI projects, I have been repeatedly reminded that the true foundation for the pursuit of AI ethics lies in decisions and decision-making.
Principles may guide these decisions (or, where things go awry, they may not), but it is decision-making that ultimately produces an ethical AI process and output.
Which server you use, which model you engage, whether you use AI-as-a-service, how you conduct testing, how you manage prompt engineering, and so on, are not just technical decisions - they are decisions which practically influence whether values like equity, inclusivity and fairness are actually pursued.
And one of the significant challenges to ethical decision-making is power, and unequal power relations. Often, these power inequities are not as clear as people think. In the social impact space, for instance, the unequal power relations between funder and grantee are sometimes more opaque to implementers than the more obvious power imbalances between investors and startups, or clients and service providers. But these power imbalances become the main factor in how decisions on a technology’s path are taken - especially when there are competing values at play.
In one of our projects, the power imbalance between grantee and funder has been the true arbiter of the ultimate AI decisions; in another, it is the power imbalance between the AI developers and the eventual “subjects” of those technologies that better highlights these problems. The power dynamics in many civic technology projects are complex: relationships exist between AI developers and funders; government and beneficiaries; beneficiaries and AI developers; funders and government; and so on. Yet many AI innovators have not been encouraged to think about political and power imbalances very directly, nor about how to negotiate these challenges as part of the AI development process.
Power dynamics must be incorporated into our understanding of AI ethics. And if, as I propose, decision-making is the foundation of ethics, then all the stakeholders in an AI development process should be thoughtful in how they structure decision-making in their AI project, as a direct mechanism for channelling AI ethical values into the actual building of AI. This is why openness continues to be one of the critical values OpenUp uses to advance both its work and its own organisational structuring, because this openness can help unpack decision-making influences, and ultimately advance accountability for the decisions being taken.
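To make this concrete, one illustrative way to structure decision-making is to keep an explicit record of each significant decision: the options considered, the values at stake, who was consulted, and who actually held the final say. The sketch below is a minimal, hypothetical example in Python, loosely adapted from the architecture decision record (ADR) practice used in software teams; all field names and the example content are assumptions for illustration, not an OpenUp framework.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionRecord:
    """One logged decision in an AI project - an ADR-style record
    extended with fields that surface values and power dynamics."""
    title: str                         # e.g. "Choice of hosting region"
    decided_on: date
    decision: str                      # what was actually chosen
    options_considered: list[str]
    values_at_stake: list[str]         # e.g. fairness, privacy, inclusivity
    stakeholders_consulted: list[str]
    decision_holder: str               # who had the final say (makes power visible)
    rationale: str = ""
    dissent: list[str] = field(default_factory=list)  # recorded disagreement

# Hypothetical example: logging a model-selection decision so the
# influence of each party, including the funder, is open to review.
record = DecisionRecord(
    title="Selection of base language model",
    decided_on=date(2024, 5, 1),
    decision="Open-weights model hosted locally",
    options_considered=["Proprietary API", "Open-weights model hosted locally"],
    values_at_stake=["privacy", "inclusivity", "accountability"],
    stakeholders_consulted=["developers", "funder", "community representatives"],
    decision_holder="project steering group",
    rationale="Keeps beneficiary data in-country and auditable.",
    dissent=["Funder preferred the proprietary API for speed of delivery."],
)
print(record.title, "-", record.decision_holder)
```

Even a lightweight log like this makes visible who shaped each decision and which values were weighed, which is exactly where openness can advance accountability.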
Stay tuned to OpenUp, as we will start providing more practical frameworks and advice to advance your responsible innovation activities.