Responsible Use of AI in Africa with Amaka Ibeji


The Center for Law and Innovation had the pleasure of hosting privacy engineer Amaka Ibeji, FIP, CIPM, CIPP/E, CISSP, in a webinar entitled ‘Responsible Use of AI in Africa: From Development to Deployment’ to ground the conversation on AI’s applications and implications in the African context. This article covers some key points from the discussion, highlighting concerns around imported machine learning models and ethical frameworks, the infrastructural challenges to comprehensive and ethical data collection, and the possibilities that await when these challenges are faced head-on with an African-centric perspective.

We discussed the ethical concerns of AI in Africa, infrastructural challenges, and how to enhance data collection and build Africa-specific datasets.

The scarcity of data across Africa, especially in sectors such as healthcare, poses a challenge to digitization and to building culturally specific, African-centered models that address key issues on the continent. At the heart of it, the key ingredient of machine learning is data. One has to ask, “What problem do I want to solve? And what data do I have to address it?” Take the healthcare sector, whose infrastructure is at varying stages of development across Africa. As an example of the infrastructure gap, Nigeria’s current energy supply crisis hampers digital data collection and digitization as a whole, pushing practitioners toward alternative methods of collection that unfortunately do not lend themselves to the use of AI.

We also discussed the concern that models developed in the Global North will force African developers, regulators, and other experts working with AI to apply mismatched and ill-fitting AI solutions. There is a need for a local understanding of context to avoid being constrained by parameters already set in faraway contexts such as Europe or the United States. We are aware of the dangers of bias, but the locally specific nature of bias is often overlooked. While gender bias, for example, is a significant and universal concern, it is layered with culturally specific dynamics created by historical conditions and by more enduring ones such as tribalism or disparities in access to education and information.

It can also help to look at other geographical contexts outside of the Global North to learn from their approaches, challenges, and solutions. One example that was brought up was a case in India from 2020, in which a newly built algorithmic system, the Family Identity Data Repository or Parivar Pehchan Patra (PPP) database, was used to determine the eligibility of welfare claimants.

Ultimately, when it comes to new technology, the excitement it may elicit needs to be balanced with a thorough risk assessment. We are grateful for the deep insights shared by our guest speaker, which enabled us to critically appraise how to approach solutions and review context-specific guardrails while promoting innovation and development. We look forward to further discussions on AI this year and will keep you updated on our upcoming events!