A keynote session delivered by Dr. Fola Adeleke, Executive Director and co-founder of the Global Center on AI Governance, challenged regulators to reconsider how artificial intelligence is introduced into investigations, compliance, and enforcement. Rather than framing AI as a neutral or technical upgrade, the session positioned AI adoption as a series of governance decisions with direct implications for public trust and legitimacy.
Drawing on his work leading the African Observatory on Responsible AI, Adeleke argued that the most consequential choices about AI are made well before systems go live. Procurement, data selection, model design, and oversight structures shape outcomes long before regulators see operational benefits. As he put it, “Responsible AI in regulation is a governance choice, and most of these choices are made long before a system is deployed.”
The session emphasized that regulators operate under a distinct public interest mandate, where the consequences of error are not symmetrical. AI-supported decisions can affect investigations, licensing timelines, and enforcement actions, raising the stakes for accuracy and fairness. “The cost of a false positive is not just a bad recommendation. It can be a wrongful investigation, a delayed licence, an unjust sanction,” Adeleke noted, underscoring why regulatory uses of AI cannot be held to the same standard as consumer-facing technologies.
Adeleke also warned of the risks posed by opaque or unchallengeable systems. When AI influences regulatory outcomes without clear explanation or recourse, institutional legitimacy is at risk. “If decisions are automated, unchallengeable, or opaque, we don’t just lose trust. We lose legitimacy,” he said.
A central focus of the keynote was procurement, which Adeleke described as a decisive moment for safeguarding accountability. Contractual terms, audit rights, data governance, and human oversight mechanisms determine whether regulators retain control over systems that exercise public authority. Importantly, he cautioned against assuming that human involvement alone resolves accountability concerns: “The presence of a human decision-maker does not remove the need for accountability.”
Throughout the session, Adeleke reinforced that AI may support regulatory work but must never replace regulatory judgment. For regulators, responsible AI use requires deliberate choices that align technology with legal authority, human rights obligations, and public trust.