Responsible AI as a regulatory choice, not a technical shortcut

A keynote session delivered by Dr. Fola Adeleke, Executive Director and co-founder of the Global Center on AI Governance, challenged regulators to reconsider how artificial intelligence is introduced into investigations, compliance, and enforcement. Rather than framing AI as a neutral or technical upgrade, the session positioned AI adoption as a series of governance decisions with direct implications for public trust and legitimacy.

Drawing on his work leading the African Observatory on Responsible AI, Adeleke argued that the most consequential choices about AI are made well before systems go live. Procurement, data selection, model design, and oversight structures shape outcomes long before regulators see operational benefits. As he put it, “Responsible AI in regulation is a governance choice, and most of these choices are made long before a system is deployed.”

The session emphasized that regulators operate under a distinct public interest mandate, where the consequences of error are not symmetrical. AI-supported decisions can affect investigations, licensing timelines, and enforcement actions, raising the stakes for accuracy and fairness. “The cost of a false positive is not just a bad recommendation. It can be a wrongful investigation, a delayed licence, an unjust sanction,” Adeleke noted, underscoring why regulatory uses of AI cannot be compared to consumer-facing technologies.

Adeleke also warned of the risks posed by opaque or unchallengeable systems. When AI influences regulatory outcomes without clear explanation or recourse, institutional legitimacy is at risk. “If decisions are automated, unchallengeable, or opaque, we don’t just lose trust. We lose legitimacy,” he said.

A central focus of the keynote was procurement, which Adeleke described as a decisive moment for safeguarding accountability. Contractual terms, audit rights, data governance, and human oversight mechanisms determine whether regulators retain control over systems that exercise public authority. Importantly, he cautioned against assuming that human involvement alone resolves accountability concerns. “The presence of a human decision-maker does not remove the need for accountability.”

Throughout the session, Adeleke reinforced that AI may support regulatory work but must never replace judgment. For regulators, responsible AI use requires deliberate choices that align technology with legal authority, human rights obligations, and public trust.