AI, accountability and public trust: lessons from global regulatory leaders

As artificial intelligence moves from experimentation to operational use, a Global Leadership Panel at the AI in Regulation Conference examined how regulators across jurisdictions are grappling with accountability, public trust, and real-world risk.

Bringing together regulators and policy leaders from North America, Australia, Africa, and Europe, the panel focused less on technical promise and more on the consequences of AI deployment in public-facing systems. Across sectors, speakers emphasized that the most serious risks often emerge not from technology failure, but from governance gaps and human assumptions.

Lisa Maina, Global AI Coordinator at the United Nations Children’s Fund (UNICEF), shared lessons from deploying an AI-supported fraud detection model in humanitarian cash transfer programs. While the system helped flag legitimate cases of fraud, it also produced false positives, with families incorrectly identified as high risk due to incomplete documentation or shared phone numbers in conflict settings.

“This is a lifeline,” Maina said, noting that denying funds could mean denying access to food or medicine. As a result, UNICEF required human review before any decision was acted upon. The experience underscored the need for training, clear authority, and governance structures that reinforce human responsibility.

Accountability was also a central theme for Claire Bayford, Founder of Fathomable, Australia, who shared a case study from the retail sector. A loss-prevention system designed for transparency backfired when customers reacted negatively to being visibly monitored. “Accountability isn’t just a legal setting,” Bayford noted. “It’s actually a user experience as well.” Even with technically sound systems, she cautioned, undertrained staff placed “in the human loop” can become the point where trust breaks down.

From a professional regulation perspective, Anna Van der Gaag, Visiting Professor of Ethics and Regulation at the University of Surrey, UK, emphasized continuity of standards. Accountability online, she argued, should mirror accountability offline. “It means exactly the same as being an accountable practitioner face to face,” she said, reinforcing that digital tools do not change professional obligations.

Closing the discussion, Frank Meyers, Director of Regulatory Innovation and Member Services at the Federation of State Medical Boards, USA, highlighted the growing complexity of AI in healthcare. While tools may be right most of the time, he warned that the remaining margin of error carries real consequences, especially when liability and jurisdictional rules differ.
