As artificial intelligence moves from experimentation to operational use, a Global Leadership Panel at the AI in Regulation Conference examined how regulators across jurisdictions are grappling with accountability, public trust, and real-world risk.
Bringing together regulators and policy leaders from North America, Australia, Africa, and Europe, the panel focused less on technical promise and more on the consequences of AI deployment in public-facing systems. Across sectors, speakers emphasized that the most serious risks often emerge not from technology failure, but from governance gaps and human assumptions.
Lisa Maina, Global AI Coordinator at the United Nations Children’s Fund (UNICEF), shared lessons from deploying an AI-supported fraud detection model in humanitarian cash transfer programs. While the system helped flag legitimate cases of fraud, it also produced false positives, with families incorrectly identified as high risk because of incomplete documentation or phone numbers shared across households in conflict settings.
“This is a lifeline,” Maina said, noting that denying funds could mean denying access to food or medicine. As a result, UNICEF required human review before any decision was acted upon. The experience underscored the need for training, clear authority, and governance structures that reinforce human responsibility.
Accountability was also a central theme for Claire Bayford, Founder of Fathomable, Australia, who shared a case study from the retail sector. A loss-prevention system designed for transparency backfired when customers reacted negatively to being visibly monitored. “Accountability isn’t just a legal setting,” Bayford noted. “It’s actually a user experience as well.” Even with technically sound systems, she cautioned, undertrained staff placed “in the human loop” can become the point where trust breaks down.
From a professional regulation perspective, Anna Van der Gaag, Visiting Professor of Ethics and Regulation at the University of Surrey, UK, emphasized continuity of standards. Accountability online, she argued, should mirror accountability offline. “It means exactly the same as being an accountable practitioner face to face,” she said, reinforcing that digital tools do not change professional obligations.
Closing the discussion, Frank Meyers, Director of Regulatory Innovation and Member Services at the Federation of State Medical Boards, USA, highlighted the growing complexity of AI in healthcare. While such tools may be right most of the time, he warned, the remaining margin of error carries real consequences, especially when liability and jurisdictional rules differ.