Rethinking AI oversight in regulatory practice

The AI in Regulation Conference, co-hosted by MDR Strategy Group and Objective, took place in Toronto on February 2–3, 2026. The two-day event brought together regulators, policymakers, and government leaders from Canada and abroad to examine how artificial intelligence is reshaping regulatory governance, risk management, and public protection.

Day one opened with the AI and Regulatory Governance Workshop, which focused on building a shared understanding of AI oversight, board responsibilities, and the governance foundations required to support responsible AI use in regulatory settings. The workshop was led by Melissa Peneycad, Director of Public Engagement and AI Strategy at MDR Strategy Group, and Bradley Chisholm of The Regulators’ Practice. Together, they discussed how regulators should approach artificial intelligence. Rather than treating AI as a standalone innovation or responding to external pressure to adopt new tools, participants were encouraged to begin with clarity of mandate and purpose. Melissa emphasized that AI is not intelligence but a tool, one that must be applied deliberately, proportionately, and in alignment with public protection responsibilities.

The session introduced a use-case-driven approach to AI governance, urging boards and senior leadership to clearly define problems before considering whether AI is an appropriate solution. This framing grounded discussions in risk, accountability, and outcomes, while reinforcing the importance of role clarity between governance and operations. Participants also examined how AI introduces new categories of public risk, including data quality, explainability, security, and broader systemic impacts, all of which require thoughtful oversight rather than reactive decision-making.

Building on this foundation, an afternoon panel discussion on Board Oversight of AI-Related Public Risks examined how these principles are being applied in practice. The panel featured Denitha Breau, Registrar and CEO of the Ontario College of Social Workers and Social Service Workers, Natalie Thiessen, Policy Analyst with The Regulators’ Practice, and Megan Wood, CEO and Registrar of the College and Association of Nurses of the Northwest Territories and Nunavut.

Drawing on their governance and leadership roles, panelists shared how boards are moving beyond abstract discussions about AI by strengthening literacy, embedding AI considerations into existing governance and risk frameworks, and creating space for informed, mandate-driven oversight. Rather than treating AI as a siloed issue, organizations are increasingly integrating it into broader conversations about public risk, professional practice, and strategic priorities.

Across both sessions, a consistent theme emerged: effective AI oversight is less about technical expertise and more about governance discipline, informed questioning, and ensuring that AI use remains firmly anchored in the public interest.
