The AI in Regulation Conference, co-hosted by MDR Strategy Group and Objective, is a two-day event taking place in Toronto on February 2–3, 2026. It brings together regulators, policymakers, and government leaders from Canada and internationally to examine how artificial intelligence is reshaping regulatory governance, risk management, and public protection.
Day one opened with the AI and Regulatory Governance Workshop, which focused on building a shared understanding of AI oversight, board responsibilities, and the governance foundations required to support responsible AI use in regulatory settings. The training was led by Melissa Peneycad, Director of Public Engagement and AI Strategy at MDR Strategy Group, and Bradley Chisholm of The Regulators’ Practice. Together, they examined how regulators should approach artificial intelligence. Rather than treating AI as a standalone innovation or responding to external pressure to adopt new tools, participants were encouraged to begin with clarity of mandate and purpose. Melissa emphasized that AI is not intelligence but a tool, one that must be applied deliberately, proportionately, and in alignment with public protection responsibilities.
The session introduced a use-case-driven approach to AI governance, urging boards and senior leadership to clearly define problems before considering whether AI is an appropriate solution. This framing grounded discussions in risk, accountability, and outcomes, while reinforcing the importance of role clarity between governance and operations.
Building on this foundation, an afternoon panel discussion on Board Oversight of AI-Related Public Risks examined how these principles are being applied in practice. The panel featured Denitha Breau, Registrar and CEO of the Ontario College of Social Workers and Social Service Workers, Natalie Thiessen, Policy Analyst with The Regulators’ Practice, and Megan Wood, CEO and Registrar of the College and Association of Nurses of the Northwest Territories and Nunavut.
Drawing on their governance and leadership roles, panelists shared how boards are moving beyond abstract discussions about AI by strengthening literacy, embedding AI considerations into existing governance and risk frameworks, and creating space for informed, mandate-driven oversight. Rather than treating AI as a siloed issue, organizations are increasingly integrating it into broader conversations about public risk, professional practice, and strategic priorities.
Across both sessions, a consistent theme emerged: effective AI oversight is less about technical expertise and more about governance discipline, informed questioning, and ensuring that AI use remains firmly anchored in the public interest.