The first Executive Fireside Dialogue session of the 2026 AI in Regulation conference focused on the ethical, regulatory, and governance questions now facing regulators. Moderated by Denitha Breau, Registrar and CEO of the Ontario College of Social Workers and Social Service Workers (OCSWSSW), the conversation with Candice Alder, President of the BC Association of Clinical Counsellors, examined how AI is already reshaping risk, consent, and accountability in mental health practice.
Alder framed the issue in practical, cautionary terms: using AI means entering a far more powerful, faster, and riskier environment than it may first appear. “It may feel like you’ve stepped onto the highway, but you’ve actually stepped onto the Autobahn,” she said, emphasizing how quickly routine use can escalate into high-risk territory. The implication for regulators was clear: familiar interfaces do not reduce underlying governance complexity.
Many systems used by registrants rely on external providers, even when they appear embedded in workplace software. Once information enters these systems, users lose an element of control over where data travels, how it’s processed, and how easily it can be retrieved or withdrawn. That loss of control matters for regulators tasked with overseeing confidentiality, accountability, and public protection.
Responsibility begins with consent, but not just any consent. Alder drew a clear distinction between simply informing people that AI is being used and obtaining informed consent, where individuals understand how, why, and where their information may be used. Anonymization, she noted, neither guarantees protection nor transfers ownership or ethical discretion to practitioners or institutions. The information still belongs to the person it describes.
This creates regulatory tension. When regulators advise registrants to limit AI use or seek consent, they must also provide workable alternatives and have processes in place when individuals opt out. Guidance without options leaves practitioners facing ethical expectations they cannot realistically meet. If AI-enabled systems are embedded in core workflows, opting out cannot mean opting out of care or compliance altogether.
The discussion highlighted a widening gap between rapid AI adoption and the slower pace of policy, training, and legislation in Canada. For regulators, the questions remain: How should informed consent be operationalized when AI is embedded across systems? What fallback processes must exist when consent is withheld? And how can regulatory bodies prepare for failures without assuming AI will always function as intended?
The session reinforced that AI governance is no longer speculative; it is an immediate regulatory responsibility, closely tied to trust, fairness, and public protection.