As artificial intelligence becomes a practical reality in regulatory work, a fireside chat at the AI in Regulation Conference examined AI not just as a set of tools, but through the lens of institutional readiness. The session, “From compliance to capacity,” explored how regulators across sectors are beginning to build the internal foundations needed for safe, effective AI adoption.
Moderated by Jennifer Quaglietta, CEO and Registrar of Professional Engineers Ontario, the conversation focused on why traditional compliance frameworks are no longer sufficient on their own for governing complex, fast-moving technologies.
Quaglietta framed the discussion around regulatory lag and adaptation, noting that many oversight systems are designed for predictable environments, while AI operates within a complex ecosystem shaped by constant interaction between technology, people, and institutions. The question for regulators, she suggested, is not whether AI is coming, but how to engage with it without losing public trust.
Drawing on experience at the Municipal Property Assessment Corporation (MPAC), Soussanna Karas described how her organization moved past what she called “analysis paralysis” by embedding AI into strategic planning and starting with low-risk, internal use cases. Rather than launching external-facing systems, MPAC began with internal support tools and staff-led pilot ideas designed to build familiarity, confidence, and governance discipline.
Karas emphasized that capacity-building is as much cultural as it is technical. AI adoption, she argued, requires leadership alignment, workforce education, and clear accountability, not just software. Drawing on MPAC’s internal “pitch” process, she described how bottom-up experimentation, paired with structured governance review, helped surface realistic use cases while managing risk.
A recurring theme was data readiness. Karas cautioned that AI initiatives often fail when data quality, documentation, and ownership are unclear, reinforcing the idea that governance decisions made early frequently determine outcomes later. She also highlighted the importance of transparency and human accountability, particularly when AI supports regulatory judgment rather than replacing it.
Rather than offering a blueprint, the session surfaced questions regulators are likely to carry forward: How ready are our boards and staff? What decisions should never be automated? And how do we ensure AI strengthens, rather than erodes, professional competence and public trust?
As the conference continues, the conversation signals a shift in regulatory thinking—from enforcing compliance after the fact to building the internal capacity required to govern AI responsibly in practice.