Day Two of the AI in Regulation Conference builds on the previous day's sessions, marking a shift from conceptual debate about artificial intelligence to the practical realities regulators now face as AI tools enter everyday regulatory work. Opening remarks from Arun Dixit, Vice President of Digital Transformation and Corporate Operations at Professional Engineers Ontario, frame the day around responsibility rather than innovation.
Dixit emphasizes that AI is no longer a future consideration for regulators. It is already shaping investigations, licensing decisions, triage, compliance monitoring, and other public-facing functions. As a result, the central question before regulators is no longer whether AI will be used, but how it will be governed, monitored, and held to account within regulatory systems. The implications, he notes, are significant: decisions once made through visible processes are increasingly influenced by software, data models, and workflows that may sit far from boardrooms and council tables.
Reinforcing this point, Daniel Roukema, CEO of MDR Strategy Group, notes that regulators are already past the stage of abstract curiosity about artificial intelligence. The task now is to engage with what AI means in practice, within real regulatory systems where decisions carry legal authority and public consequence.
Roukema frames the conference as a response to a reality already taking shape across regulatory environments. Decisions are increasingly influenced not only by hearings and inspections, but by software developed years earlier, algorithms regulators may not have designed, and workflows that quietly affect who is reviewed, who is delayed, and who is left out. For regulators, he notes, this is not a theoretical policy debate but an active question of responsibility, legitimacy, and equity, closely tied to their duty to protect the public.
Rather than positioning AI as an inevitable solution, Roukema emphasizes the importance of governance discipline: if AI is used, it must be governed, not simply adopted. This distinction becomes especially important as regulators consider placing third-party tools, vendors, and proprietary data models inside the exercise of public authority. The challenge, as he outlines it, is to ask better questions before systems are implemented, not only after risks emerge.