Artificial intelligence is beginning to reshape how regulatory investigations are conducted, particularly as cases grow more complex and evidence increasingly exists in digital form. These issues were explored during a fireside dialogue at the AI in Regulation Conference titled "From discretion to decision-support: How AI is reshaping investigations, evidence, and procedural fairness."
Hosted by Fazal Khan, Registrar and CEO of the College of Opticians of Ontario, the session featured Dean Benard, President of Benard + Associates, who reflected on how investigative work has evolved over the past several years.
Benard noted that investigations are no longer dominated by straightforward cases. Instead, regulators are increasingly dealing with large volumes of electronic documentation, digital communications, and social media content that intersect with professional practice. This growing complexity, he suggested, is driving interest in AI as a tool to support investigative functions.
Rather than replacing investigators, AI is being used to manage labour-intensive tasks such as searching thousands of pages of records, summarizing lengthy interviews, and organizing digital evidence. Benard explained that AI-assisted processes have reduced investigation timelines by approximately 25 to 30 per cent, while leaving decision-making firmly in human hands.
AI is also being incorporated into digital forensic software, enabling faster and more accurate handling of electronic evidence. However, both speakers emphasized that the use of these tools requires clear boundaries, training, and ongoing oversight to ensure procedural fairness is maintained.
The conversation also touched on parallels between investigative use of AI and its adoption by regulated professionals, particularly in healthcare settings. While AI-assisted documentation and assessment tools may offer efficiency gains, Benard stressed the importance of confidentiality, accuracy, and client benefit.
Transparency and defensibility emerged as key themes. Benard highlighted the need for regulators to be prepared to explain how AI is used in investigations, particularly in disciplinary or legal proceedings. His organization has developed internal policies outlining where AI use is permitted, with an emphasis on human oversight and accountability.
The session concluded with a shared view that AI should be treated as a decision-support tool rather than a substitute for professional judgment. For regulators, the challenge lies in developing standards and practices that allow AI to enhance investigations while preserving trust, fairness, and public protection.