Superintendent Routledge participates in a fireside chat at Reuters Future of Insurance event

Canada NewsWire

TORONTO, Nov. 25, 2025 /CNW/ - On Monday, November 24, 2025, Superintendent Peter Routledge participated in a fireside chat at the Reuters Future of Insurance event in Toronto, Ontario.

Check against delivery

Moderator:

In 2021 when you became Superintendent, OSFI's top risks were climate, digital innovation, and third-party risk. Four years later, what are the risks that are keeping you up at night?

Superintendent Peter Routledge:

  • OSFI's 2025 Annual Risk Outlook identifies integrity and security threats as a key supervisory priority. Geopolitical instability, fast-moving technologies, and third-party reliance make financial institutions vulnerable to cyber-attacks, state-linked interference, and integrity-related risks. These threats are not hypothetical, and they aim to disrupt trust and financial stability. Indeed, these integrity and security risks can expose banks and insurers to financial losses and reputational damage.
  • We've all seen ransomware, data breaches, and state-linked activity intensify. AI is accelerating the pace and scale of these threats.
  • OSFI's mandate is to provide guidance so institutions remain resilient. That mandate is more relevant than ever in the face of heightened geopolitical risk and evolving integrity and security risks, whether related to tariffs, trade protectionism, or broader uncertainty in global markets. OSFI, like other government institutions, is subject to the same risks, and we are strengthening our measures and focusing resources to address them.

Moderator:

AI presents both risks and opportunities for the financial system. How is OSFI approaching artificial intelligence—both in its own supervisory work and in how it expects financial institutions to manage AI responsibly?

Superintendent Peter Routledge:

  • AI is a powerful tool if adopted responsibly. AI can enhance productivity and efficiency, both for financial institutions and for OSFI itself.
  • However, AI also heightens multiple types of risk, both internally and externally. Internally, AI can raise model risk, operational risk, and legal or reputational risk. Externally, malicious actors are increasingly using AI tools to carry out cyber threats, financial fraud, and geopolitical interference.
  • AI is part of OSFI's expanding integrity and security risk lens. OSFI is focused on how AI affects institutions' financial resilience and operational, cyber, and integrity and security risk profiles.
  • It is my view that the balance of AI risks skews to the external environment, and institutions should be bold in their adoption of AI, particularly in areas like fraud detection and cybersecurity. Nevertheless, institutional governance and controls must keep pace with deployment. Many of our institutions are at the vanguard of responsible AI adoption, as seen in the Evident AI indices for banking and insurance.
  • Internally, OSFI is leveraging and expanding use of AI to enhance supervision, analysis, and internal operations. Our approach is built on accountability, explainability, and data security, ensuring that as we innovate with these technologies, we do so responsibly and effectively.

Moderator:

Could you give an overview of OSFI's principles for responsible AI in finance? How can banks and insurance companies adopt AI while managing risks?

Superintendent Peter Routledge:

  • My guiding principle is to 'first, do no harm.' We will allow institutions the space to innovate, while remaining vigilant in identifying and addressing material risks as they emerge.
  • AI has the potential to be a productivity multiplier, both for the institutions we regulate and for OSFI itself. Internally, we're using data and AI tools in targeted use cases. I often compare AI to the internet—it will change how financial services operate, and the pace of adoption will be rapid.
  • For financial institutions, AI carries a number of operational risks. That's why governance, oversight, and accountability are so important.
  • The Financial Industry Forum on Artificial Intelligence (FIFAI) discussions in 2022 led to the development of the EDGE principles (Explainability, Data, Governance, and Ethics):
    • Explainability: Financial institutions must be able to explain how an AI model arrives at its decisions. This is crucial for building trust with customers and stakeholders.
    • Data: AI models require high-quality, reliable, and well-governed data to produce trustworthy and accurate results. The integrity of the underlying data is fundamental to a responsible AI system.
    • Governance: Institutions need a robust governance framework to manage the risks and responsibilities associated with AI. This includes clear roles, accountability, and oversight throughout the AI lifecycle.
    • Ethics: Financial institutions must consider the broader societal impacts of their AI systems, addressing issues such as transparency, privacy, bias, and fairness.
  • We're continuing to engage with the Global Risk Institute and industry through the second FIFAI. That dialogue is helping shape principles for responsible AI use, including transparency, fairness, security, and accountability.

Moderator:

Climate-related risks have been a major focus for OSFI in recent years through initiatives like Guideline B-15, the Climate Risk Returns, and the Standardized Climate Scenario Exercise. What is OSFI ultimately trying to achieve through these efforts, and how has its perspective on climate risk evolved?

Superintendent Peter Routledge:

  • Climate-related risks can have financial impacts that could affect the stability and resilience of Canada's financial system. These risks are complex, long-term, and uncertain. They require greater measurement precision so that boards and senior managers can reflect these risks in institutions' business strategies and risk management frameworks.
  • Guideline B-15, the Standardized Climate Scenario Exercise (SCSE), and the Climate Risk Returns all lay the groundwork for insurers to build their risk quantification capabilities.
    • The Climate Risk Returns, for example, are a key part of this groundwork because they help collect standardized data from financial institutions that will improve institutions' abilities to quantify and measure climate-related risks. We published a Climate Risk Returns report and updated returns last week.
  • But we know there is more work to do on the capacity-building front. Consider, for example, that fewer than half of P&C insurers reported having experience with physical climate scenario analysis prior to the SCSE. This number is even lower for deposit-taking institutions and life insurers.
    • Our work also revealed that, outside of the P&C sector, very few financial institutions precisely geocode their exposures. This represents a significant gap in an era of increasingly frequent and severe climate-related physical risks. Geospatial analysis is foundational for assessing the exposure and vulnerability of assets and is essential to ensuring that these risks are accurately measured and appropriately priced.
  • Our thinking has evolved over the past few years. For example, we now view catastrophic risk as encompassing both climate-related and earthquake risks, and we see clear synergies: lessons learned in improving the quantification of climate-related physical risks can be applied to what we are learning about earthquake risk.

Moderator:

Looking ahead, what is OSFI's vision for the insurance sector over the next 3 to 5 years, and how will it work with international regulators to ensure alignment on emerging risks?

Superintendent Peter Routledge:

  • Over the next 3 to 5 years, OSFI's vision for the insurance sector, which is consistent with its vision for the banking sector, is anchored in resilience and adaptability. First, we have modernized our supervisory framework so that it keeps pace with fast-moving risks, whether integrity and security threats, climate volatility, or the adoption of AI.
  • Second, we see responsible innovation as critical. AI and digital tools will transform insurance, but governance and transparency must evolve alongside technology. That's why we're building supervisory capacity and working with industry to ensure innovation strengthens—not undermines—trust in the system.
  • Finally, these challenges are global. Climate risk and AI don't respect borders, so OSFI continues its long-standing and meaningful collaboration with international regulators and standard-setting bodies. Through initiatives like the Financial Industry Forum on AI and climate scenario exercises, we're aligning principles and sharing best practices to ensure Canada's insurance sector remains competitive and resilient in a global context.

SOURCE Office of the Superintendent of Financial Institutions