ChatGPT Went on Strike: What It Teaches Us About Business Continuity in the Age of AI

How the June 2025 ChatGPT outage underscores the critical importance of business continuity planning for AI-dependent financial services

Handle-AI Research Team · June 25, 2025 · 5 min read

On June 10, 2025, users of ChatGPT experienced a significant and prolonged service outage across OpenAI's consumer-facing tools. While enterprise-facing APIs remained functional in the U.S., the core user services were inaccessible for hours.

[Image: robot holding a strike sign with the ChatGPT logo, representing AI service outages and business continuity challenges]

This incident underscores a critical reminder: in an era where businesses — particularly in the financial sector — increasingly rely on artificial intelligence (AI), business continuity is not optional. It's foundational.

Why Business Continuity Matters — Especially in Finance

Business continuity refers to an organization's ability to maintain acceptable service levels during disruptive events — whether it's a technical failure, a cyberattack, or a natural disaster.

In financial markets, where operations happen in real time, even a brief disruption can cause:

- Missed or failed trades and direct financial losses
- Breaches of regulatory or contractual obligations
- Lasting reputational damage with clients and counterparties

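At the application level, continuity for an AI-dependent workflow often starts with a failover wrapper around the provider call. The sketch below is purely illustrative: `primary_completion`, `backup_completion`, and `canned_response` are hypothetical stand-ins for a firm's real providers and degraded-mode behaviour, not any actual API.

```python
# Minimal failover sketch for an AI-dependent service (illustrative only).
# primary_completion / backup_completion stand in for real provider calls.

class ProviderError(Exception):
    """Raised when an AI provider call fails or times out."""

def primary_completion(prompt: str) -> str:
    # Stand-in for the primary provider; here it simulates an outage.
    raise ProviderError("primary provider unavailable")

def backup_completion(prompt: str) -> str:
    # Stand-in for a secondary provider kept warm for continuity.
    return f"[backup] answer to: {prompt}"

def canned_response(prompt: str) -> str:
    # Last-resort degraded mode: no AI at all, just a safe default.
    return "Service temporarily degraded; please retry later."

def complete_with_failover(prompt: str) -> str:
    """Try providers in order; degrade gracefully instead of failing hard."""
    for provider in (primary_completion, backup_completion):
        try:
            return provider(prompt)
        except ProviderError:
            continue  # fall through to the next provider
    return canned_response(prompt)

print(complete_with_failover("status check"))  # served by the backup here
```

The point is not the three lines of control flow but the design stance: an outage of one provider should translate into degraded service, not no service.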
The Role of Regulation: A Look at MiFID II

Take MiFID II (Markets in Financial Instruments Directive II) as a case in point. This EU directive, which the UK retained in its own rulebook after Brexit, requires investment firms and brokers to ensure operational resilience.

Article 16 outlines key organizational obligations, including:

- Taking reasonable steps to ensure continuity and regularity in the performance of investment services
- Maintaining sound security mechanisms and robust controls over systems and data
- Retaining full responsibility for critical functions even when they are outsourced
- Keeping records sufficient for supervisors to monitor compliance

The Bigger Picture: Systemic Risk

Systemic risk refers to threats that impact not just one firm, but the entire financial system. AI failures can trigger such risks: if a central counterparty (CCP), the clearinghouse that stands between buyers and sellers, suffered an outage due to an AI malfunction, the cascading effects could disrupt entire markets.

In these cases, continuity requirements are significantly more stringent.

Are We Relying on AI Too Soon?

The rise of AI in financial services raises an urgent question: Are we prepared to trust AI with systemically important functions?

And if not — what regulatory steps are needed?

One possible answer lies in frameworks like DORA (the Digital Operational Resilience Act), a European regulation that has applied since January 2025. It sets stricter standards for ICT risk management across the financial sector, including oversight of the third-party technology services on which AI tools depend.

Final Thoughts

AI is here to stay. But real scale demands real resilience.

As the financial sector embraces large language models and autonomous agents, now is the time to:

- Map where AI services sit inside critical business processes
- Build and document fallback procedures for AI outages
- Test failover paths regularly, not just on paper
- Align continuity plans with obligations under MiFID II and DORA

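Continuity plans are only credible if they are exercised. The toy drill below is a hedged sketch, not production code: `answer` and the `PRIMARY_UP` flag are hypothetical stand-ins for real routing logic, and a real drill would target actual infrastructure rather than an in-process flag.

```python
# Illustrative continuity drill: simulate an outage and verify the
# degraded path still answers. All names here are hypothetical.

import time

PRIMARY_UP = True  # toggled by the drill to simulate an outage

def answer(prompt: str) -> str:
    """Route to the primary AI service, or a degraded fallback if it is down."""
    if PRIMARY_UP:
        return f"[primary] {prompt}"
    return f"[fallback] {prompt}"

def outage_drill(budget_seconds: float = 0.5) -> bool:
    """Flip the service 'down' and check the fallback answers within budget."""
    global PRIMARY_UP
    PRIMARY_UP = False
    try:
        start = time.monotonic()
        reply = answer("health check")
        elapsed = time.monotonic() - start
        return reply.startswith("[fallback]") and elapsed < budget_seconds
    finally:
        PRIMARY_UP = True  # restore normal routing after the drill

print("drill passed:", outage_drill())
```

Scheduling a drill like this regularly, and treating a failed drill as an incident, is what separates a continuity plan from a continuity document.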
What's your take? Are we ready to deploy AI at the core of critical financial infrastructure?

P.S. This post and image were not generated using ChatGPT 😉