The Breach Was Just the Symptom - Why AI data governance should keep board members awake at night

Data governance has always been one of financial services’ most stubborn problems. The arrival of AI is about to make it an existential one. And the regulatory frameworks designed to manage that risk are not yet built to see it.

That is the argument this article makes. Four recent incidents illustrate the starting point. What comes next is considerably more troubling.

Trust, Not Technology, Is the Weakest Link

As we enter 2026, four incidents have already illustrated with painful clarity how financial institutions lose control of sensitive data — and what connects them is more instructive than the incidents themselves.

In Abu Dhabi, over 700 passport scans from one of the world’s most prominent finance gatherings, including those of former UK Prime Minister Lord David Cameron and hedge fund billionaire Alan Howard, were found sitting on an unsecured cloud server, a discovery first reported by the Financial Times. No sophisticated nation-state actor. No zero-day exploit. An organisation had trusted a third-party vendor to secure its data. It had not verified that trust.

In France, authorities confirmed that approximately 1.2 million bank accounts were exposed after attackers accessed the national FICOBA registry using stolen credentials belonging to a government official. The system trusted a set of credentials. It could not distinguish between their legitimate owner and an attacker.

Betterment disclosed that a social engineering attack on a third-party communications platform had exposed the personal data of over a million customers, including retirement plan details, financial interests and internal meeting notes, before a ransomware group threatened to publish it all when Betterment refused to pay. An employee trusted a caller.

And PayPal confirmed that a coding error in its lending platform had exposed highly sensitive customer data, including Social Security numbers, for nearly six months before being detected. The organisation trusted its own internal processes to catch the problem. They did not.

Not one of these four incidents involved a breached firewall. No encryption was broken. In each case the attacker, where there even was one, walked through a door that had been left open, or was handed a key by someone who should not have given it away. The weakest link was not the technology. It was trust: misplaced, unverified, or simply untested.

This matters because it defines the nature of the problem. Our security frameworks are largely built around keeping attackers out. But the most consequential failures in financial services data happen when trust is misplaced, not when perimeters are breached. And that distinction becomes critically important when we consider what AI now brings to this environment.

AI Does Not Just Consume Data. It Transforms It.

Financial institutions have always struggled to govern data scattered across hundreds of systems, vendors, cloud environments and geographies. Most cannot produce a complete map of where customer data sits at any given moment. The four incidents above illustrate this precisely — in each case, data was somewhere the organisation either did not fully know about, could not control, or had trusted someone else to manage. That is a pre-AI problem, rooted in complexity and the pace of digital transformation outrunning governance maturity.

AI does not solve this problem. It compounds it in ways that existing governance frameworks were not designed to handle.

Traditional data governance asks: where is the data stored, who can access it, and is it being protected? These are the right questions for a database or a cloud environment. They are the wrong questions for an AI system. AI models do not just store data — they learn from it, transform it, and produce outputs that are themselves new data derived from the original. This creates governance obligations that have no equivalent in conventional frameworks.

Consider data provenance. An AI model trained on financial data may have ingested information that was incorrectly classified, poorly governed, or sourced from a compromised third party. The model carries that contamination invisibly, and every output it produces is shaped by it. Consider data multiplication. AI systems generate synthetic data, predictions, and behavioural profiles derived from customer data that may not be treated as customer data under existing frameworks. Who owns a credit risk score generated by a model trained on information your customer provided? Consider data invisibility. Conventional governance tracks where data is stored. AI governance must track where data has been learned — a fundamentally different question that current frameworks are simply not built to answer.
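The "where has this data been learned" question can be made concrete. The sketch below, a minimal and purely illustrative example in Python, shows one way an institution might record training-data lineage alongside its model inventory, so that a contaminated or re-classified source can be traced to every deployed model that learned from it. The class and field names (DataSource, ModelRecord, LineageRegistry, the vendor feed in the example) are hypothetical assumptions for illustration, not drawn from any real framework, regulation, or vendor tool.

```python
# Illustrative sketch only: a minimal lineage record linking training-data
# sources to the models that learned from them. All names are hypothetical.
from dataclasses import dataclass, field
from datetime import date


@dataclass(frozen=True)
class DataSource:
    """A governed description of one training-data source."""
    source_id: str
    owner: str                  # accountable business owner, not just a system name
    classification: str         # e.g. "customer-pii", "synthetic", "third-party"
    jurisdiction: str           # where this data may lawfully reside and be processed
    third_party: bool = False   # sourced from a vendor the institution cannot fully inspect


@dataclass
class ModelRecord:
    """A deployed model version and the data sources it has learned from."""
    model_id: str
    version: str
    deployed_on: date
    trained_on: list = field(default_factory=list)  # list of DataSource


class LineageRegistry:
    """Answers the question a conventional data inventory cannot:
    which live models have learned from a given data source?"""

    def __init__(self):
        self._models = []

    def register(self, model: ModelRecord) -> None:
        self._models.append(model)

    def models_exposed_to(self, source_id: str) -> list:
        """Trace a compromised or re-classified source to every affected model."""
        return [m for m in self._models
                if any(s.source_id == source_id for s in m.trained_on)]


if __name__ == "__main__":
    registry = LineageRegistry()
    vendor_feed = DataSource(
        source_id="vendor-credit-feed-7",   # hypothetical third-party data feed
        owner="Retail Credit",
        classification="third-party",
        jurisdiction="EU",
        third_party=True,
    )
    registry.register(ModelRecord("credit-risk-scorer", "2.3",
                                  date(2025, 11, 1), [vendor_feed]))

    # If vendor-credit-feed-7 later turns out to be contaminated or misclassified,
    # the registry identifies every deployed model carrying that contamination.
    affected = registry.models_exposed_to("vendor-credit-feed-7")
    print([f"{m.model_id} v{m.version}" for m in affected])
```

The detail matters less than the shape of the question: lineage is recorded per model version, so the answer survives retraining, redeployment, and the later discovery that a source was not what it claimed to be.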

And then there is third-party exposure. Most financial institutions use AI models they did not build and cannot fully inspect, provided by vendors whose own training data may have come from hundreds of sources. The governance obligations that apply to a cloud storage provider do not translate to an AI model provider. The attack surface is different. The risks are different. The questions to ask are different. And most institutions are not yet asking them.

The numbers give some sense of the scale of the underlying problem. According to Verizon’s 2025 Data Breach Investigations Report, third-party involvement in breaches doubled year-over-year to 30%. SecurityScorecard puts it higher, linking 35.5% of all breaches to third-party access. Those figures reflect the conventional third-party risk environment. AI adds a layer of third-party dependency that is more opaque, more embedded, and harder to audit than anything that came before it.

The consequences of getting this wrong are not merely operational. A recent survey by the American Bankers Association found that 51% of US bank customers choose their institution primarily because they trust its security, and 67% say they would consider switching banks after a serious data breach. That is not a reputational risk sitting somewhere in the future. It is the business model itself that is at stake.

The Regulatory Gap Nobody Has Named

These incidents and risks are unfolding within a regulatory environment that is, at best, still catching up — and at worst, looking in the wrong direction.

In Europe, the effort is genuine. DORA creates real obligations around ICT resilience and third-party oversight. The EU AI Act adds a risk-based framework for high-risk AI applications — including credit scoring and insurance pricing — with meaningful penalties. Together they are the most serious attempt by any jurisdiction to govern digital risk in finance.

But DORA treats AI infrastructure like any other ICT system. It asks whether the system is resilient, whether third-party providers are managed, whether incidents are reported. It does not ask about data provenance, model training integrity, or the governance of AI-generated data. The EU AI Act mandates data quality requirements at the point of model development. It does not address the ongoing governance of data flowing through a deployed model in a live financial environment — nor what happens when a model is attacked through its data inputs rather than its infrastructure. Model poisoning, adversarial inputs designed to fool fraud detection, deepfakes that impersonate an executive to authorise a fraudulent transfer — these fall into a gap that neither framework was designed to own.

The United States is in a more uncertain position. There is no comprehensive federal AI governance framework for financial services. The SEC applies existing disclosure rules to AI. The Treasury has released a non-binding risk management framework. The Trump administration is actively working to preempt state-level AI laws. The result is a set of well-intentioned interventions that have not yet cohered into a system.

The United Kingdom sits somewhere in between. The FCA has chosen not to introduce AI-specific rules, relying instead on existing accountability regimes. The logic is defensible, and the FCA’s collaborative approach — working directly with firms through its AI Lab — reflects genuine effort. But the Critical Third Parties Regime, designed precisely to bring AI and cloud providers under regulatory oversight, has not yet designated a single organisation since it was established. The UK’s own Treasury Select Committee has concluded that the financial system is not prepared for a major AI-related incident.

What all three jurisdictions share is a failure to name — and therefore govern — the most significant gap: no jurisdiction has yet defined what AI data governance means as a distinct and enforceable obligation in financial services. We have rules about where data is stored. We have emerging rules about how AI systems should be built. We do not yet have rules about the continuous governance of data as it flows into, through, and out of AI systems operating in live financial environments.

That is not a technical gap. It is a conceptual one. Regulators have not yet developed the mental model needed to write the rules.

What Good Looks Like

This is not a criticism of the people working on these problems — in regulatory bodies, in institutions, or in boards. The threat is moving faster than any single organisation can track, and the intent to address it is genuine. But intent is not the same as readiness.

Do businesses truly have a clear picture of where their customer data sits, not just within their own infrastructure, but across every vendor, every cloud environment and every cross-border transfer? Is third-party due diligence keeping pace with the sensitivity of the data being entrusted to those suppliers? And, critically, do organisations have access to people who genuinely understand the AI risk landscape: not just cybersecurity generalists, but specialists who can identify where AI systems themselves have become an attack surface?

What good looks like is not a mystery. It means board-level AI risk specialists sitting alongside cyber and legal counsel. It means mandatory AI-specific stress testing by regulators. It means a defined regulatory category for adversarial AI attacks that gives institutions clear obligations and clear accountability. And it means a governance framework that asks not just where data is stored, but where it has been learned — and what that means for every model running in a live financial environment today.

None of these exist at scale anywhere in the world. Until they do, institutions cannot be expected to govern what regulators have not yet defined. But that is precisely why boards need to be asking these questions now, ahead of the frameworks — because by the time the rules arrive, the incidents will already have happened.

The breach is still just the symptom. The governance problem is growing faster than any of us — institutions, regulators, or advisors — are currently equipped to fix. And AI is about to make it orders of magnitude harder.
