I was doing a review for a client in the entertainment sector recently. Not a cyber incident. A broader governance piece.
During the review we found tools already embedded in their outreach operation. Approved software. Budget already signed off. Users already dependent on it.
One of them was segmenting audiences. Deciding, in effect, who got reached and who didn’t. Which fans, which partners, which commercial targets. The model doing that work had never been classified. Nobody had assessed the risk. Nobody had approved the use case.
The board didn’t know it existed.
That’s not negligence. That’s just how shadow AI arrives. Not through a rogue project or a maverick team. Through a SaaS renewal. A feature update. A capability that was already there when someone clicked enable.
The question it raised wasn’t technical. It was a governance question. Who decided this was acceptable? What harms could it cause? Who owns it if something goes wrong?
Nobody had answers. Not because they were careless. Because nobody had asked.
The Wrong Question
Most boards are asking how to comply with the EU AI Act.
That’s the wrong question.
The right question is what AI risks they’re prepared to own. Compliance produces paperwork. Governance changes decisions. Those aren’t the same thing, and conflating them is how organisations end up with an AI policy nobody uses and a governance gap nobody’s mapped.
The regulatory timeline is real and worth knowing. Prohibited practices under the Act have applied since February 2025. General-purpose AI obligations started August 2025. The main deadline for most organisations is August 2026. By then you need classified systems, assessed risk categories, and implemented measures covering risk management, human oversight, data governance, and transparency.
But here’s what the deadline obscures.
Board oversight of AI risk has tripled in a year. Nearly half of Fortune 100 companies now cite AI as part of board oversight responsibilities, up from 16% in 2024. Yet only 12% of C-suite leaders surveyed feel very prepared to assess, manage, and recover from AI governance risks.
Committees exist. Control does not.
Boards are naming the risk without building the capability to govern it. That’s not a compliance problem. That’s a management problem. And August 2026 doesn’t fix it — it just makes it visible.
These principles hold beyond the Act itself. Organisations operating outside EU jurisdiction face equivalent pressure from their own regulators, their insurers, and increasingly their customers. The governance question is the same wherever you sit.
Shadow AI: The Hidden Failure Mode
Most boards picture AI as something that generates text or images. ChatGPT. Copilot. The tools people use visibly and deliberately. That’s one kind of AI.
The tools that concern me more are the ones making predictions and decisions quietly in the background. The segmentation model. The candidate ranking system. The churn score. These don’t look like AI to most boards because they don’t produce a conversation or a document. They produce a number, a ranking, a decision. And they’ve been running in enterprise software for years.
The version that concerns me most isn’t someone using ChatGPT to draft an email. That carries risk. But it’s visible enough to manage.
The real concern is AI embedded inside business processes without anyone calling it AI. It arrives through software already approved, already budgeted, already trusted. A SaaS renewal. A feature update. A capability that was there when someone clicked enable.
The entertainment client I mentioned at the start is a good example. Nobody bought an AI system. They bought a platform. The audience segmentation was already in it. And for months, decisions about who got reached, with what message, and how often, were being shaped by a model that had never been classified, never been assessed, and never been approved as an AI use case.
When I looked at their wider SaaS estate, the pattern repeated. Tools doing things that would have triggered a governance conversation if anyone had framed them as AI decisions. Scoring. Ranking. Filtering. Prioritising.
Nobody had framed them that way. So nobody had asked the questions.
Not every AI feature embedded in a SaaS tool is high-risk. But you can’t know that until you’ve classified it. That’s the point.
This is how shadow AI scales. Not through rogue projects. Through procurement. The HR platform that ranks candidates. The customer service tool that scores complaint severity. The CRM that predicts churn and adjusts outreach accordingly. Each one approved at purchase. None of them classified as AI risk.
My challenge to boards is simple. Don’t ask only where you’re building AI. Ask where decisions are being automated, scored, ranked, prioritised, or summarised. That’s where the real AI estate is. And in most organisations it’s already larger than anyone in the boardroom thinks.
Classification Before Deployment
The reason shadow AI proliferates is straightforward. Most organisations classify AI risk after deployment, not before.
By the time classification happens, the business has built dependency. Users rely on the tool. Budget is committed. The project has executive sponsorship. Then risk, legal, or security discover it touches sensitive data, influences material decisions, or creates transparency obligations nobody designed for.
Your options at that point are poor. Retrofit controls at greater cost. Delay an initiative people are already counting on. Accept a risk you haven’t properly understood. Or unwind a deployment people already rely on.
I’ve seen all four. None of them are comfortable. And all of them were avoidable.
The better model is straightforward in principle, harder in practice. No AI use case moves from idea to pilot, or pilot to production, without classification, ownership, and approval thresholds being established first. That’s not bureaucracy. It’s the same due diligence discipline organisations already apply to acquisitions, financial instruments, and safety-critical products.
You wouldn’t deploy a new financial instrument without classifying the risk. AI isn’t different. It just arrived faster than the governance frameworks did, and most organisations are still catching up.
In practice, the use cases that need board-level attention tend to share recognisable characteristics. They influence rights or access. They prioritise or rank people. They affect vulnerable individuals. They automate decisions that previously required human judgement. When more than one of those applies to the same tool, that’s the conversation the board needs to have before deployment, not after.
Classification before deployment does one thing above everything else. It means you govern deliberately rather than under pressure. When you classify early, the conversation about risk appetite, ownership, and acceptable harm happens before anyone has an emotional stake in the outcome. When you classify late, you’re having that conversation in a room full of people who’ve already committed.
In my experience, the late classification conversation is one of the harder ones to run. Everyone knows the right answer. Nobody wants to say it out loud.
What an AI Risk Appetite Statement Should Answer
Most organisations that have done something on AI governance have an AI policy. Fewer have an AI risk appetite statement. They’re not the same thing.
A policy tells people what to do. An appetite statement tells the organisation which risks it’s prepared to own, at what level, under what conditions. It’s the set of prior decisions that stops people making things up under pressure when a use case lands on the table.
In practice it needs to answer a specific set of questions. Not in abstract terms. In the language of the business.
Where is AI allowed to influence decisions, and under what conditions? Are you comfortable with AI in recruitment, pricing, credit, customer vulnerability assessment, clinical support? If yes, what controls are required? If no, why not, and who enforces that?
What level of autonomy is acceptable? There’s a material difference between AI that recommends, AI that decides, and AI that acts without human review. Those are different risk positions and they need different approval thresholds.
Which harms are outside appetite entirely? Discrimination, unsafe advice, loss of privacy, customer detriment, regulatory breach. Boards need to name the non-negotiables before a use case arrives, not while one is being argued over in a committee.
What evidence is required before deployment? Testing, bias assessment, explainability, human oversight, supplier assurance, incident response, monitoring. Not as theory. As go/no-go criteria with a named owner.
Who owns the system when it fails? Not the vendor. Not the algorithm. Not IT. A named business owner with defined accountability.
When I work through this with boards, the conversation that unlocks it is usually around harm. I move them away from abstract categories and into real consequences.
Instead of asking whether discrimination is acceptable, I ask which groups of people could be unfairly affected by this system, and how would you know before they do. Instead of asking about unsafe advice, I ask what’s the worst advice this system could give, who might rely on it, and what happens next. Instead of customer detriment, I ask whether a customer could lose money, access, opportunity, or dignity because of this decision.
That’s when it becomes a real conversation rather than a governance exercise.
I use one test with boards that cuts through most of the hesitation. Would you be comfortable explaining this AI decision to the person affected, to your regulator, and to the front page of a newspaper? If the answer is no, your risk appetite isn’t clear enough on that use case. Go back and define it before you approve deployment.
Three Questions Every Board Should Ask
When I’m assessing how seriously an organisation is taking AI governance, I ask three questions. The answers tell me most of what I need to know.
The first is whether they can tell me which deployed AI systems they couldn’t currently classify. Not hypothetical future deployments. The tools running right now. If the answer takes more than a few minutes to produce, they don’t have an AI inventory. They have a documentation problem they’ve been calling governance.
The second is which AI decisions currently in operation they’d be unwilling to defend publicly. If the answer is none, they’re either very well governed or they haven’t looked hard enough. In my experience it’s usually the second. Most organisations have at least one AI use case that wouldn’t survive media scrutiny. The ones that say otherwise tend to be the ones who haven’t asked the question seriously.
The third is the one that matters most. Can they show me a use case that was stopped, changed, or escalated because of their governance process?
If the answer is no, the governance isn’t working. It might be producing policies, registers, and committee minutes. But it’s not governing.
Real governance has consequences. It changes scope. It adds controls. It delays deployment when the evidence isn’t there. It escalates decisions that are uncomfortable. Occasionally it says no.
I’ve seen governance processes that looked thorough on paper — documented, chaired, minuted — where not a single AI project had ever been materially changed by the process. Every use case that went in came out approved. The governance was providing cover, not challenge.
That’s the test. Not whether you have a policy. Not whether legal reviewed it. Whether the process has ever altered the path of the business when the risk demanded it.
Accountability When It Goes Wrong
When an AI system causes harm, the board conversation starts with three questions. What did you know. When did you know it. What did you do about it.
Not in abstract terms. In evidence.
Where was the inventory? Where was the classification? Who approved the use case? What risks were accepted and by whom? What controls were required? What monitoring was in place? What changed when the warning signs appeared?
If the answer to most of those is “we had a policy,” that won’t be enough. Regulators have seen the policy defence before. So have courts. So has the media.
What they’re looking for is a chain of deliberate decisions. Evidence that foreseeable harms were identified, classified, challenged, and managed before the organisation chose to proceed. Not perfection. Deliberateness.
Directors face growing scrutiny over AI governance failures, including questions about whether foreseeable harms were left unmanaged on their watch. That’s fiduciary territory, and it’s getting harder to argue otherwise. The allocation of accountability for AI systems needs to be mapped explicitly — across executives, managers, and directors — not assumed to sit somewhere in IT or legal.
There’s a specific misunderstanding worth addressing here. Most organisations assume that if a SaaS vendor built the AI feature, the vendor carries the legal responsibility. The EU AI Act doesn’t work that way.
Under the Act, organisations that deploy AI systems — even AI embedded in third-party software they didn’t build and didn’t explicitly buy as AI — are classified as Deployers. Deployers carry distinct legal obligations. Due diligence on the AI capability, risk assessment, human oversight measures, and transparency requirements. It’s worth noting that obligations under the Act are risk-based and proportional, not one-size-fits-all. But proportionality doesn’t remove the obligation to classify and assess. Claiming a feature was embedded in SaaS rather than deliberately procured is unlikely to satisfy a regulator asking why deployer obligations weren’t met.
That changes the procurement conversation considerably. Every SaaS renewal is now a potential AI governance decision. Most procurement processes aren’t set up to ask that question yet.
Governance failure has a recognisable pattern when you see it in hindsight. No clear owner. No risk appetite. No escalation route. Weak supplier assurance. No meaningful challenge before deployment. Each decision looked reasonable in isolation. Collectively they left the organisation exposed to something foreseeable.
That’s what accountability actually looks like when AI fails. Not a fine or a regulatory finding in isolation. Loss of trust, board scrutiny, customer harm, and a question from stakeholders that’s very hard to answer.
Was this foreseeable. And did you govern it properly.
The First Conversation
If your board is behind on this and wants to move, the first conversation isn’t complicated.
Where is AI already influencing decisions in this organisation, and who owns the risk?
Not where is the innovation team experimenting. Not where are we building something new. Where is AI already scoring, ranking, recommending, summarising, routing, or shaping decisions right now. Including the tools nobody bought as AI.
That conversation should involve the CEO, CIO, CISO, legal, risk, procurement, HR, and the major business owners. Not because it’s a technical question. Because the answer touches all of them.
From that conversation, three things should follow within days rather than months. An AI decision-point inventory — what’s running, what it’s doing, who approved it. A first-pass risk classification — not perfect, just enough to know what needs board attention. And a list of use cases that need executive or board escalation before they go any further.
That’s the starting point. Not an AI strategy. Not a new committee. Not a policy review.
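For the teams asked to produce that inventory, the record itself doesn’t need to be sophisticated. Below is a minimal sketch of what a first-pass entry might capture; the field names and risk labels are illustrative assumptions, not terms drawn from the EU AI Act or any particular framework.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch only. Field names and risk labels are assumptions,
# not terminology from the EU AI Act or any specific governance framework.
@dataclass
class AIDecisionPoint:
    system: str               # the tool or platform the capability lives in
    capability: str           # what it does: scoring, ranking, routing, summarising
    decision_influenced: str  # the business decision it shapes
    business_owner: str       # a named person, not a vendor, a team, or "IT"
    approved_as_ai: bool      # was this ever approved as an AI use case?
    first_pass_risk: str      # e.g. "board attention", "monitor", "low"
    next_review: date         # when the classification gets revisited

# Example entry for the kind of embedded feature described earlier:
# a segmentation model that arrived inside an already-approved platform.
segmentation = AIDecisionPoint(
    system="Outreach platform (SaaS)",
    capability="Audience segmentation model",
    decision_influenced="Who gets reached, with what message, and how often",
    business_owner="Head of Audience",
    approved_as_ai=False,
    first_pass_risk="board attention",
    next_review=date(2026, 2, 1),
)
```

A spreadsheet with the same columns does the same job. What matters is that every row names an owner and carries a first-pass risk label, so the escalation list falls out of the inventory rather than being assembled from memory.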
The entertainment client I mentioned at the start is working through that process now. The segmentation tool is still running. But it has an owner, a classification, and a set of conditions under which it gets reviewed. That’s a different position from the one it was in three months ago.
Most boards I speak to know they’re behind on this. The ones that move are the ones that stop treating it as a future problem.
Which part of that first conversation would be hardest to have in your organisation?
Acceptable Risk (Documented)
Reference List
EU AI Act — Timeline and Obligations
EU AI Act — Official implementation timeline. Primary source for all three deadline dates (February 2025, August 2025, August 2026). European Commission, “Navigating the AI Act”: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
EU AI Act — Article 3: Definitions (Provider and Deployer). Statutory definition of “deployer” and “provider” under the Act. EU AI Act, Article 3: https://artificialintelligenceact.eu/article/3/
EU AI Act — Article 26: Obligations of Deployers of High-Risk AI Systems. Statutory basis for deployer obligations: due diligence, human oversight, monitoring, transparency. EU AI Act, Article 26: https://artificialintelligenceact.eu/article/26/
EU AI Act — Article 26 (Official Commission service desk). Commission-hosted version of the deployer obligations. AI Act Service Desk, Article 26: https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-26
Board Oversight and Preparedness Statistics
Fortune 100 board AI oversight — 48%, up from 16%, tripled in one year. Source: EY Center for Board Matters, 2025 proxy season analysis of 80 Fortune 100 filings. Cybersecurity Dive, “Fortune 100 firms accelerate disclosures linked to AI, cybersecurity risk”: https://www.cybersecuritydive.com/news/fortune-100-firms-disclosures-ai-cybersecurity-risk/802839/
Only 12% of C-suite respondents correctly identified appropriate controls against AI risks. Source: EY Responsible AI Governance survey, October 2025, 975 C-suite leaders across 21 countries. EY Global Newsroom, “EY survey: companies advancing responsible AI governance linked to better business outcomes”: https://www.ey.com/en_gl/newsroom/2025/10/ey-survey-companies-advancing-responsible-ai-governance-linked-to-better-business-outcomes
Supporting source for both statistics (aggregated). Corporate Compliance Insights, “Board Oversight of AI Triples Since ’24”: https://www.corporatecomplianceinsights.com/news-roundup-october-31-2025/
Deployer Liability — Practitioner and Legal Guidance
Deployer vs Provider distinction — practical guide for August 2026. Confirms that organisations using third-party AI embedded in SaaS are classified as Deployers with Article 26 obligations regardless of procurement route. Savia Learning, “EU AI Act Deployer Obligations: A Practical Guide for 2026”: https://savialearning.com/articles/eu-ai-act-deployer-obligations
Provider and Deployer roles — legal analysis. Stephenson Harwood analysis of the provider/deployer distinction and the requalification clause. Stephenson Harwood, “The roles of the provider and deployer in AI systems and models”: https://www.stephensonharwood.com/insights/the-roles-of-the-provider-and-deployer-in-ai-systems-and-models/
Deployer obligations — general obligations including non-high-risk systems. Confirms deployers have AI literacy and transparency obligations regardless of risk classification. DataGuard, “The EU AI Act: What are the obligations for deployers?”: https://www.dataguard.com/blog/the-eu-ai-act-obligations-for-deployer/
Director and Board Liability
Directors, AI governance failures, and fiduciary scrutiny. Covers board fiduciary responsibility and the growing expectation of active AI oversight. Directors & Boards, “AI and the Audit Committee”: https://www.directorsandboards.com/committees/audit-committee/ai-and-the-audit-committee/
D&O underwriting and AI governance maturity. Insurers now scrutinise board-level AI governance as part of D&O underwriting. Aon, “AI Risk 2026: What Business Leaders Need to Know”: https://www.aon.com/en/insights/articles/ai-risk-2026-practical-agenda
