The Risk Register Nobody Looks At


Someone asked a simple question in a quarterly risk review last month.

“What’s actually different in the business since last quarter?”

The register was updated. RAG statuses were current. Actions had owners.

Nobody could answer.

That’s a large manufacturing client. Well-governed on paper. But the register had been maintained religiously while the business sat in exactly the same risk position it had held three months earlier.

This happens more than people admit.


When updating the register becomes the work

Most organisations treat risk registers as filing systems. You document the risk. You assign an owner. You update the status quarterly. Job done.

But here’s what actually happens.

The register becomes a collection of promises. Actions like “enhance access control processes” or “improve supplier assurance” sit there for months. They sound reasonable. Nobody challenges them. But they mean nothing operationally.

You can’t test them. You can’t prove them. You can’t fail them.

They’re designed to stay open.

The format often makes it worse. According to the IIA Foundation’s 2025 ERM study, 59% of organisations still rely on spreadsheets for ERM programme management, with only 21% implementing dedicated GRC platforms. A spreadsheet doesn’t ask you what changed. It just lets you update the cell.

The register becomes the work. Updating it feels like progress. But it confuses activity with outcome.


The moment after sign-off

Think about what happens after a risk gets signed off as acceptable.

In theory, someone owns it. They understand the exposure. They know what acceptable means in practice.

In reality, the risk gets overtaken.

Something more urgent lands — a supplier issue, a project delay, a restructure. The signed-off risk doesn’t get revisited. It just sits there, still marked acceptable, while the business quietly moves in a different direction.

I’ve seen this repeatedly. The language in the register sounded precise when it was approved. But three months later, the context had shifted, the team had half-turned its attention elsewhere, and nobody had gone back to ask whether the original acceptance still held.

Acceptable risk has a shelf life. Most registers don’t track it.

What does acceptable actually mean when priorities have shifted? When a control that was deemed sufficient is now understaffed? When the threat landscape has moved but the register hasn’t?

If nobody can answer that without checking the original entry, the risk isn’t managed. It’s filed.


Ownership without a job description

Most risk registers assign ownership by putting someone’s name in a column.

That’s not ownership. That’s administration.

I’ve seen what genuine ownership looks like. It’s not a behaviour pattern you read about. It shows up in specific, observable ways.

The people who actually own risks don’t wait for the next review cycle. Within days of a risk being assigned, they’ve pulled the relevant people together and defined what good looks like in their environment — not in register language, but in operational terms their team can act on.

Their reporting looks different too. Not “we’re planning to improve access control.” More like “privileged accounts are down 40% this month, here’s what’s left.” Outcomes, not intent.

And when challenged, they can close the loop. They bring evidence. They show you what changed, not what they did.

The others update the register. They report status. They attend the review. But the risk sits exactly where it was, because the register entry and the operational reality have quietly decoupled.

That’s where accountability gaps appear. Only 37% of risk leaders are confident their assessments captured all key risk drivers, according to Gartner’s 2024 ERM research. That gap exists because most registers document risks in isolation. The connections between them — the ones that matter most — aren’t visible from a spreadsheet cell.

The gaps only become undeniable during incidents.


The review cadence problem

Quarterly risk reviews become box-ticking exercises because nobody knows what changed since last time.

The meeting has a particular rhythm. Risk owners speak just enough to show activity. The chair moves things along. Board members ask about timelines. Everyone accepts progress as described rather than progress as evidenced.

You hear phrases like:

“We’ve made good progress this quarter.”

“Work is ongoing.”

“We’re socialising the approach with stakeholders.”

What you rarely hear:

“We’ve reduced the risk by doing X.”

“Here’s the evidence it’s working.”

There’s an unspoken agreement in these meetings. Everyone knows some actions have been “in progress” for multiple quarters. Everyone knows the language is deliberately soft. But nobody wants to pull that thread.

Challenging it creates friction. It exposes delivery failures. It forces accountability conversations people would rather avoid.

So the meeting settles into equilibrium. Governance functions. Risks get reviewed. But it’s no longer interrogating reality. It’s validating the narrative.

I’ve sat in enough of these to know the tell. It’s not what gets said. It’s the absence of anyone asking: “compared to last quarter, what is the business actually doing differently?”

If that question makes people uncomfortable, you have your answer.


The supplier blind spot

Third-party risks sit in registers but never get tested until something breaks.

You’ll see entries like “critical supplier outage impacting service delivery” with actions such as “ensure supplier resilience arrangements are in place” and “review SLAs and DR capabilities annually.”

Looks solid. Ticks the box.

But what hasn’t happened is more important.

Nobody has simulated the supplier failing. Nobody has tried to operate without them. Nobody has validated whether failover actually works end-to-end. Nobody has checked whether contractual assurances translate into operational capability.

The organisation relies on supplier attestations, audit reports, and contractual clauses. Not on evidence from its own environment.

When the supplier actually fails, the conversation changes fast.

It starts routine: “We’re seeing an issue with the supplier.”

Then someone asks: “Can we fail over?”

The answer comes back: “We’re just checking that.”

That’s the signal. If it had been tested, the answer would be immediate.

What follows is fragmentation. Systems aren’t in scope. Access doesn’t exist. Dependencies weren’t documented. The plan isn’t a plan. It’s a set of assumptions that nobody pressure-tested because the register said the risk was managed.

Someone senior asks: “So what are our options?”

That’s the moment it becomes clear there’s no rehearsed response. You’re designing one in real time.

A supplier risk that hasn’t been tested is a theoretical control. It exists in contracts, policies, and registers. Not in operational reality.

Third-party involvement in breaches doubled from 15% to 30% in 2024, according to Verizon’s Data Breach Investigations Report. Most of those organisations had supplier risk entries in their registers.

The first time it gets properly tested is when the supplier actually fails.


What it looks like when it’s working

The quickest way to tell if a risk register connects to operational reality is simple.

Can every action be tied to a specific, observable change in the business without explanation?

If you have to ask “what does that actually mean in practice?” more than once or twice, you’re looking at a theoretical register.

Actions like “enhance monitoring capability” or “mature identity and access management” require interpretation. They describe intent. They allow endless motion without outcome.

You can run workshops, procure tools, define processes, produce reports. And still have critical vulnerabilities sitting unpatched in production.

When it’s working, the language is uncomfortable in its simplicity:

“All privileged accounts now require MFA.”

“We can restore the core platform within 4 hours. Last tested January.”

“No critical vulnerabilities exposed to the internet for more than 7 days.”

No translation needed. No narrative. Just facts.

If an action can sit at “70% complete” for three months, it’s connected to reporting. Not operations.
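The distinction between narrated and provable actions can be made concrete. Here is a minimal, purely illustrative sketch: each register action becomes a check that returns evidence and can pass or fail. The data sources (`privileged_accounts`, `exposed_criticals`) are hypothetical stand-ins for what a real check would pull from an IAM system or vulnerability scanner.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical inventory data; a real check would query IAM / vuln tooling.
privileged_accounts = [
    {"name": "ops-admin", "mfa": True},
    {"name": "legacy-svc", "mfa": False},
]
exposed_criticals = [
    {"cve": "CVE-2024-0001", "first_seen": date.today() - timedelta(days=12)},
]

@dataclass
class CheckResult:
    passed: bool
    evidence: str  # what is different in the business, not what activity occurred

def all_privileged_accounts_require_mfa() -> CheckResult:
    missing = [a["name"] for a in privileged_accounts if not a["mfa"]]
    return CheckResult(
        passed=not missing,
        evidence=f"{len(missing)} privileged account(s) without MFA: {missing}",
    )

def no_critical_exposed_over_7_days() -> CheckResult:
    stale = [v["cve"] for v in exposed_criticals
             if (date.today() - v["first_seen"]).days > 7]
    return CheckResult(
        passed=not stale,
        evidence=f"critical vulns exposed >7 days: {stale}",
    )

# A register action in this form either passes or fails against evidence.
# An action like "enhance monitoring capability" cannot be written as a
# function at all — which is exactly the point.
for check in (all_privileged_accounts_require_mfa, no_critical_exposed_over_7_days):
    result = check()
    print(check.__name__, "PASS" if result.passed else "FAIL", "-", result.evidence)
```

The design choice matters more than the code: if an action can’t be expressed as something that could fail, it belongs in a plan, not a register.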

Vague actions are safe. They avoid conflict. They avoid accountability. They let progress be narrated rather than demonstrated.

But from a risk perspective, vague equals unmanaged.


The one-hour fix

You don’t need a six-month transformation to fix this.

You need an hour. The right people. Better questions.

Keep it tight. The risk owner. The person actually doing the work. Someone from operations who lives with the outcome. You, to challenge. Four people maximum. Any more and it turns back into a governance session.

Pick one risk. Not ten. One.

Start with: “Talk me through what is different in the business today because of these actions.”

No status updates. No reading from the register. Just that question.

Then push:

“Show me.”

If access is improved, show how it works now. If monitoring is in place, what would we see if something went wrong?

“Where would this break?”

If this control failed tomorrow, how would we know?

“When did we last prove this works?”

Not when it was reviewed. When it was tested.

“What’s still not where it needs to be?”

Forces honesty faster than anything else on the list.

By the end of the hour, you don’t get a better register.

You get clarity.

Actions get rewritten in language that can be proved or disproved. Ownership lands with the person doing the work, not the name in the column. Things that have been “in progress” for months either get a genuine commitment or get killed. Gaps that everyone quietly knew about become impossible to ignore once said out loud.

The dynamic shifts from reporting to reality.

You’re not trying to improve governance in that hour. You’re trying to answer one question: if this risk materialised tomorrow, would what’s written here actually help us?

If the answer is unclear, the register isn’t connected to reality.


The real question

Risk registers fail when they become filing systems pretending to be controls.

The moment after sign-off matters more than the sign-off itself. Acceptable risk has a shelf life and most registers don’t track it. Ownership needs to be a verb. Review cadences need to interrogate change, not validate narrative. Supplier risks need testing, not just documentation. Actions need to create outcomes that can be proved or disproved.

None of that requires a new platform, a consultant, or a transformation programme.

It requires someone willing to ask an uncomfortable question in a room where everyone has quietly agreed not to.

The organisations that get this right aren’t asking “do we understand the risk better?”

They’re asking “is the business actually safer than it was last quarter?”

If you stopped updating your risk register tomorrow, would risk reduction stop as well?

If the answer is yes, the register hasn’t been supporting delivery. It’s been substituting for it.
