Actuarial AI Starts With the Spreadsheets, Not the Models

Here's the short version:

  • Core actuarial modeling engines (Prophet, AXIS, Integrate) are usually the most governed part of the estate. The pressure isn't there.

  • Financial risk, deployment delays, and audit difficulty live in the operational layer around them — the spreadsheets, manual adjustments, and review steps that surround the model.

  • AI strategy keeps collapsing back into a governance question. Generic AI applied to ungoverned logic exposes the gaps faster than it closes them. AI works in actuarial when the underlying structure is extracted, traced, and controlled first.


Over the past two months, our team has sat in on a lot of conversations with actuarial leaders at carriers, reinsurers, and life and retirement firms. Different geographies, different stacks, different stages of modernization. A pattern keeps showing up that's worth naming.

The hardest questions chief actuaries are fielding right now don't come from auditors or regulators. They come from their own CFOs, CIOs, and, increasingly, boards. And almost none of the questions are about actuarial methodology.

They're about control, speed, traceability, and what happens when something moves unexpectedly. And they're being asked with new urgency because every executive in the building is also asking some version of "what's our AI strategy here."

That conversation surfaces the operational gaps that were already there.

Two answers, two confidence levels

Ask a chief actuary about their core actuarial modeling engine and the answer comes back clean. Prophet, AXIS, Integrate — whichever engine sits at the center — has matured into a relatively well-governed estate. Methodology is documented. Version control exists. Model owners know what they own.

Now ask a different set of questions:

How long would it take to explain why the result moved last quarter?

If we deploy a pricing change next week, can we prove what changed and who approved it?

Which of our spreadsheets are creating financial risk right now?

The answers get noticeably less crisp.

Not because actuarial teams are unprepared — they're usually the most rigorous group in the building — but because these questions are aimed at a different layer entirely.

What is the actuarial operational layer?

It's everything that happens around the core engine to turn modeled output into decisions, reports, and signed-off results. Adjustments to model output. Reporting overlays. Change documentation. Comparison workbooks. Review and approval steps. Local analysis files that explain a deviation to finance.

Most of it lives in Excel, supplemented by SQL, scripts, and BI tools.

A few examples from recent conversations:

A pricing leader at a global life carrier described their pricing logic as fundamentally sound — but said deploying a change takes "bloody ages." The actuarial work isn't the bottleneck. Testing, comparison, and approval are. They happen across files and inboxes, with manual handoffs at every step.

A reinsurance team described running between 75 and 100 Excel-based adjustment workbooks every quarter to amend model results before reporting flows out. The core engine performs. The amendments are the problem.

A multi-line carrier described root-cause investigation — finding the small block of business causing an unexpected movement inside millions of records — as taking up to two weeks. The team is good. The tooling around result explanation isn't.

A senior leader at a global insurer summarized the underlying reality: there is effectively no end-to-end process in the firm that doesn't pass through Excel somewhere.
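The root-cause hunt in that third example — isolating the block of business behind an unexpected movement — is, mechanically, a group-and-compare problem. A minimal sketch of the idea, with invented field names ("block", "reserve") and toy numbers standing in for the millions of real records:

```python
from collections import defaultdict

def attribute_movement(prev_records, curr_records, key="block"):
    """Sum a result field per business block for two quarters and
    rank blocks by absolute movement, largest first."""
    def totals(records):
        sums = defaultdict(float)
        for r in records:
            sums[r[key]] += r["reserve"]
        return sums

    prev, curr = totals(prev_records), totals(curr_records)
    blocks = set(prev) | set(curr)
    moves = {b: curr.get(b, 0.0) - prev.get(b, 0.0) for b in blocks}
    return sorted(moves.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Toy data: three blocks, two quarters.
prev_q = [
    {"block": "term_life", "reserve": 100.0},
    {"block": "annuity",   "reserve": 250.0},
    {"block": "group",     "reserve": 80.0},
]
curr_q = [
    {"block": "term_life", "reserve": 101.0},
    {"block": "annuity",   "reserve": 310.0},
    {"block": "group",     "reserve": 79.0},
]

ranked = attribute_movement(prev_q, curr_q)
print(ranked[0])  # → ('annuity', 60.0)
```

The arithmetic is trivial; the two-week cost in practice comes from the data being scattered across workbooks rather than sitting in one queryable place.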

This layer modernized at a different pace than the core.

Engines got industrialized. The wrapper around them stayed manual.

Now the two are out of step, and that mismatch is what executives are really asking about, even when their questions sound like they're about the model.

Where does governance actually break down in actuarial workflows?

Most governance conversations assume the model is the unit of control. In practice, model risk management (MRM) frameworks, vendor controls, and validation processes have built up a real layer of discipline around core engines. But the discipline is uneven across the rest of the workflow.

Where governance breaks down:

  • the spreadsheet that adjusts model output before it lands in reporting

  • the SQL or script that reshapes results for downstream systems

  • the workbook that compares this quarter to last

  • the file that supports management signoff

  • the local analysis that explains a deviation to finance

This is the end-user computing (EUC) layer — and it's the layer audit and validation are most likely to ask about. Documentation is slowest to produce here. Version evidence is weakest. Dependence on specific individuals is highest. When a senior actuary leaves, what walks out the door usually isn't the model. It's the working knowledge of how the surrounding files actually behave.

When governance is discussed at the executive level, the implicit assumption is that governance applies to the model. The harder problem sits in the layer around it.

Executive stakeholders sense this, even when they don't articulate it precisely.

Can we trace it? Can we prove it? Can we move faster without losing control?

These questions are pointed at the operational layer.

Why AI strategy keeps collapsing back into a governance question

Now to AI directly, because that's where most of these conversations end up.

The teams making real progress on AI in actuarial workflows have one thing in common: they treat structure and traceability as prerequisites, not afterthoughts. The reason is mechanical. AI applied to a workbook it can't actually understand will produce confident-sounding output anyway — that's how generative models work. It looks like an answer. Whether it reflects the real logic depends entirely on what the AI was given to work with.

Generic AI tools tend to read a spreadsheet the way they read text. They infer. They guess at intent. With actuarial workbooks — formulas, dependencies, named ranges, version drift, partial documentation — guessing produces output that's plausible but unreliable.

Several teams we've spoken with have run early experiments using general-purpose tools against large actuarial outputs and hit the same wall: structure, lineage, and repeatability matter more than language fluency, and generic tools struggle once the dataset gets serious.

The teams getting useful results take a different approach. They extract the structure of the workbook deterministically first — formulas, dependencies, data flow — and then apply AI to that structured representation.

The AI isn't guessing what's in the file. It's working from a real map of it.

That distinction matters because it's the difference between AI that surfaces uncontrolled logic faster than the team can defend it, and AI that genuinely accelerates documentation, comparison, and explanation.
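Deterministic extraction is not exotic. Once a tool has read the formulas out of a workbook (a library such as openpyxl can do this), the dependency structure falls out of straightforward parsing — no inference required. A minimal sketch, with an invented four-cell workbook:

```python
import re

# Hypothetical mini-workbook: cell address -> stored content, as a
# reader would surface them. These cells are invented for illustration.
cells = {
    "A1": "1000",      # modeled output
    "A2": "0.95",      # manual adjustment factor
    "B1": "=A1*A2",    # adjusted result
    "C1": "=B1-900",   # movement vs. prior value
}

# Matches simple A1-style references; real parsers handle ranges,
# sheet names, and named ranges as well.
CELL_REF = re.compile(r"\b[A-Z]{1,3}[0-9]+\b")

def dependencies(cells):
    """Map each formula cell to the cells it reads — a deterministic
    structure an AI layer can reason over instead of guessing."""
    deps = {}
    for addr, content in cells.items():
        if content.startswith("="):
            deps[addr] = sorted(set(CELL_REF.findall(content)))
    return deps

print(dependencies(cells))
# → {'B1': ['A1', 'A2'], 'C1': ['B1']}
```

A map like this is what "working from a real map of the file" means in practice: the AI is handed facts about the logic, not a blob of text to interpret.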

The work that creates real AI-readiness — extracted structure, traceable changes, governed adjustments — is the same work that answers the executive questions chief actuaries are already getting. They're not separate workstreams.

A more useful question to ask

If we could rewrite the question executives are bringing to actuarial right now, this is the version that surfaces what actually needs to happen:

Can we explain what changed in our outputs last quarter, prove the change was controlled, and deploy the next change without a multi-week detour?

It's a more honest framing than "are we AI-ready," because it points at operational reality.

And the answer tells you where to invest.

If the answer is no, the issue is in the layer around the model — the spreadsheet-driven logic, the manual handoffs, the documentation assembled after the fact, the review process that runs on memory rather than evidence.

That work has to happen before any AI ambition becomes credible.

If the answer is yes, the implications go further than most teams expect. Pricing and product changes can move from idea to production in days rather than quarters. Audit and validation conversations stop consuming senior time. Result investigation that used to take two weeks runs in hours.

AI initiatives stop being aspirational — they have a real, governed estate to operate on. And the chief actuary stops absorbing executive pressure as a translation problem and starts shaping the agenda.

The actuarial modernization sequence

Most firms don't need to replace what works.

Core actuarial engines should stay where they are. Spreadsheets that genuinely belong in spreadsheets should stay too.

The question is whether the operational layer around them is governed, traceable, and connected to the rest of the workflow.

The work falls into three steps:

  1. Find what matters across the estate. Most firms can't answer "which spreadsheets are creating risk right now" because no one has visibility across teams and folders. Prioritization needs evidence, not memory. This is where Coherent Insights does its work — discovering, cataloging, and surfacing intelligence across the spreadsheet estate so the conversation moves from anecdote to inventory.

  2. Govern what stays in Excel. The operational layer should carry the same standard of control as the core engine — versioning, approval, audit trail — without forcing business users out of the tool they actually work in. Coherent Control extends IT-grade governance into Excel itself, so the review and approval workflow runs as enterprise infrastructure rather than email chains.

  3. Industrialize the logic that genuinely should scale. Some calculations belong as services — tested, deployed, audited, accessible to downstream systems, independent of any single person or file. Coherent Spark takes governed Excel logic and exposes it as deterministic, production-ready, callable services without requiring a code rewrite.
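To make step 3 concrete, here is a sketch of what "spreadsheet logic as a governed service" means at the smallest scale. Everything here is invented for illustration — the function, the version tag, and the result shape do not reflect Coherent Spark's actual API; the point is only that the same arithmetic becomes versioned, testable, and auditable once it leaves the file:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class CalcResult:
    value: float
    logic_version: str  # which approved version of the logic ran
    called_at: str      # audit-trail timestamp (UTC, ISO 8601)

LOGIC_VERSION = "pricing-v12"  # hypothetical approved version tag

def annual_premium(sum_assured: float, rate_per_mille: float) -> CalcResult:
    """The same arithmetic a pricing workbook might perform, now a
    pure function any downstream system can call and any test can pin."""
    value = sum_assured / 1000 * rate_per_mille
    return CalcResult(value=value,
                      logic_version=LOGIC_VERSION,
                      called_at=datetime.now(timezone.utc).isoformat())

res = annual_premium(250_000, 4.2)
print(res.value, res.logic_version)  # → 1050.0 pricing-v12
```

The version tag and timestamp are the governance payoff: every result carries evidence of which approved logic produced it and when, which is exactly what the audit questions earlier in the piece are asking for.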

That sequence is what lets a chief actuary answer the harder questions confidently. It's also the foundation that makes AI ambition real instead of aspirational. The teams who can answer the executive questions crisply got there by working on the operational layer first.