Lanzko Insights
Practical notes on claims innovation and AI trends—built for claims leaders.
The difference between a program that documents performance and one that changes it.
In 1992, Steve Jobs told a room of MIT Sloan students that he’d basically wasted the first decade of his career. Not because the products were bad, but because he’d spent ten years building tools that made managers more productive instead of tools that changed how the work actually got done.
He called that the difference between management productivity and operational productivity. Thirty years later, that distinction maps almost perfectly onto what’s broken in claims auditing.
The audit that changes nothing
Here’s a story most claims leaders recognize.
A reinsurer schedules an audit. The coordinator spends days chasing file lists, assembling a review team, and rebuilding the same tracking spreadsheet from scratch. Reviewers grind through files for weeks. Someone consolidates the findings, polishes a report, and presents it a month or more after the review period ended.
The report is solid. The issues are real. The recommendations make sense.
And then… almost nothing changes.
Not because anyone’s lazy or indifferent. By the time the report lands, the quarter is closed, the files have moved, adjusters have rotated, and the decisions you would want to revisit are already locked in.
That’s not a people problem. It’s a design problem. The audit program was built to observe the claims operation, not to run inside it.
Management productivity vs operational productivity
Jobs’ argument was that most companies over-invest in tools that help executives see and share information, and under-invest in tools that actually change how work flows. Reporting platforms, dashboards, slide decks — these are management productivity tools. They make performance visible. They’re genuinely useful.
But they don’t change operations. They document them.
Operational productivity tools are different. They sit inside the workflow. They change what happens, not just what gets reported. They close the loop between insight and action.
Most claims audit programs today are still built as management productivity exercises:
The deliverable is a report.
The audience is leadership, not the operation.
The claims operation runs exactly as it did before.
That isn’t the fault of the audit team. It’s the natural outcome of designing for observation rather than intervention.
What the data says about timing and action
The timing gap between when findings are discovered and when they reach decision-makers isn’t a minor inconvenience. It defines what the audit can actually accomplish.
Audit-and-feedback research is pretty clear on this: feedback is more likely to change behavior when it’s timely, specific, and delivered close to the point of care or work. More recent reviews of electronic audit-and-feedback systems say the same thing. Systems drive improvement when they provide near real-time, role-specific feedback and pair it with clear guidance on what to do next, not when they just publish performance data weeks later.
Translate that to claims:
If you spot a pattern of inadequate initial reserves while files are still open and adjusters are still assigned, that insight has operational value. You can intervene on live files.
If you spot the same pattern six weeks later in a summary document, you have historical documentation. Useful for benchmarking, negotiations, and governance — not for the claims that already closed with inadequate reserves.
Speed in claims auditing isn’t a nice-to-have. It’s the difference between an intervention and a paper trail.
High-performing audit programs intuitively get this. They’re not trying to create better reports. They’re trying to shrink the time between “we see something” and “we changed something.”
The coordination tax nobody budgets for
There’s another piece almost every coordinator will recognize.
Most audit programs burn a huge amount of energy on work that has nothing to do with evaluating claim quality:
Chasing file lists and exposure data.
Building and maintaining tracking spreadsheets.
Managing reviewer assignments through email.
Reconciling multiple versions of scoring frameworks.
Reformatting findings into something presentable.
This is coordination overhead. It’s real work, it’s exhausting, and it produces nothing of direct analytical value.
In practice, this coordination tax often consumes most of an audit coordinator’s time. The real work — reviewing, scoring, analyzing patterns, and turning them into actions — becomes secondary to keeping the machine running.
The predictable outcome:
Audit cycles stretch from weeks to months.
Programs that should run quarterly slip to once a year.
Time that should go into analysis and follow-through gets absorbed by administration.
Your most experienced people spend their days on tasks a well-designed system could handle automatically.
That’s the operational productivity gap, applied directly to audit.
What an operational audit program looks like
Shifting an audit program from management productivity to operational productivity isn’t an abstract strategy exercise. It’s a design choice.
An operational audit program:
Doesn’t end with a report — it uses findings to open an action loop.
Connects every material finding to an owner, a deadline, and a tracked resolution.
Keeps remediation in the same system that surfaced the problem.
Verifies in the next audit cycle whether anything actually changed.
This pattern aligns with what electronic audit-and-feedback research calls out as effective: feedback that is timely, specific, and embedded in a system that supports action planning and follow-through.
Operational programs also run at the cadence the business needs, not the cadence the coordinator can survive. That’s only possible if coordination overhead is automated away instead of handled by heroics.
And they don’t deliver insights six weeks after the review period. They surface them while reviewers are in the files, so leaders can act while there’s still something to act on.
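For readers who like to see the mechanics, that action loop reduces to a small amount of structure. Here’s a minimal sketch in Python; it isn’t a description of any particular platform, and every name in it (Finding, owner, the status values) is an illustrative assumption.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch only: the names, fields, and statuses here are
# illustrative assumptions, not any platform's actual data model.

@dataclass
class Finding:
    claim_id: str
    issue: str            # e.g. "initial reserve inadequate"
    owner: str            # who is accountable for remediation
    due: date             # deadline for the corrective action
    status: str = "open"  # open -> remediated -> verified

def escalation_list(findings: list[Finding], today: date) -> list[Finding]:
    """The action loop's teeth: unresolved findings past their deadline."""
    return [f for f in findings if f.status == "open" and f.due < today]

def verify_next_cycle(findings: list[Finding], recheck_passed: dict[str, bool]) -> None:
    """Next audit cycle: a finding closes only when the re-review confirms
    the behavior actually changed; otherwise it reopens with the same owner."""
    for f in findings:
        if f.status == "remediated":
            if recheck_passed.get(f.claim_id, False):
                f.status = "verified"
            else:
                f.status = "open"
```

The design point is the last function: a finding doesn’t close because someone reports it fixed. It closes when the next audit cycle confirms the change stuck.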
The standard we should hold ourselves to
Here’s the simple test to apply to any claims audit program:
Does it change what happens, or does it document what happened?
Reports delivered weeks after the review are documentation. Findings that translate directly into tracked action plans are operational. Dashboards that require manual updates are management tools. Live progress tracking that surfaces patterns while reviewers work is operational infrastructure.
The business case for claims auditing has always been operational — reduced leakage, better reserving discipline, stronger vendor oversight, improved regulatory posture. Those outcomes don’t come from prettier reports. They come from programs designed to change behavior in the operation, not just observe it.
A claims VP shouldn’t discover a systemic reserving issue from a document that took a month to assemble. They should see it while there’s still time to change reserves, strategies, and outcomes.
That’s the standard. And it’s achievable — if we’re willing to treat the audit not as a periodic reporting exercise, but as operational infrastructure.
Marc Lanzkowsky is the founder of The Audit Portal, a purpose-built claims audit platform for carriers, reinsurers, TPAs, and MGAs. He previously ran audit programs through Lanzko Claims Consulting.
