The Audit Portal platform

Every audit workflow.
One configurable platform.

Three review modes built around the same audit framework — your topics, your questions, your scoring criteria — applied consistently whether a human reviewer is driving or the AI pipeline is running.

Manual review with AI assistance

Built for the files that need
a human in the room.

Complex claims, litigated files, and high-severity cases benefit from reviewer judgment. The platform puts everything the reviewer needs in one place and handles the documentation automatically.

Live Review Environment

Your auditor works inside the platform with real-time access to claim notes, reserve history, and file data — not a spreadsheet export. Every piece of context is available at the moment of the determination.

Framework-Guided Assessment

Reviews follow your audit template — the same topics, questions, and scoring rules used across every audit in your organization. Findings are structured, comparable, and traceable.

Automatic Report Compilation

When the review is complete, findings are compiled into a finished report without a separate write-up step. The report reflects every determination made during the review.

QC Workflow

A built-in quality control pass sits between review completion and report close. QC reviewers see submitted findings and approve or flag them before the audit is finalized.

Automated AI pipeline

The same rigor as manual review.
Applied to every claim at once.

The pipeline doesn't apply a generic AI model to your files. It runs your audit framework — your topics, your questions, your scoring criteria — across every claim simultaneously, with findings traceable to specific questions.

Framework-Driven Scoring

Every claim is assessed against the topics and questions in your audit template. Scoring follows the same logic your reviewers apply manually — not a black-box risk model.

Contextualized Findings

Each finding references the specific audit question it responds to, with reasoning captured per claim. Nothing is surfaced without a traceable basis in your framework.

Pre-Processing Pass

Claim notes are pre-processed into digests before the pipeline runs, so scoring reads from a cache rather than waiting on per-claim AI calls during the run.

Calibration and Executive Summary

After the pipeline completes, findings are reviewed through a calibration pass before close. A portfolio-level executive summary is generated automatically on completion.

Bordereaux review

From raw loss run to prioritized
claim list — without manual triage.

Built for large bordereaux files where row-by-row review isn't feasible. The platform ingests any format, scores the full population, and evaluates selected claims against your audit criteria — the same criteria that drive your manual and AI pipeline reviews.

Flexible Ingestion

Upload XLSX or CSV files in any column layout. The platform auto-maps recognized fields and surfaces unrecognized columns for manual mapping with confidence-scored suggestions.
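Confidence-scored column mapping of this kind can be sketched with simple string similarity. This is an illustrative Python sketch, not the platform's implementation; the field names and threshold are assumptions:

```python
from difflib import SequenceMatcher

# Hypothetical canonical fields a loss-run ingester might recognize.
KNOWN_FIELDS = ["claim_number", "loss_date", "paid_indemnity", "outstanding_reserve"]

def suggest_mapping(column: str) -> tuple[str, float]:
    """Return the best-matching known field and a 0-1 confidence score."""
    normalized = column.strip().lower().replace(" ", "_")
    best = max(KNOWN_FIELDS, key=lambda f: SequenceMatcher(None, normalized, f).ratio())
    return best, SequenceMatcher(None, normalized, best).ratio()

# Columns whose best score falls below a threshold would be surfaced
# for manual mapping, with the suggestion and its confidence attached.
field, confidence = suggest_mapping("Loss Date")
```

In practice a real mapper would also use header synonyms and sample values, but the shape is the same: every unrecognized column gets a best guess plus a score the user can accept or override.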

Data Quality Profiling

Before scoring begins, a profiling pass grades the file on completeness, LOB code coverage, and claim notes quality — so you know what you're working with before any selections are made.
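A profiling pass like this amounts to per-dimension coverage ratios over the file. The sketch below is a minimal illustration; the dimension definitions (especially the note-length proxy for notes quality) are assumptions:

```python
def profile_file(rows: list[dict]) -> dict[str, float]:
    """Grade a loss run on three illustrative quality dimensions (0-1 each)."""
    n = len(rows)
    completeness = sum(1 for r in rows if all(v not in (None, "") for v in r.values())) / n
    lob_coverage = sum(1 for r in rows if r.get("lob_code")) / n
    # Assumed proxy: notes under 50 characters are too thin to score well.
    notes_quality = sum(1 for r in rows if len(r.get("notes", "")) >= 50) / n
    return {"completeness": completeness,
            "lob_coverage": lob_coverage,
            "notes_quality": notes_quality}
```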

Five-Factor Risk Scoring

Every claim receives a composite risk score based on attachment penetration, reserve development, description severity, litigation status, and jurisdiction risk. No manual flagging required.
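A composite over five factors is typically a weighted sum of normalized inputs. The weights below are illustrative assumptions, not the platform's actual values:

```python
# Assumed weights over the five factors named above; each factor is
# normalized to 0-1 before combination.
WEIGHTS = {
    "attachment_penetration": 0.25,
    "reserve_development": 0.25,
    "description_severity": 0.20,
    "litigation_status": 0.15,
    "jurisdiction_risk": 0.15,
}

def composite_score(factors: dict[str, float]) -> float:
    """Combine five normalized (0-1) factor scores into a 0-100 composite."""
    return round(100 * sum(WEIGHTS[k] * factors[k] for k in WEIGHTS), 1)
```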

Tier Classification

Scored claims are assigned to one of four tiers — Act Now, Monitor, Track, or Routine — with corresponding workflow tabs in the results view. Tier 1 and 2 claims are selected for evaluation by default.
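Tier assignment is then a threshold map over the composite score. The cutoffs here are illustrative assumptions, not the platform's actual boundaries:

```python
def assign_tier(score: float) -> str:
    """Map a 0-100 composite risk score to one of the four workflow tiers.
    Thresholds are assumed for illustration only."""
    if score >= 80:
        return "Act Now"
    if score >= 60:
        return "Monitor"
    if score >= 40:
        return "Track"
    return "Routine"
```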

Criteria-Based AI Evaluation

Selected claims are evaluated against your specific audit criteria — the same intent-based framework used in manual and AI pipeline reviews. Each claim receives a confidence score and opportunity flags per criterion.

Anomaly Screen

A secondary pass runs across the full population after evaluation — surfacing stale open claims, reserve inconsistencies, and LOB conflicts that risk scoring alone wouldn't catch.
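A population-wide screen like this is rule-based rather than score-based: each rule checks a relationship the composite score can't see. A minimal sketch, with an assumed staleness threshold and hypothetical field names:

```python
from datetime import date

STALE_DAYS = 365  # assumed threshold for a "stale" open claim

def screen_anomalies(claims: list[dict], today: date) -> list[tuple[str, str]]:
    """Flag stale open claims and reserve inconsistencies across the population."""
    flags = []
    for c in claims:
        # Open claim with no activity for over a year.
        if c["status"] == "open" and (today - c["last_activity"]).days > STALE_DAYS:
            flags.append((c["id"], "stale_open"))
        # Closed claim still carrying an outstanding reserve.
        if c["status"] == "closed" and c["outstanding_reserve"] > 0:
            flags.append((c["id"], "reserve_inconsistency"))
    return flags
```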

Cross-audit intelligence

Systemic patterns that individual
audit reports can't surface.

When multiple audits are in the platform, results can be analyzed across your entire portfolio — identifying what's consistently well-handled, what needs structural attention, and how performance is trending over time.

Portfolio Pattern Analysis

Topics across selected audits are grouped into four themes — Strength, Concern, Improving, and Inconsistent. Surfaced automatically, not assembled from separate reports.
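Grouping a topic into one of four themes can be sketched from its per-audit score history: spread separates Inconsistent, direction separates Improving, and the average splits Strength from Concern. Thresholds are illustrative assumptions:

```python
from statistics import mean, pstdev

def classify_topic(scores: list[float]) -> str:
    """Group a topic's per-audit scores (0-100, chronological) into a theme.
    All thresholds are assumed for illustration only."""
    spread = pstdev(scores)
    trend = scores[-1] - scores[0]
    if spread > 15:
        return "Inconsistent"
    if trend > 10:
        return "Improving"
    return "Strength" if mean(scores) >= 75 else "Concern"
```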

Topic Heatmap

A cross-audit view of how every topic scores across every audit in the selection. Clusters of underperformance that would appear as isolated findings in individual reports become visible as patterns.

Trend Tracking

Performance on each topic is tracked over time across your audit history — distinguishing persistent issues from variance, and genuine improvement from one-off results.

Partner-Level Scope

Partners managing multiple organizations can run analysis across any combination of audits and clients. Data isolation is enforced at the query level — each organization sees only its own data.

Across every review type

The infrastructure that runs
underneath every audit.

Configurable Audit Framework

Build your template once — line-of-business scoping, topic hierarchies, scored questions, leakage type tracking. Every review mode draws from the same library.

Multi-Role Workflow

Six roles with scoped access: Admin, Manager, Reviewer, QC, Viewer, and Partner. Assignments, queues, and reporting access follow role automatically.

Reporting Suite

Draft reports, live reports, printable views, and executive summaries — generated from review data, not assembled separately. Delivered on completion of any review mode.

Security and Data Isolation

Row-level security enforced at the database layer. Organization data is isolated by design, with invite-only access and role-based permissions throughout.

See it on your own data.

Request a walkthrough using a file from your book.