Every AI business case looks compelling in the deck. The ROI calculations are almost always wrong — not because of deliberate deception, but because the standard frameworks miss three categories of cost and systematically overstate the pace of value realisation.
The Standard Framework (And Its Problems)
The typical AI ROI model looks like this:
ROI = (Hours saved × Hourly rate + Revenue uplift) / Total investment
This is not wrong, exactly. It is just incomplete in ways that consistently surprise organisations 18 months after deployment.
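The standard model can be sketched in a few lines. The figures below are hypothetical, chosen only to show how the arithmetic works:

```python
# The standard AI ROI formula from above. All input figures are hypothetical.
def naive_roi(hours_saved: float, hourly_rate: float,
              revenue_uplift: float, total_investment: float) -> float:
    """Return the simple benefit-to-investment ratio the typical model produces."""
    return (hours_saved * hourly_rate + revenue_uplift) / total_investment

# e.g. 5,000 hours at $60/hr plus $100k revenue uplift, against $400k invested
print(naive_roi(5_000, 60, 100_000, 400_000))  # 1.0 — "breaks even" on paper
```

Note what the inputs leave out: nothing in this function represents training, retraining, data remediation, or oversight labour, which is exactly the problem.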
What it misses on the cost side:
- Change management and training (typically 15–25% of total deployment cost)
- Ongoing model maintenance and retraining (often zero in the initial budget)
- Data quality remediation (the most consistently underestimated cost)
- Human oversight labour for edge cases the AI cannot handle
What it overstates on the value side:
- Assumes linear adoption curves (actual adoption is typically S-shaped with a long tail)
- Counts theoretical hours saved rather than actual capacity reallocation
- Ignores the productivity dip during the transition period
A Better Framework
We use a four-quadrant model that forces explicit reasoning about each category:
Quadrant 1: Direct Cost Reduction
Automation of specific, measurable tasks. This is the easiest to quantify and the most reliable ROI source. Example: an accounts payable agent processing invoices reduces processing cost per invoice from $12 to $2.
Measurement: Process before/after comparison. Track manually.
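As a sketch of the Quadrant 1 arithmetic, using the invoice example above (the annual volume and deployment cost are assumed for illustration):

```python
# Quadrant 1 sketch: direct cost reduction on invoice processing.
# Per-invoice costs are from the example; volume and investment are assumptions.
cost_before = 12.0         # $ per invoice, manual process
cost_after = 2.0           # $ per invoice, with the AP agent
annual_volume = 50_000     # invoices per year (hypothetical)
deployment_cost = 300_000  # total first-year investment (hypothetical)

annual_saving = (cost_before - cost_after) * annual_volume
payback_months = deployment_cost / (annual_saving / 12)
print(f"${annual_saving:,.0f} saved/year, payback in {payback_months:.1f} months")
```

Because both cost figures come from a before/after process comparison, this is the one quadrant where the ROI claim can be audited line by line.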
Quadrant 2: Revenue Enablement
AI that allows the business to do things it could not do before, or to do them at a scale previously uneconomical. Example: personalised outreach at the scale of mass email, but with the conversion rate of individual sales calls.
Measurement: A/B test against control group. Allow 90 days minimum.
Quadrant 3: Risk Reduction
Compliance automation, fraud detection, quality control. The ROI is the cost of the risk event multiplied by its probability of occurrence — in other words, the expected loss avoided. This is the hardest quadrant to claim credit for, because success looks like nothing happening.
Measurement: Incident rate before/after. Industry benchmark comparison.
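The expected-loss calculation can be made concrete. The incident cost and probabilities below are hypothetical; in practice they come from incident history and industry benchmarks:

```python
# Quadrant 3 sketch: annual expected loss avoided by a risk-reduction control.
# All three inputs are hypothetical illustration values.
incident_cost = 2_000_000  # $ cost if the risk event occurs
p_before = 0.05            # annual probability without the control
p_after = 0.01             # annual probability with AI monitoring

expected_loss_avoided = incident_cost * (p_before - p_after)
print(f"${expected_loss_avoided:,.0f} expected annual value of the control")
```

The sensitivity to `p_before` is worth stress-testing: small changes in an estimated probability swing the claimed ROI substantially.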
Quadrant 4: Strategic Optionality
AI capabilities that position the organisation for future opportunities. Hardest to quantify but often the most significant over a 3-5 year horizon.
Measurement: Milestone-based assessment against strategic objectives.
The Readiness Assessment
Before calculating expected ROI, assess readiness across five dimensions, scoring each from 1 to 5. For every dimension that scores below 3, apply a 30% haircut to the expected ROI.
- Data quality — Is the data clean, documented, and accessible?
- Process clarity — Is the target process well-defined and consistent?
- Change capacity — Does the organisation have bandwidth to absorb change?
- Technical infrastructure — Are the integration points accessible and documented?
- Governance maturity — Are AI oversight and risk management processes in place?
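The haircut rule can be sketched as follows. This reads the 30%-per-dimension rule as compounding (two weak dimensions leave 70% × 70% of the original estimate); the dimension scores shown are hypothetical:

```python
# Readiness haircut sketch: 30% reduction per dimension scoring below threshold.
# Assumes the per-dimension haircuts compound; scores are hypothetical.
def adjusted_roi(expected_roi: float, scores: dict[str, int],
                 threshold: int = 3, haircut: float = 0.30) -> float:
    below = sum(1 for s in scores.values() if s < threshold)
    return expected_roi * (1 - haircut) ** below

scores = {"data_quality": 2, "process_clarity": 4, "change_capacity": 3,
          "infrastructure": 2, "governance": 4}
print(round(adjusted_roi(1.8, scores), 3))  # two dimensions below 3 → 0.882
```

The point of the exercise is less the exact multiplier than forcing the conversation: an ROI projection built on a readiness score of 2 for data quality is not the same projection at all.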
Want to run a proper AI readiness assessment before committing to investment? Our 20-point diagnostic gives you the full picture.