Most AI ROI projections wouldn't survive a first-year finance student's scrutiny. The CFO asking "where's the money?" isn't being obstructionist—they're doing their job. And their skepticism is backed by data.
Federal Reserve analysis shows generative AI saves its users 5.4% of their work hours weekly, which translates to just a 1.1% aggregate productivity gain across the whole workforce. Compare that to the transformational claims in most AI business cases and the gap becomes obvious.
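The gap between the two figures is worth working through. One way to reconcile them, sketched below, is that the aggregate number averages over all workers, including the majority who don't use the tools; the adoption rate here is inferred from the article's two figures, not reported by the Fed study.

```python
# Back-of-envelope reconciliation of the two Federal Reserve figures.
# The 5.4% and 1.1% come from the article; the adoption rate is inferred.
user_hours_saved = 0.054   # share of weekly hours saved by workers who use genAI
aggregate_gain = 0.011     # productivity gain averaged over the whole workforce

# If non-users save nothing, aggregate gain = per-user savings * adoption rate.
implied_adoption = aggregate_gain / user_hours_saved
print(f"Implied share of workers using genAI: {implied_adoption:.0%}")  # roughly 20%
```

In other words, even generous per-user savings dilute quickly when only a fraction of the workforce actually uses the tools, which is exactly the adoption problem the rest of this article returns to.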
Why ROI Projections Fail
Wrong baseline. Most business cases compare AI performance against the current state. The honest comparison is: what would this investment achieve if applied to non-AI improvements with identical resources? Process optimization, system integration, and training programs often deliver comparable results at lower risk.
Unmeasured costs. AI proposals routinely omit governance overhead, talent premiums (56% higher than standard roles for AI-skilled workers), technical debt accumulation, infrastructure scaling, and leadership opportunity costs. Once the total cost of ownership is counted, many positive business cases turn negative.
Optimistic timelines. Claims of "12 months to value" contradict reality. Even JPMorgan Chase—with massive resources and technical talent—acknowledges most of their 450 AI use cases remain developmental. If they need patience, so does your organization.
Soft benefits masquerading as hard ROI. "Enhanced decision-making" and "improved customer experience" cannot be verified or disproven post-implementation. They exist to make the business case work, not to measure actual impact.
The 10-20-70 Reality Check
Successful implementations follow a consistent resource allocation pattern: 10% on algorithms and models, 20% on data and infrastructure, 70% on people, process, and culture.
Most organizations invert this ratio, spending 80% or more on technology. Then they wonder why adoption fails, processes don't change, and the promised returns never materialize. The business case projected technology-driven efficiency. Reality demanded organizational transformation.
Building a Business Case That Holds Up
Use conservative assumptions aligned with research. If Federal Reserve data shows 1.1% aggregate productivity gains, don't project 30%. Start with evidence-based estimates and identify specific conditions that could improve results.
Explicitly identify your assumptions. Every business case depends on assumptions: data will be clean enough, users will adopt the tool, processes will change, leadership will sustain commitment. List them. Track them. When assumptions prove wrong, the business case needs revision—not wishful thinking.
Include failure scenarios. What happens if adoption reaches only 40% of projected levels? What if data quality issues add three months to the timeline? What if the competitive landscape shifts? Honest business cases model these scenarios rather than pretending they won't happen.
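The scenario modeling described above can be sketched with a simple sensitivity check. All the numbers below are illustrative assumptions, not benchmarks from the article; the point is that the same ROI formula should be run under the downside cases, not just the pitch case.

```python
# Hypothetical one-year ROI model stressed under downside scenarios.
# Dollar figures and scenario parameters are illustrative assumptions.
def roi(annual_benefit, annual_cost, adoption=1.0, delay_months=0):
    """Simple one-year ROI: benefits scale with adoption and shrink with delay."""
    effective_months = max(0, 12 - delay_months)
    benefit = annual_benefit * adoption * effective_months / 12
    return (benefit - annual_cost) / annual_cost

base = roi(2_000_000, 1_200_000)                         # the case as pitched: +67%
low_adoption = roi(2_000_000, 1_200_000, adoption=0.4)   # 40% uptake: -33%
data_delay = roi(2_000_000, 1_200_000, delay_months=3)   # data issues cost a quarter: +25%
```

A business case that survives the low-adoption and delay scenarios is worth funding; one that only works in the base case is a projection, not a plan.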
Propose measurable milestones instead of full upfront funding. Stage-gate funding tied to demonstrated results is more credible than requesting the full investment based on projections. It also forces the discipline of measuring actual versus expected performance at each stage.
Where the Real Money Is
Here's the uncomfortable truth most AI proponents ignore: back-office automation consistently delivers 5-10x higher ROI than customer-facing applications. Process automation, document processing, compliance monitoring—these aren't exciting use cases. They don't generate impressive demos. But they deliver $2-10 million annually per implementation.
Over 50% of generative AI budgets flow to sales and marketing applications because they're visible and easy to champion. The organizations achieving real returns are investing where the math works, not where the board presentation is most compelling.
The Honest Bottom Line
The organizations succeeding with AI will gain structural advantages that compound over time. The investment is justified—but only when the business case is honest about costs, realistic about timelines, and specific about how value will be measured.
Your CFO isn't the obstacle. They're the quality check your AI strategy desperately needs.
This article is adapted from Neil's AI Execution Weekly newsletter and the measurement chapters of Why AI Fails.