Laying the foundations: Why most AI projects fail before they start
There’s a startling statistic: only 6% of AI proofs of concept ever make it to production. That means for every AI initiative that becomes part of how government actually works, roughly 15 others stall, fade, or get quietly shelved. That’s not a technology problem; it’s a foundations problem.
Across the public sector, departments are experimenting with AI tools at pace. Yet despite promising pilots and executive enthusiasm, most initiatives stall before delivering real value. The issue isn’t the sophistication of the models or the size of the budget. It’s that organisations skip the basics: the unglamorous groundwork that determines whether AI becomes part of daily work or another abandoned experiment.
If you’re serious about moving from pilot to production in your own department, here’s where I’d suggest starting:
Put users first: Enablement that actually enables
AI tools won’t deliver value if people don’t use them or, worse, use them incorrectly. Yet many rollouts assume that simply granting access is enough. It isn’t.
- Role-specific guidance matters. A caseworker needs different prompts and workflows from a policy analyst, and generic training sessions aren’t going to cut it. Instead, build practical resources tailored to how different teams actually work: prompt libraries for common tasks, step-by-step guides for specific use cases, and clear examples that demonstrate value immediately
- Make learning continuous, not one-off. AI tools evolve rapidly, and what worked last month might have better alternatives today. Create accessible channels where teams can ask questions, share what works, and learn from each other, whether that’s regular drop-in sessions, internal champions, or easily updated guidance. The goal isn’t just to train people once; it’s to build lasting capability across your organisation
Start with “good enough” governance
Here’s the governance paradox: if you wait for perfect policies, you’ll never start. Launch without any guardrails, and you’ll create risk, confusion, and ultimately resistance.
Start with clarity on the basics. Which tools can staff use? How do you ensure sensitive data stays secure? What requires human oversight? What’s acceptable risk versus a show-stopper? A simple, well-communicated framework beats a comprehensive policy that sits in draft for months whilst teams find workarounds.
The best governance structures we’ve seen are dynamic, not static. They include lightweight approval processes for common updates, regular reviews that reflect what’s actually happening, and clear escalation paths for the edge cases that will inevitably emerge. The objective isn’t to eliminate all risk; it’s to create guardrails that enable progress rather than bureaucracy that blocks it.
And perhaps most importantly, governance needs to build trust. People need to understand not just what they can do, but why certain boundaries exist. Clear ownership, documented decisions, and straightforward explanations work far better than lengthy policy documents that nobody reads.
Don’t forget the other essentials
- Data readiness remains the most common blocker. If your data is siloed, inconsistent, or inaccessible, even the best AI tools will underdeliver. Invest in data quality early; it’s less exciting than experimenting with new models, but it’s far more important
- Senior sponsorship isn’t optional. Without visible executive support – that is, someone who can unblock issues, secure resources, and champion adoption – initiatives drift. Identify your senior sponsor early and keep them actively engaged
- Measurement frameworks matter from day one. Define what success looks like before you launch: time saved, processes improved, user satisfaction increased. Without clear metrics, you can’t demonstrate value, secure ongoing funding, or know when to pivot
Build to last
The foundations aren’t glamorous. They won’t feature in conference presentations or strategy documents. But they’re what separates the 6% of AI initiatives that succeed from the 94% that don’t.
Get the basics right – user enablement that builds capability, pragmatic governance that protects without paralysing, and infrastructure that supports sustainable delivery – and you create the conditions for AI to move from promising pilot to everyday reality.
We’ve supported departments through this journey, from establishing governance frameworks that enable rather than block, to building user enablement programmes that create lasting capability. The focus is always the same: solving real problems for real people in ways that stick, not flashy pilots that gather dust.
Because at the end of the day, success isn’t measured in models deployed or tools rolled out. It’s measured in services improved, time freed up, and teams empowered to do their best work.
Ready to move beyond pilots? We work with public sector teams to build the foundations for successful AI adoption, from governance frameworks to user enablement. Get in touch to discuss your next steps.