Image: a speaker addresses attendees during Version 1's Women in Tech Leadership event, standing in front of a green plant wall during a fireside-style discussion.

AI hype vs reality: inside Pharma’s POC problem

A few weeks ago, I hosted a panel at our Women in Pharma, Medtech and Life Sciences event in London on the topic of AI hype versus reality, and I’ll be honest, I expected a fairly polite conversation about use cases and roadmaps. 

What I got instead was a room full of senior leaders admitting, some reluctantly, that their organisations are stuck, not because AI doesn’t work, but because their pilots do exactly what they’re supposed to do and then nothing happens afterwards. One panellist put it bluntly: the proof of concept succeeds, the slides get presented, everyone agrees it’s promising, and then the whole thing quietly dies. Another described it as a “repeating loop” where teams are still trying to figure out why previous pilots failed while simultaneously launching new ones. 

That, in a nutshell, is the POC trap, and from what I heard in that room, it’s the single biggest obstacle to AI delivering real value in pharma and life sciences right now. 

The board wants ROI on a technology that’s two years old

One of the panellists made a point that stuck with me: generative AI is roughly two years into mainstream adoption, yet most technology shifts take about ten years before organisations properly understand how to apply them. Boards, understandably, are already asking for return on investment, but that pressure is creating a distortion where teams rush into proofs of concept to show momentum without doing the harder work of figuring out whether those pilots were ever designed to become anything more.

There’s a real difference between experimentation and a purposeful proof of concept, and conflating the two is where a lot of organisations come unstuck. Experimentation is valuable, but a proof of concept should have a clear outcome, a defined business problem, and a realistic path to production. Most of the ones I see in this sector have the first, sometimes the second, and almost never the third.

The problem isn’t technology, it’s decision discipline

The panellists kept circling back to the same set of gaps, and none of them are technical.

No link to business strategy. One speaker described how the first question she asks any customer is what their AI strategy is and how a given project fits within their overall strategic goals, and most organisations can’t answer that cleanly. The pilot exists because someone had an idea or a vendor offered a free trial, not because it connects to a measurable business outcome.

No executive sponsor who’ll fight for it. Without a senior sponsor willing to push through the inevitable organisational resistance, the pilot stays an IT project and never becomes a business initiative, which means it never gets the cross-functional support it needs to scale.

Optimising for the wrong thing. Proofs of concept are by their nature optimised for a narrow problem in controlled conditions, and one panellist was direct about this: have you accounted for what happens when you try to do this at scale, with real data, real users, and real organisational complexity? Most POCs never ask that question because they’re simply not designed to.

Data foundations aren’t ready. One panellist noted she’s been asking “do you have data, is it good enough, and is it accessible?” for ten years, long before generative AI entered the conversation, and the answer is still often no. If your data isn’t in order, AI will just expose that problem faster than anything else would.

No baseline metrics. If you can’t measure where you started, you can’t prove what changed, and yet many pilots launch without agreed metrics for success, which makes building the business case for scaling almost impossible even when the pilot genuinely works.

What scaling looks like when adoption is designed in

The most practical contribution from the panel was a concrete example from a pharma organisation preparing for an IPO, where field-based sales teams were drowning in data from multiple channels, time zones, and inconsistent campaign models.

Rather than building something new and asking the team to learn yet another tool, they embedded machine learning into the CRM their people already used every day. The model sifted through historical data, market signals, and past activity to generate a prioritised action list each morning, complete with topics, response deadlines, and a confidence score for each sales opportunity.

What struck me was that the adoption approach mattered just as much as the technology itself. They piloted in specific geographies first, built storytelling around quick wins, and wove it into everyday conversations and team meetings, always connecting it back to how it would help individuals do their jobs better today rather than at some vague point in the future. The panellist was clear that putting AI into an existing system people are already familiar with is a quick win in itself because it removes the adoption barrier of learning something new entirely.
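To make the idea concrete, here is a toy sketch of that morning prioritisation step. This is not the panellist's actual system; every field name, weight, and rule below is my own invention, purely to show the shape of "rank opportunities by deadline and a blended confidence score":

```python
from dataclasses import dataclass

@dataclass
class Opportunity:
    account: str
    topic: str
    days_to_deadline: int
    historical_win_rate: float  # derived from past activity, 0..1 (hypothetical)
    market_signal: float        # strength of recent market signals, 0..1 (hypothetical)

def confidence(opp: Opportunity) -> float:
    """Toy confidence score: blend past performance with current signals."""
    return round(0.6 * opp.historical_win_rate + 0.4 * opp.market_signal, 2)

def morning_action_list(opps: list[Opportunity]) -> list[dict]:
    """Return a prioritised action list: nearest deadline first, then highest confidence."""
    ranked = sorted(opps, key=lambda o: (o.days_to_deadline, -confidence(o)))
    return [
        {
            "account": o.account,
            "topic": o.topic,
            "respond_within_days": o.days_to_deadline,
            "confidence": confidence(o),
        }
        for o in ranked
    ]

actions = morning_action_list([
    Opportunity("Acme Pharma", "contract renewal", 5, 0.7, 0.4),
    Opportunity("Biogenix", "new product briefing", 2, 0.5, 0.9),
])
print(actions[0]["account"])  # the nearest deadline surfaces first
```

The point of the sketch is not the scoring maths, which a real system would learn from data rather than hard-code, but the output format: a short, ordered list a salesperson can act on inside a tool they already open every morning.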

Governance that accelerates rather than blocks

In pharma and life sciences, the instinct is often to let compliance slow everything down, but one panellist described a simple risk tiering approach that did the opposite. Low risk covered internal analysis, prioritisation, workflow, and productivity using historical and validated data; medium risk was where AI drafts content or recommendations that still need human review; and high risk meant anything external-facing involving commercials, product releases, R&D intelligence, or personal data.

Being explicit about those boundaries upfront meant teams could move quickly on low-risk use cases without waiting for a compliance review that was never going to be relevant, and it also meant the organisation had a clear policy before it started deploying AI rather than scrambling to write one after the fact. As the panellist pointed out, some organisations she’d worked with had gone full speed with AI without even having an acceptable use policy in place, which is very much a case of putting the cart before the horse.
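The three tiers described above are simple enough to capture as a few lines of policy code. The categories follow the panellist's tiering, but the field names and routing logic here are my own assumptions, sketched only to show how explicit the boundaries can be:

```python
def risk_tier(use_case: dict) -> str:
    """Classify an AI use case into the three tiers described above.

    The keys on `use_case` are illustrative, not a real schema:
    anything external-facing, touching personal data, or in a sensitive
    domain is high risk; AI-drafted content needing human review is
    medium; internal analysis on validated data is low.
    """
    sensitive_domains = {"commercials", "product_release", "rd_intelligence"}
    if (
        use_case.get("external_facing")
        or use_case.get("personal_data")
        or use_case.get("domain") in sensitive_domains
    ):
        return "high"    # full compliance review before anything ships
    if use_case.get("generates_content"):
        return "medium"  # AI drafts, a human reviews before use
    return "low"         # internal analysis/workflow: proceed without gating

print(risk_tier({"generates_content": True}))         # medium
print(risk_tier({"domain": "rd_intelligence"}))       # high
print(risk_tier({"description": "internal triage"}))  # low
```

Whether the rules live in code, a spreadsheet, or a one-page policy matters less than the fact that they are written down before deployment starts, so low-risk work is never queued behind reviews it doesn't need.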

The real question leaders need to answer now

If I took one thing away from that panel, it’s that the organisations scaling AI successfully are not the ones with the best models or the biggest budgets; they’re the ones that did the unglamorous work first around strategy, sponsorship, data readiness, metrics, governance, and change management. Or as one panellist framed it: technology is just the enabler, and this is really an operating model problem and a culture problem.

So, the question for any pharma or medtech leader reading this is a simple one: can your current AI strategy tell you which pilots should die and which should scale? If it can’t, then unfortunately it isn’t really a strategy; it’s a collection of experiments.

Stuck in the POC loop?
Let’s talk

If this feels uncomfortably familiar, you’re not alone. Many pharma and medtech organisations are struggling to decide which AI pilots should scale and which should stop. At Version 1, we work with leaders to bring clarity around readiness, governance and what good looks like in practice.