“You have to come out of denial.” What 25 years in pharma taught one leader about making AI actually work
At Version 1’s Women in Tech Leadership event in Princeton, New Jersey, Rakhi Agarwal, Chief Operating and Procurement Officer at VCVX Holdings, drew on a quarter-century of experience across Bristol Myers Squibb, Merck, J&J, Sanofi, and the CDMO space to share what pharma organisations consistently get wrong when it comes to AI, and what the leaders getting it right have in common.
Most technology leaders aren’t asking whether AI is powerful enough anymore. They’re asking whether their organisation is ready to use it responsibly, consistently, and at scale. This conversation draws on lessons from organisations where getting that wrong isn’t an option.
Be honest about where your data actually is
Agarwal’s starting point is clear, and it applies well beyond procurement: before any AI conversation begins, organisations need to stop pretending their data is in better shape than it is.
“The first and most important thing is you have to absolutely be true to yourself. If you have a problem with data, you have to acknowledge and say, yes, I have a data problem. If you are in denial, AI will never work.”
This echoes what technology leaders across pharma and life sciences are finding. AI readiness is not about buying a tool. It requires understanding three things clearly: what your actual problems are, what you want AI to deliver, and what processes are already in the workflow. Everything else operates between those guardrails.
And critically, she warns against accepting AI outputs without challenge, a lesson for anyone deploying AI tools across an enterprise. “You cannot mindlessly just start believing in AI. The expert will give you a version, but unless you are challenging it, AI will be worthless.”
Why AI pilots fail: the same two things, every time
When asked about the main blockers preventing AI pilots from scaling, Agarwal points to two consistent culprits she’s seen across every large pharma organisation she’s worked in: a lack of standardised processes and a lack of governance.
“If your process is not generating a standardised workflow, it is not working, no matter how easy it’s making your life. It has to be repeatable, standardised. If you don’t have that, it will fail.”
For CIOs and CTOs, this is a familiar challenge. AI tools get deployed into environments where the underlying processes are inconsistent, undocumented, or different across teams and sites. The tool works in one context and breaks in another. Without standardisation, scaling becomes impossible.
Governance is equally non-negotiable. Without regular checkpoints, voice-of-customer feedback, and sanity checks on whether a solution is still fit for purpose, even a well-designed AI tool will drift. “What’s true today is not true six months down the road. You have to keep tweaking it and shifting it to make sure your solution stays true to your cause.”
In pharma and life sciences, where quality, compliance, and risk are constant, those governance conversations need to happen more often than most people would like. But they’re what keep solutions relevant and organisations protected.
AI as the connective tissue across functions
One of the most valuable threads in the conversation was Agarwal’s argument that AI’s greatest enterprise value isn’t within a single function. It’s in the connections between functions.
She illustrated the point with a story from her earlier career. While working on a major product, a CDMO in Asia had a serious fire that destroyed a workshop. She didn’t find out for a month. No news, no communication from the supplier, no flags in any system. Just missing shipments and silence.
“I did not know about it till a month. All I get is no shipments. Why no shipments? No one is talking about it.”
Today, AI-powered risk detection tools can flag supply chain disruptions almost immediately, giving organisations the chance to react in real time rather than spending weeks working out what went wrong. But the value only materialises when procurement, manufacturing, supply chain, compliance, and audit are connected and sharing information.
“AI today is helping us work together very efficiently, very quickly, so that we can react in the span of the moment and not take a month to just even figure out what the issue is.”
For technology leaders, the lesson is clear: AI delivers its greatest value when it’s breaking down silos, not reinforcing them. The organisations seeing real impact are the ones where AI is connecting functions, surfacing risks across the enterprise, and enabling cross-functional teams to act on shared intelligence.
The ‘Tesla Syndrome’: why investing in skills matters as much as investing in tools
Perhaps the most memorable moment in the conversation was Agarwal’s analogy about her teenage children and their Teslas.
“I bought two for both my teenage boys, thinking I’ll be very comfortable knowing that they are safe. No matter how bad they drive, they’ll be safe. But then I got to thinking. They don’t know how to drive!”
Her point: if the car breaks down, or swerves, or the self-driving system fails, her kids wouldn’t know how to react because they never learned the fundamentals. The same applies to any professional relying on AI tools without understanding the domain underneath.
“You have to have people who absolutely understand the fundamentals. When I see something amiss in an AI output or a prompt, I can very quickly point out where the issue is. But the new generation doesn’t know that.”
She gave the example of negotiation. An experienced professional knows that an uncomfortable silence in a room is strategically valuable. Someone who has only ever worked with AI-generated recommendations wouldn’t recognise that.
For CIOs and CTOs investing in AI across their organisations, this is a critical workforce consideration. If teams can’t challenge AI outputs, spot errors, or understand the context behind a recommendation, the tool becomes a liability. Upskilling and enablement aren’t optional extras. They’re what makes the technology investment worthwhile.
Building trust through transparency and ownership
On the question of how leaders can build trust in AI, both with their own teams and across partner and supplier relationships, Agarwal’s answer centres on two things: clear ownership and radical transparency.
“The most important thing in this whole discussion is the description and definition of roles and ownership. If you don’t define that, it is going to create you problems.”
She advocates for a clearly defined RACI model: who has authority, who makes decisions, who needs to be informed, and who takes action. Alongside that, escalation processes need to be mapped out so people know when to raise their hand rather than spiralling into loops trying to fix problems alone.
“It’s okay to raise your hand. It’s okay to escalate. And that escalation needs to be defined very clearly.”
But ownership alone isn’t enough. Agarwal is equally passionate about transparency. If only a handful of senior people understand what an AI tool is doing and why, while everyone else is simply told to use it, adoption will be hollow at best.
“If you create transparency on why that system is being created, what data it’s being fed, which is leading to this result, that transparency will create a lot of trust. And if trust is there, that’s when AI will be helpful. Because even when you are challenging it, you are challenging it mindfully. You are not accepting it mindlessly.”
This resonates strongly with what Version 1 sees across its customer base. The organisations succeeding with AI adoption aren’t the ones with the most advanced tools. They’re the ones that have brought their people along on the journey with transparency, accountability, and a clear understanding of why.
Process discipline doesn’t have to kill agility
One of the most practical exchanges came from an audience question about how to introduce process discipline into a fast-growing, highly flexible organisation without killing the agility that makes it work.
Agarwal’s advice: build your exceptions into the process itself.
“The easiest way to do that is create your processes with exception rules. But if you start making exceptions to everything, then everything is exceptional. So you have to balance that.”
Her approach: identify the one or two exceptions that will always come up, the ones that genuinely require flexibility, and design a defined process around them. Even the exception needs an approval step, a place to be recorded, and a clear workflow.
“Don’t let anyone use flexibility as an excuse to say we cannot be process oriented. You absolutely can be process oriented. Your flexibility can be very beautifully weaved into your processes and workflows.”
For technology leaders in high-growth or highly acquisitive organisations, where standardisation is often the first casualty of speed, this is practical, actionable advice.
AI is not always the answer
In a moment of candour that captured something Version 1 believes strongly, Becki Pedley made a point that resonated across the room: sometimes organisations don’t actually need AI. They need automation.
“Sometimes it’s not AI. Sometimes it’s automation. You need to automate a process. Understanding that and being able to differentiate what AI is from automation, from machine learning, can really help customers not spend money in spaces where they don’t need to. Automation is sometimes a lot easier to fix than AI, and it requires different data and different governance.”
For CIOs and CTOs fielding pressure from across the business to “do something with AI,” this is an important reality check. The right solution isn’t always the most advanced one. And the most trusted partners are the ones willing to say so.
About Rakhi Agarwal:
Rakhi Agarwal has spent 25 years working across procurement, sourcing, supply chain, and manufacturing at some of the biggest names in pharma: Bristol Myers Squibb (BMS), Merck, Johnson & Johnson (J&J), Sanofi, and the CDMO Curia (formerly AMRI). Early in her career, she built a risk platform at BMS that became the foundation she has carried into every role since, making her an expert in third-party risk management (TPRM). At J&J, she created an end-to-end contract management platform for the global public health division to manage government contracting, and she designed and implemented the Agile team concept in procurement, which was successfully deployed to handle COVID-era resource constraints. Until recently, Rakhi led Sanofi’s enterprise team for risk, supplier diversity, and US government contract compliance.
That breadth of experience, spanning operations, compliance, technology, and supplier management, gives her a cross-functional perspective that most technology leaders rarely get. She’s seen what works and what fails when organisations try to introduce AI into complex, regulated environments. And she’s consistently found that the biggest barriers have nothing to do with the technology itself.
Now, as Chief Operating and Procurement Officer at VCVX Holdings, she is building compliance platforms designed to simplify operations for organisations and vendors navigating an increasingly complex regulatory landscape, including CS3D, CSRD, ESG Scope 3 reporting, and the German Supply Chain Act.
This article is based on a fireside conversation at Version 1’s Women in Tech Leadership event, Princeton, New Jersey, March 2026.
At Version 1, we partner with pharma, medtech and life sciences leaders who are under pressure to turn digital ambition into outcomes that are responsible, repeatable and trusted.
Across data, AI, automation, cloud and enterprise platforms, we help organisations build the foundations that allow technology to scale without creating risk or complexity.