Five things a pharma COO taught me about making AI work at scale
Leadership lessons from the front lines of AI transformation
Recently, I hosted a fireside chat at Version 1’s Women in Tech Leadership event in Princeton, New Jersey, with Rakhi Agarwal, Chief Operating and Procurement Officer at VCVX Holdings. Rakhi has spent 25 years across Bristol Myers Squibb (BMS), Merck, Johnson & Johnson, and Sanofi. She was brilliantly candid about what organisations get wrong with AI, and what the best leaders do differently.
Here are the five takeaways I’m already using in conversations with technology decision-makers:
- Get honest about your data (really honest). The fastest way to stall an AI programme is pretending your data is in better shape than it is. Rakhi put it bluntly: “If you are in denial, AI will never work.” Before anyone invests in a shiny new tool, you need a clear view of the problem you’re trying to solve, what outcome you want, and what data and processes actually exist today. It’s not glamorous work, but it’s the work that makes everything else possible.
- Standardise the process and put governance in place, or don’t bother. Across Rakhi’s career, the two most consistent blockers have been (1) non-standard, non-repeatable processes and (2) weak governance. AI dropped into a messy, inconsistent environment can’t scale. Without clear checkpoints and feedback loops, even a good solution drifts away from the original intent. If you’re a CIO/CTO (or you sit with the teams who feel that pressure), this is your reminder that process and governance aren’t “phase two”. They are the runway.
- Invest in skills, not just tools. Rakhi shared an analogy that made everyone smile, then immediately think. Her teenage children can drive a Tesla safely… until the system fails. At that point, they need the fundamentals. It’s the same with AI at work: if teams can’t challenge outputs, spot errors, or understand the context behind a recommendation, the tool becomes a risk. Upskilling and enablement aren’t “nice to have”. They turn an AI purchase into an AI capability.
- Trust comes from transparency (and clear ownership). Adoption won’t stick if only a few people understand what the AI is doing and why. Rakhi’s advice was practical: define ownership properly (a clear RACI helps), map escalation paths, and be transparent about what data is going in and what results are coming out. As she said, “If trust is there, that’s when AI will be helpful, because even when you are challenging it, you are challenging it mindfully.”
- Sometimes the best answer isn’t AI. This one landed with a lot of leaders in the room. Not every problem needs AI, and in plenty of cases a simpler automation approach is faster, cheaper, and more reliable. Being clear on the difference between AI, automation, and machine learning can save teams from overspending (or building complexity they don’t need). If you’re feeling the internal pressure to “do something with AI”, this is a good reset question: what’s the simplest solution that delivers the outcome?
These insights come from my conversation with Rakhi Agarwal (VCVX Holdings) at Version 1’s Women in Tech Leadership event in Princeton, New Jersey (March 2026).
If you’re exploring AI in pharma, medtech, or life sciences and want to pressure-test readiness (data, governance, skills, and the ‘do we even need AI?’ question), I’m always happy to compare notes – you can connect with me here on LinkedIn.
If the challenges around AI adoption in pharma feel familiar, this longer-form piece goes deeper. Drawing on a candid fireside conversation with Rakhi Agarwal, it explores what 25 years across BMS, Merck, J&J, Sanofi, and the CDMO space reveal about why AI initiatives stall, and what leaders who make it work actually do differently.
From data honesty and governance to skills, ownership, and trust, the article offers practical leadership lessons for anyone trying to move AI beyond pilots and into real operating models.