At Version 1’s recent event for female tech leaders, our team, along with Microsoft, hosted a panel of AI experts from leading firms to debate whether AI will surpass the need for human oversight. As companies accelerate their adoption of AI, the discussion centred on governance, risk, talent, and the continued importance of human involvement.

The questions below were some of those asked during the open Q&A and reflect the challenges, opportunities, and critical thinking required to navigate responsible AI adoption in modern enterprises. Each answer is drawn directly from the panel’s transcript, offering authentic perspectives from leaders at the forefront of AI and governance.

[Image: Women in tech AI panel – four speakers seated in front of a green plant wall, with audience members visible in the foreground.]

Theme 1: Evolving AI governance structures

AI governance in enterprises has matured rapidly. Where once risk teams scrambled to keep up, organisations now have robust processes for onboarding third-party AI tools. Validators and critical thinkers are essential to ensure safe adoption, even if that slows innovation for the right reasons.

Q&A:

Q: How have governance structures around AI and third-party tools evolved in organisations?
A: Governance structures have evolved from lightweight to robust, particularly for onboarding third parties with AI capabilities. Organisations now depend on skilled validators, emphasising critical and divergent thinking to assess new tools thoroughly. While this can delay adoption, it ensures effective systems and risk prevention. Adaptable validation methods and resources are essential to meet these demands.

Theme 2: Managing third-party risks and embedded AI features

The proliferation of embedded AI features in enterprise software brings new risks. Organisations must balance innovation with rigorous validation and oversight, ensuring that AI tools align with ethical and regulatory standards.

Q&A:

Q: What are the biggest risks enterprises face with embedded AI features in third-party tools?
A: New AI features in tools can expose sensitive data without sufficient oversight. This poses major risks, as organisations must manage both the technology and individual use cases. For instance, letting everyone use LLMs could lead to risky applications like candidate selection from large datasets. Addressing these risks requires training and awareness across the organisation, as there’s no silver bullet solution. It’s an emerging threat that needs ongoing attention.

Theme 3: The human in the loop – critical thinking and oversight

Human oversight remains crucial in AI adoption. Critical thinking, ethical judgement, and domain expertise are needed to interpret AI outputs, challenge assumptions, and prevent unintended consequences.

Q&A:

Q: What strategies can help organisations encourage teams to see the value in combining human expertise with AI systems?
A: The key is to balance control with education about integrating human intelligence and AI. Rather than removing people from the process, organisations should structure collaboration between humans and AI for better results. Critical thinking remains essential, especially to prevent automation bias. Fostering scepticism and evaluation is important, particularly as younger employees may lack these skills.

Q: Why is it important for humans involved in AI processes to maintain a critical mindset, and what risks arise if they do not?
A: Critical thinking is essential. The human in the loop must take their role seriously and not simply accept AI outputs at face value. There’s a risk of fatigue or complacency, where people blindly accept answers. Organisations need to encourage alternative voices and naysayers to maintain a healthy level of scrutiny. Reminding ourselves of basic critical thinking skills is important, especially as AI outputs become more convincing and tailored. The human in the loop acts as a safeguard, ensuring outputs are questioned and validated.

Theme 4: Skills for responsible AI governance

The demand for responsible AI practitioners is growing. Organisations need talent with a blend of technical, ethical, and regulatory expertise to navigate the evolving landscape.

Q&A:

Q: What skills are most needed for teams working in AI governance, and why is it so challenging to find the right talent?
A: There’s been considerable research on this topic. Structurally, the profession is shifting towards defining the role of a responsible AI practitioner, rather than just AI governance. It draws heavily on the data protection sector and on risk management from security, but relying on these alone is recognised as a potential blind spot, as there are more risks to AI than just data and security. The challenge lies in finding people with the right mix of skills – critical thinking, validation expertise, and an understanding of responsible AI practices.

Key takeaways

Integrating AI into enterprise environments presents both complex challenges and significant opportunities. The panel discussion emphasised that successful adoption hinges on robust governance, vigilant critical thinking, and the purposeful blending of human expertise with technological innovation.

By embedding a culture of awareness, ongoing training, and healthy scepticism, organisations not only mitigate risks such as automation bias and poor compliance but also empower teams to leverage their domain knowledge in tandem with AI capabilities. Learning from governance missteps, as highlighted in resources like The Version 1 Guide to Terrible AI Governance, equips enterprises to identify weaknesses and implement more resilient, responsible frameworks.

Ultimately, the human in the loop remains indispensable – serving as the safeguard that ensures AI-driven decisions are questioned, validated, and aligned with organisational values. This fosters a future where technology and human judgement work hand in hand for sustained success.

Your data foundations determine AI success.

We make organisations AI-ready in 12 weeks or less, fixing quality, accessibility, governance, and security issues that block AI deployment. Whether starting fresh or scaling enterprise-wide, we ensure your infrastructure can support AI production, not just prototypes, and make your Data and AI strategy a reality.

Learn more

FAQ: Responsible AI Governance

Q1: What is responsible AI governance?

Responsible AI governance ensures that artificial intelligence is developed and deployed ethically, transparently, and in compliance with regulations. It addresses risks related to bias, privacy, accountability, and societal impact.  

Q2: Why does human oversight matter in AI adoption?

Human oversight is essential to interpret AI outputs, challenge assumptions, and ensure decisions align with organisational values and ethical standards. 

Q3: What are the biggest risks with embedded AI features?

Risks include bias, lack of transparency, and regulatory non-compliance. Organisations must validate tools and maintain clear oversight. 

Q4: How can organisations stay compliant with new AI regulations?

By implementing robust governance frameworks, investing in training, and staying informed about evolving standards like the EU AI Act.

Q5: What skills are needed for responsible AI teams?

Teams need expertise in risk management, regulatory compliance, ethics, and technical AI skills. Cross-functional collaboration is key.