Solace for quantum: How classical computing struggles to keep up with AI
Artificial Intelligence has made leaps that were unimaginable even a decade ago. We’re now witnessing models that can write essays, generate artwork, and drive cars. But behind the scenes, there’s a looming problem – AI’s appetite for compute power is spiralling out of control.
Training models like GPT-4 comes with enormous costs – both in terms of money and energy. OpenAI’s GPT-4 is estimated to have cost around $100 million to train, and that figure doesn’t factor in the environmental impact. These models require immense data centres running at full tilt, consuming staggering amounts of electricity.
As AI models become larger and more sophisticated, we are rapidly approaching the limits of what classical computers can handle. Traditional computing is hitting a plateau: transistors are approaching atomic scales, and the easy generational gains of Moore’s Law are fading. The current infrastructure isn’t scalable to meet the growing demands of AI – it simply wasn’t built to handle this level of complexity.
So, what comes next? Enter quantum computing – a technology that has the potential to break AI out of its compute bottleneck.
Magic bits: The power of qubits and superposition
At its core, quantum computing represents a complete departure from the principles that underlie classical computing. Classical computers rely on bits, which exist in a binary state – either 0 or 1. However large or parallel the machine, a register of N bits can only ever represent one of its 2^N possible states at any given moment. Even the most advanced classical supercomputers are bound by this fundamental rule, which limits their ability to handle the increasingly complex, multi-dimensional problems that AI throws at them.
Quantum computers, on the other hand, process information in a radically different way. Instead of bits, they use qubits, which can exist in multiple states at once, thanks to a principle called superposition. This allows quantum computers to evaluate many possibilities simultaneously, rather than step-by-step.
It’s like comparing a flashlight to a floodlight. While classical computers illuminate one small area at a time, quantum computers can light up the whole room at once. For AI, this could translate into the ability to process vast datasets, train models faster, and solve optimisation problems that are simply out of reach for classical systems.
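The superposition idea above can be made concrete with a toy state-vector simulation. The sketch below – plain Python, no quantum hardware or libraries – tracks a single qubit’s two amplitudes and applies a Hadamard gate, the standard gate for putting a qubit into an equal superposition of 0 and 1:

```python
import math

# A qubit's state is a pair of amplitudes (a, b) over the basis states
# |0> and |1>; measuring it yields 0 with probability |a|^2 and 1 with
# probability |b|^2.
def hadamard(state):
    """Apply the Hadamard gate, which maps |0> to an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Measurement probabilities for outcomes 0 and 1."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

qubit = (1.0, 0.0)        # start in the definite state |0>
qubit = hadamard(qubit)   # now an equal superposition of |0> and |1>
p0, p1 = probabilities(qubit)   # each outcome is 50/50
```

A classical bit would have to be 0 or 1 at this point; the qubit genuinely carries both amplitudes until it is measured, and with n qubits the state vector spans all 2^n combinations at once – which is where the simulation cost explodes for classical machines.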
In 2019, Google’s Sycamore quantum processor provided a glimpse into what’s possible when quantum computing is applied to specific tasks. The processor completed a calculation in 200 seconds that Google estimated would take the world’s most powerful classical supercomputer, IBM’s Summit, around 10,000 years to finish – an estimate IBM disputed, arguing that a smarter classical approach could do the job in days rather than millennia. Even so, and even though the demonstration was limited to a specialised task, it’s a stark indicator of the potential quantum holds for tackling computational challenges that today’s systems struggle to approach.
The power in real-world applications
The implications for AI are vast. Quantum computing could drastically reduce the time it takes to train machine learning models and unlock solutions to complex problems that classical systems struggle with today.
Take drug discovery, for example. AI models currently simulate how different molecules interact, but the sheer number of combinations makes this a slow and expensive process. Quantum computers could simulate multiple molecular interactions simultaneously, potentially cutting research time from years to months.
For AI itself, quantum computing could revolutionise the training of neural networks, which today is a time-consuming and resource-intensive process. Classical systems, even with massively parallel hardware like GPUs and TPUs, can take weeks or even months to train sophisticated AI models due to the sheer volume of data and computations involved. However parallel the hardware, every operation still manipulates definite bits, which caps how efficiently these systems can explore the enormous parameter spaces of complex models like deep neural networks.
Quantum computers, by contrast, leverage qubits and principles like superposition and entanglement to represent many possibilities at once. In principle, quantum systems could tackle parts of the training workload far more efficiently, drastically accelerating the learning process. What would normally take weeks could – if the theoretical speedups hold up on real hardware – be reduced to days or even hours, opening the door to near-real-time training of AI models.
The hardware hurdle
Despite the incredible promise, quantum computing isn’t ready to solve AI’s problems just yet. The biggest challenge lies in the hardware. Qubits are notoriously unstable and prone to errors. They are highly sensitive to their environment and require extreme conditions to operate correctly – we’re talking temperatures close to absolute zero. Even slight fluctuations in temperature or magnetic fields can cause a quantum computer to lose information, a phenomenon known as quantum decoherence.
The fragility of quantum systems means that building a quantum computer capable of solving real-world AI problems at scale is still several years away. While companies like Google, IBM, and Rigetti are making significant strides in developing more stable and scalable quantum processors, we’re not yet at the point where quantum systems can handle the types of large-scale AI workloads we rely on today.
Google’s Sycamore processor, while impressive, only had 53 qubits. To unlock the true potential of quantum computing, we’ll need systems that can manage thousands or even millions of qubits reliably. Achieving that level of scalability is one of the most significant hurdles standing between us and the quantum future.
Don’t forget the software
Even if we solve the hardware challenge, AI software isn’t ready to fully leverage quantum computing. Current AI algorithms are designed to run on classical architectures, and they don’t simply transfer over to quantum systems. Quantum computing introduces entirely new ways of processing data, and as a result, we require new algorithms to take advantage of these capabilities.
One example is Grover’s Algorithm, which provides a quadratic speedup for search problems. This could have huge implications for AI, especially in tasks that involve processing massive datasets, but implementing it in a way that integrates with existing AI workflows is still a long way off. Similarly, Quantum Support Vector Machines could allow for faster, more accurate model training, particularly for complex tasks like financial forecasting or diagnosing diseases. But these quantum-native algorithms are still in the early stages of development, and much work remains before they can be used at scale.
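Grover’s quadratic speedup is easy to see in simulation. The sketch below – a classical state-vector simulation in plain Python, not real quantum code – runs Grover’s two-step loop (oracle, then "inversion about the mean" diffusion) over 256 items, with an arbitrary illustrative target at index 42. After roughly (π/4)·√N ≈ 13 iterations, almost all the probability has been concentrated on the marked item, versus the ~N/2 checks an unstructured classical search would need on average:

```python
import math

def grover_search(n_qubits, marked):
    """Classically simulate Grover's algorithm over 2**n_qubits items.

    Each round applies the oracle (flip the marked amplitude's sign) and
    the diffusion operator (reflect every amplitude about the mean),
    steadily amplifying the marked item's amplitude.
    """
    n = 2 ** n_qubits
    # Start in a uniform superposition over all n basis states.
    state = [1 / math.sqrt(n)] * n
    # ~ (pi/4) * sqrt(N) iterations maximises the marked amplitude.
    for _ in range(round(math.pi / 4 * math.sqrt(n))):
        state[marked] = -state[marked]             # oracle
        mean = sum(state) / n
        state = [2 * mean - amp for amp in state]  # diffusion
    return [amp ** 2 for amp in state]             # measurement probabilities

probs = grover_search(8, marked=42)   # 256 items, ~13 Grover iterations
```

The catch, as the paragraph notes, is that this simulation costs classical time proportional to N – the speedup only materialises on quantum hardware, where the 2^n-dimensional state is carried by just n physical qubits.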
The realistic future of quantum computing in AI likely lies in hybrid systems, where quantum and classical computers work together. Quantum systems could serve as accelerators, handling the most computationally intensive parts of the process, while classical systems manage the more routine tasks.
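This hybrid division of labour already has a standard shape in variational algorithms such as VQE and QAOA: a quantum circuit evaluates a cost function, and a classical optimiser updates the circuit’s parameters. The sketch below is a minimal, assumption-laden illustration – the "quantum" step is a one-qubit circuit simulated by its exact closed form, cos(θ), standing in for repeated hardware measurements – while the classical side runs plain gradient descent using the parameter-shift rule, a real technique for getting gradients from just two extra circuit evaluations:

```python
import math

def expectation_z(theta):
    """'Quantum' step: expected Z measurement after an Ry(theta) rotation
    on |0>. On hardware this would be estimated from many circuit runs;
    here we use the exact value, <Z> = cos(theta)."""
    return math.cos(theta)

def parameter_shift_grad(theta):
    """Classical-friendly gradient via the parameter-shift rule:
    d<Z>/dtheta from two shifted circuit evaluations."""
    return (expectation_z(theta + math.pi / 2)
            - expectation_z(theta - math.pi / 2)) / 2

# Classical step: ordinary gradient descent on the circuit parameter.
theta, lr = 0.1, 0.4
for _ in range(100):
    theta -= lr * parameter_shift_grad(theta)

energy = expectation_z(theta)   # converges towards the minimum, -1
```

The structure is the point: the loop is classical and routine, while only the cost evaluations would be offloaded to a quantum accelerator – exactly the division the paragraph describes.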
The future: It’s both here and not here
Quantum computers won’t replace classical systems overnight, nor will they instantly solve AI’s compute crisis. But the potential for quantum to enhance and extend AI’s capabilities is real, and we’re already seeing the early signs of that transformation.
In the coming years, we’re likely to see quantum systems used as specialised tools for specific, high-complexity tasks – think drug discovery, financial optimisation, or climate modelling. For most AI workloads, classical systems will still dominate. But as quantum technology matures, the balance will shift.
AI’s future depends on our ability to overcome the limitations of classical computing, and quantum offers a way forward. It may take time to get there, but once we do, the possibilities will be incredible.
About the author
Nathan Marlor leads the development and implementation of data and AI strategies at Version 1, driving innovation and business value. With experience at a leading Global Integrator and Thales, he leveraged ML and AI in several digital products, including solutions for capital markets, logistics optimisation, predictive maintenance, and quantum computing. Nathan has a passion for simplifying concepts, focussing on addressing real-world challenges to support businesses in harnessing data and AI for growth and for good.