
DeepMind is at odds over Quantum AI research

DeepMind, a London-based Alphabet research company, published an intriguing research paper last year in which it claimed to have solved the enormous challenge of simulating matter on the quantum scale with Artificial Intelligence. Now, nearly eight months later, a group of academic researchers from Russia and South Korea believes they have found a flaw in the original study that calls the paper’s entire conclusion into question.

If the paper’s conclusions are correct, the significance of this cutting-edge research could be enormous. In essence, we’re discussing the possibility of using Artificial Intelligence to discover new ways to manipulate matter’s building blocks.

The key concept here is the ability to simulate quantum interactions. Our world is composed of matter, which is composed of molecules, which are composed of atoms. It becomes increasingly difficult to simulate at each level of abstraction.

When you get to the quantum level, which exists inside atoms, simulating potential interactions becomes extremely difficult.

According to a DeepMind blog post: To accomplish this on a computer, we must simulate electrons, the subatomic particles that govern how atoms bond to form molecules and are also responsible for the flow of electricity in solids.

Despite decades of effort and many remarkable advances, precisely modeling the quantum mechanical behavior of electrons remains a challenge.

The fundamental issue is that predicting the probability of an electron ending up in a specific position is extremely difficult. And the more electrons you add, the more complicated it becomes.

As DeepMind noted in the same blog post, a pair of physicists made a breakthrough in the 1960s: Pierre Hohenberg and Walter Kohn realized that it is not necessary to track each electron individually. Instead, knowing the probability that an electron is at any given position (i.e., the electron density) is enough to compute all interactions exactly. Kohn later received the Nobel Prize in Chemistry for establishing Density Functional Theory (DFT).
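For readers who want the slightly more formal picture, the Kohn-Sham formulation of the theory writes the total energy as a functional of the electron density n(r). The expression below is a standard textbook sketch, not an equation taken from DeepMind’s paper:

```latex
E[n] \;=\; T_s[n] \;+\; \int v_{\mathrm{ext}}(\mathbf{r})\, n(\mathbf{r})\, \mathrm{d}\mathbf{r} \;+\; E_{\mathrm{H}}[n] \;+\; E_{\mathrm{xc}}[n]
```

Every term except the last can be computed exactly; the exchange-correlation functional E_xc is the part that has to be approximated, and it is the part DeepMind set out to learn.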

Unfortunately, DFT could only go so far in simplifying the process. The theory’s “functional” component still depended entirely on humans to do all of the heavy lifting.

That changed in December, when DeepMind published a paper titled “Pushing the frontiers of density functionals by solving the fractional electron problem.”

The DeepMind team claims in this paper that it has significantly improved current methods for modeling quantum behavior by developing a neural network: By expressing the functional as a neural network and incorporating these exact properties into the training data, we learn functionals free of significant systematic errors, which results in a better description of a wide range of chemical reactions.
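To make the idea concrete, here is a minimal, hypothetical sketch of what “the functional as a neural network” can look like in code: a small network maps local density features at each grid point to an exchange-correlation energy density, which is then integrated over the grid. The architecture, features, and names below are invented for illustration; this is not DeepMind’s DM21 model.

```python
# Toy sketch of a neural exchange-correlation functional (illustration only;
# NOT DeepMind's DM21 architecture or training setup).
import torch
import torch.nn as nn

class ToyNeuralFunctional(nn.Module):
    def __init__(self, n_features: int = 2, hidden: int = 64):
        super().__init__()
        # Small MLP: local density features -> local xc energy density.
        self.mlp = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, features: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
        # features: (n_grid, n_features) local features, e.g. density and its gradient
        # weights:  (n_grid,) quadrature weights for the real-space grid
        e_xc = self.mlp(features).squeeze(-1)   # local xc energy density per grid point
        return torch.sum(weights * e_xc)        # integrate to get the total E_xc

# Usage on a fake "density" sampled on 2048 grid points:
features = torch.rand(2048, 2)
weights = torch.full((2048,), 1e-3)
model = ToyNeuralFunctional()
print(model(features, weights))  # predicted E_xc for this toy input
```

Training such a model then amounts to minimizing the error against reference energies, with exact physical constraints such as fractional-charge behavior supplied as additional training examples, which is the idea the quoted passage describes.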

The academics push back

DeepMind’s paper passed the initial, formal review process with flying colors. That changed in August 2022, when a group of eight academics from Russia and South Korea published a comment calling the study’s conclusions into question.

According to a press release issued by the Skolkovo Institute of Science and Technology: The ability of DeepMind AI to generalize the behavior of such systems does not follow from the published results and must be reconsidered.

In other words, academics disagree about how DeepMind’s AI arrived at its conclusions.

According to the commenting researchers, the training process DeepMind used to build its neural network effectively taught it the answers to the very problems it would face during benchmarking, the process by which scientists determine whether one approach is superior to another.

The researchers state in their comment: Although the inference of Kirkpatrick and others regarding the FC/FS system’s role in the training set might be right, it is not the sole potential explanation for their observations.

In our opinion, the enhancements in DM21’s performance on the BBB test dataset relative to DM21m may be due to a much more mundane reason: an unintentional overlap between the training and test datasets.
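To illustrate what the critics mean, here is a minimal, hypothetical sketch of the kind of check their objection calls for: comparing identifiers of the systems used for training against those in the benchmark. The file names, format, and identifiers are invented for this example and do not describe the actual datasets.

```python
# Hypothetical illustration of a train/test overlap check (file names and
# identifiers are invented; they do not describe DeepMind's actual training
# data or the BBB benchmark).

def load_system_ids(path: str) -> set[str]:
    """Read one system identifier (e.g. a canonical molecule label) per line."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

train_ids = load_system_ids("train_systems.txt")
test_ids = load_system_ids("bbb_benchmark_systems.txt")

overlap = train_ids & test_ids
print(f"{len(overlap)} of {len(test_ids)} benchmark systems also appear in training")
# Any non-trivial overlap would mean the benchmark partly measures memorization
# rather than generalization, which is exactly the critics' concern.
```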

If this is correct, DeepMind did not actually train a neural network to predict quantum mechanics; it merely showed the network problems whose answers it had already seen.

DeepMind fires back

DeepMind responded quickly, issuing a firm rebuke the same day the comment was published: We differ with their analysis and consider that the points raised are either incorrect or irrelevant to the paper’s main conclusions and the overall quality of DM21.

The team elaborates throughout its response: The fact that the DM21 Exc changes over the entire range of distances considered in BBB and is not equal to the infinite separation limit, as demonstrated in Fig. 1, A and B, for H2+ and H2, indicates that DM21 is not memorizing the data. At 6 Å, for instance, the DM21 Exc is ~13 kcal/mol from the infinite limit in both H2+ and H2 (although in opposite directions).

While it is beyond this article’s scope to explain the above jargon, we can safely accept that DeepMind was prepared for that specific objection.

It remains to be seen whether this settles the matter. So far, the academic team has not said whether its concerns have been addressed.

Meanwhile, the implications of this debate may extend far beyond a single research paper.

As the fields of Artificial Intelligence and quantum science become more entwined, they are increasingly dominated by corporate research organizations with deep pockets.

What happens when there is a scientific deadlock, with opposing sides unable to use the scientific method to agree on the efficacy of a given technological approach, and corporate interests enter the picture?

What happens next?

The heart of the problem may be the difficulty of explaining how AI models “crunch the numbers” to reach their conclusions. These systems can explore millions of permutations before producing an answer. Explaining every step of that process would be impractical, which is why we rely on algorithmic shortcuts and AI to brute-force problems too large for a human, or for a conventional computer, to tackle head-on.

As AI systems continue to scale, we may eventually find ourselves without the tools needed to understand how they work. When that happens, a gap may open between corporate technology and the technology that can pass external peer review.

That is not to say that DeepMind’s paper is such a case. In their press release, the commenting academic team stated: The use of fractional-electron systems in the training set is not the only novel aspect of DeepMind’s work. Their concept of introducing physical constraints into a neural network through the training set, and their idea of imposing physical sense via training on the right chemical potential, will presumably be widely used in the construction of neural-network DFT functionals in the future.

However, we are witnessing a bold, new, AI-powered technology paradigm. It’s probably time to start thinking about what life will be like after peer review.
