
Getting the most out of Quantum Machine Learning through Simple Data

According to a new theoretical study, quantum computers can learn from much simpler data than previously thought. The discovery offers hope for improving quantum sensors and points to a way of maximizing the utility of today’s noisy, intermediate-scale quantum computers, which can simulate quantum systems and perform certain other tasks better than classical digital computers.

According to Lukasz Cincio, a quantum theorist at Los Alamos National Laboratory, the team demonstrated that a small amount of surprisingly straightforward data is adequate to train a quantum neural network. He contributed to the proof, which was published in the journal Nature Communications. The research advances efforts to simplify, open up, and accelerate the development of quantum machine learning.

The new work was produced through a collaboration between a Los Alamos team; Matthias Caro of Freie Universität Berlin, the paper’s lead author; and other scientists from the United States, the United Kingdom, and Switzerland. While the industry continues to improve the quality and increase the size of quantum computers, the group has been creating the theoretical underpinnings for more effective algorithms, notably for quantum machine learning, to take advantage of the potential of these noisy machines.

The current study expands on earlier research by Los Alamos National Laboratory and its collaborators showing that training a quantum neural network requires only a modest quantity of data. Together, these theoretical advances demonstrate that training with a small number of simple states is a concrete route to performing practical tasks on current quantum computers faster than on conventional, classical-physics-based computers.

Prior research in quantum machine learning focused on the quantity of training data; in this study, Caro added, the team was more interested in the kind of training data. They demonstrate that even when restricted to a simple form of data, a small number of training data points suffices.

According to Cincio, this essentially means that, in a classical analogy, you could train a neural network on images far simpler than photos of cats, and with only a few of them. For quantum simulations, it implies you can train on quantumly simple states.
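The idea of training on quantumly simple states can be illustrated with a toy sketch. The snippet below is not the paper’s actual protocol; it is a minimal, hypothetical example in which a one-gate “quantum neural network” (a single parameterized rotation) learns a target unitary from just two computational-basis product states, the simplest quantum states there are:

```python
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# Hypothetical target unitary the trainable gate should learn to reproduce.
target = ry(0.7)

# Training set: two simple computational-basis (product) states, |0> and |1>.
train_states = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

def loss(theta):
    """Average infidelity between U(theta)|psi> and target|psi>."""
    total = 0.0
    for psi in train_states:
        overlap = np.vdot(ry(theta) @ psi, target @ psi)
        total += 1.0 - abs(overlap) ** 2
    return total / len(train_states)

# Plain gradient descent with finite-difference gradients.
theta, lr, eps = 0.0, 0.5, 1e-6
for _ in range(200):
    grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
    theta -= lr * grad

print(round(theta, 3))  # prints 0.7
```

Despite seeing only two trivially preparable states, the optimizer recovers the target angle exactly, which is the spirit of the result: simple training data in small amounts can pin down the behavior of a quantum model.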

According to co-author Zoe Holmes, a professor of physics at the École Polytechnique Fédérale de Lausanne and former postdoc at Los Alamos, such states are simple to create, which makes the entire learning procedure much easier to run on near-term quantum computers.

A near-term application for quantum computers

The processing power of existing quantum computer technology is constrained by errors brought on by noise in the form of interactions between quantum bits, or qubits, and the environment. Despite the noise, quantum computers are excellent at specific tasks, such as simulating a quantum system in materials science and classifying quantum states using machine learning.

According to Cincio, there is a certain amount of noise you can tolerate and still classify quantum data correctly. Because of this, quantum machine learning could have some promising near-term uses.

According to Andrew T. Sornborger, a co-author of the paper, quantum machine learning can tolerate more noise than other types of algorithms because tasks like classification, a machine-learning mainstay, don’t need to be exact to be useful. Sornborger heads the Quantum Science Center’s thrust area for quantum algorithms and simulation. The center, run by Oak Ridge National Laboratory, is a partnership between national laboratories such as Los Alamos, universities, and industry.

In a quantum-chemistry simulation of an evolving molecular system, for example, the new work demonstrates how simpler training data allows a less-complex quantum circuit to produce a given quantum state on the computer. Simple circuits are easy to implement and less noisy, and therefore capable of carrying out their calculations reliably. The new Nature Communications paper illustrates a technique for assembling quantum machine-learning algorithms from such simple states.

Off-loading to classical computers

Even very large classical computers cannot process complex quantum algorithms. However, the scientists also discovered that, because their new method makes algorithms simpler to develop, compiling quantum algorithms can be off-loaded to a classical computer. A quantum computer can then execute the compiled algorithm. This approach enables programmers to avoid the error-inducing noise of long circuits on quantum computers while reserving quantum-computing resources for tasks that only they can perform but that bog down classical computers, such as simulating quantum systems.
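The payoff of classical compilation can be shown with a deliberately simple, hypothetical example (it is not the compilation method from the paper): a long sequence of small rotations, which would accumulate noise on real hardware, collapses classically into a single equivalent gate that a quantum computer could then execute in one short step.

```python
import numpy as np

def rz(theta):
    """Single-qubit Z-rotation gate."""
    return np.array([[np.exp(-1j * theta / 2), 0.0],
                     [0.0, np.exp(1j * theta / 2)]])

# A "long" circuit: 50 small RZ rotations applied in sequence.
angles = np.random.default_rng(0).uniform(-0.1, 0.1, size=50)
long_circuit = np.eye(2, dtype=complex)
for a in angles:
    long_circuit = rz(a) @ long_circuit

# Classical compilation: RZ rotations commute and their angles add,
# so the whole sequence reduces to one gate computed offline.
compiled = rz(angles.sum())

print(np.allclose(long_circuit, compiled))  # prints True
```

The heavy lifting (multiplying 50 matrices) happens entirely on the classical side; the quantum device would only need to run the single compiled gate, keeping the circuit short and the noise low.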

Applications of the Lab’s research can be seen in the emerging discipline of quantum sensing. By utilizing specific quantum mechanical concepts, it is possible to build incredibly sensitive instruments for monitoring magnetic or gravitational fields, for example.

In the theoretical absence of noise, quantum sensing techniques are simple and well understood, but when noise is taken into account, the situation becomes far more complex, according to Sornborger. By incorporating quantum machine learning into a quantum-sensing protocol, the technique can be applied even when the encoding process is unclear or when hardware noise impairs the quantum probe. That use of quantum machine learning is being studied in a DOE-funded project led by Los Alamos researchers Lukasz Cincio and Marco Cerezo.
