
Improving quantum computation with classical machine learning

Quantum computers aren’t constrained to two states; they encode data as quantum bits, or qubits, which can exist in superposition. Qubits are realized in physical systems such as photons or electrons, together with their respective control devices, all working in concert to act as computer memory and a processor.

Qubits can interact with anything nearby that carries energy close to their own (for example, photons, phonons, or quantum defects), and these interactions can change the state of the qubits themselves.

Manipulating and observing qubits is performed through classical controls: analog signals in the form of electromagnetic fields coupled to a physical substrate in which the qubit is embedded, e.g., superconducting circuits. Imperfections in these control electronics, interference from external sources of radiation, and variances in digital-to-analog converters introduce even more stochastic errors that degrade the performance of quantum circuits. These practical issues impact the fidelity of the computation and thus limit the applications of near-term quantum devices.

Emerging reinforcement learning techniques using deep neural networks have shown great promise in control optimization. They harness non-local regularities of noisy control trajectories and facilitate transfer learning between tasks.

To improve the computational capacity of quantum computers, and to pave the road towards large-scale quantum computation, Google scientists have created a new quantum control framework called UFO. The framework is built on deep reinforcement learning, in which a single control cost function encapsulates various practical concerns in quantum control optimization.


The framework provides fast and high-fidelity quantum gate-control optimization, reducing the average quantum logic gate error by up to two orders of magnitude compared with standard stochastic gradient descent solutions, along with a significant decrease in gate time over optimal gate-synthesis counterparts.

The novelty of this quantum control paradigm hinges on two developments: a quantum control cost function and an efficient optimization technique based on deep reinforcement learning.

To devise an overall cost function, the scientists first needed to develop a physical model of the realistic quantum control process, one from which they can reliably predict the amount of error.
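To make the idea of a single control cost function concrete, here is a minimal sketch in which an illustrative cost combines gate infidelity, a leakage proxy, and a pulse-power penalty into one scalar. All weights, the rotation-angle model, and the leakage proxy are assumptions for illustration, not the paper's actual model.

```python
import numpy as np

def control_cost(pulse, dt=1.0, w_infidelity=1.0, w_leakage=10.0, w_power=0.01):
    """Toy composite cost for a single-qubit control pulse.

    `pulse` is a 1-D array of control amplitudes over time. The three terms
    (target-rotation error, leakage proxy, power penalty) stand in for the
    fidelity, leakage, and runtime concerns described in the text.
    """
    target_angle = np.pi            # assume the goal is an X rotation by pi
    achieved = np.sum(pulse) * dt   # total rotation angle accumulated by the pulse
    infidelity = np.sin((achieved - target_angle) / 2.0) ** 2
    # Rapid pulse variation couples to higher levels; penalize it as a leakage proxy.
    leakage = np.sum(np.diff(pulse) ** 2)
    power = np.sum(pulse ** 2) * dt
    return w_infidelity * infidelity + w_leakage * leakage + w_power * power

smooth = np.full(10, np.pi / 10)          # smooth pulse hitting the target angle
jagged = np.array([np.pi, 0.0] * 5) / 5   # same total area, but rapid switching
print(control_cost(smooth) < control_cost(jagged))  # → True: smooth pulse is cheaper
```

An optimizer (here, the RL agent) only ever sees this single scalar, which is how competing practical concerns get traded off through the weights.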

One of the most detrimental errors to the accuracy of quantum computation is leakage: the amount of quantum information lost during the calculation. Such information leakage usually happens when the quantum state of a qubit gets excited to a higher energy state or decays to a lower energy state through spontaneous emission. Leakage errors do not just lose valuable quantum data; they also corrupt the “quantumness” and ultimately diminish the performance of a quantum computer to that of a classical one.
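Leakage can be illustrated with a three-level toy model: a drive meant to flip a qubit between |0⟩ and |1⟩ also weakly couples |1⟩ to a third level |2⟩ outside the computational subspace, so some population escapes there. The Hamiltonian and coupling values below are illustrative, not taken from the paper.

```python
import numpy as np

omega = 1.0          # drive strength on the intended 0-1 transition
coupling = 0.15      # spurious 1-2 coupling that causes leakage
H = np.array([[0.0,   omega,    0.0],
              [omega, 0.0,      coupling],
              [0.0,   coupling, 0.0]])

# Evolve |0> under U = exp(-i H t), computed via eigendecomposition
# (H is Hermitian, so np.linalg.eigh applies).
t = np.pi / (2 * omega)                  # duration of an intended pi-pulse
vals, vecs = np.linalg.eigh(H)
U = vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T
psi = U @ np.array([1.0, 0.0, 0.0])

populations = np.abs(psi) ** 2
print("P(|2>) leakage:", populations[2])  # nonzero: amplitude escaped the qubit subspace
```

Even this small spurious coupling leaves a few percent of the population stranded in |2⟩ after a single pulse, which is why leakage compounds so quickly over a long circuit.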

Simulating the whole computation is a common practice used to precisely evaluate the leaked information. However, doing so defeats the purpose of building large-scale quantum computers, since their advantage is precisely that they can perform calculations infeasible for classical systems.

This work opens up a new direction for quantum analog-control optimization using RL, where random control errors and incomplete physical models of environmental interactions are taken into account during the control optimization. On-policy RL is well known for its ability to leverage non-local features in control trajectories, which becomes crucial when the control landscape is high-dimensional and packed with a combinatorially large number of non-global solutions, as is often the case for quantum systems.
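The on-policy idea above can be sketched with a minimal REINFORCE loop: the agent samples actions from its current policy, observes a scalar control-quality reward, and ascends the policy gradient. Everything here (the single-parameter Gaussian policy, the "optimal amplitude" of 0.7, the quadratic reward) is an assumed toy problem standing in for a real multi-step, noisy quantum control task.

```python
import numpy as np

rng = np.random.default_rng(1)
optimal = 0.7                       # hypothetical best control amplitude
mu, sigma, lr = 0.0, 0.2, 0.05      # Gaussian policy: mean, std, learning rate

for step in range(500):
    a = rng.normal(mu, sigma, size=32)          # a batch of on-policy rollouts
    reward = -(a - optimal) ** 2                # noisy control-quality signal
    grad = np.mean(reward * (a - mu) / sigma ** 2)  # REINFORCE policy gradient
    mu += lr * grad                             # on-policy gradient ascent

print(round(mu, 2))  # converges near the optimal amplitude
```

The key on-policy property is that the gradient is always estimated from samples drawn under the current policy, so the agent continually re-explores the landscape around its present solution rather than relying on stale trajectories.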

The scientists then encoded the control trajectory into a three-layer, fully connected neural network (the policy NN) and the control cost function into a second NN (the value NN), which encodes the discounted future reward.
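The two-network arrangement can be sketched as follows: a three-layer fully connected policy network maps a state summarizing the control trajectory to control-amplitude outputs, while a value network of the same shape maps it to a scalar estimate of discounted future reward. The layer widths and input/output dimensions are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random weights and biases for a fully connected net with given layer sizes."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)          # hidden-layer nonlinearity
    return x

# Assumed dimensions: the state summarizes the control trajectory so far;
# the policy outputs control-amplitude adjustments, the value net a scalar.
state_dim, n_controls = 8, 2
policy = mlp([state_dim, 32, 32, n_controls])   # three weight layers
value  = mlp([state_dim, 32, 32, 1])

state = rng.standard_normal(state_dim)
print("action:", forward(policy, state))
print("value :", forward(value, state).item())
```

During training the value network's reward estimate serves as a baseline for the policy network's gradient updates, which is the standard actor-critic division of labor.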

Robust control solutions were obtained by a reinforcement learning agent that trains both neural networks under a stochastic environment mimicking realistic noisy control actuation. The team provides control solutions for a set of continuously parameterized two-qubit quantum gates that are important for quantum chemistry applications but costly to implement using the standard universal gate set.

Scientists noted, “Under this new framework, our numerical simulations show a 100x reduction in quantum gate errors and reduced gate times for a family of continuously parameterized simulation gates by an average of one order-of-magnitude over traditional approaches using a universal gate set.”

“This work highlights the importance of using novel machine learning techniques and near-term quantum algorithms that leverage the flexibility and additional computational capacity of a universal quantum control scheme. More experiments are needed to integrate machine learning techniques, such as the one developed in this work, into practical quantum computation procedures to fully improve its computational capacity through machine learning.”

The work is published in Nature Partner Journal (npj) Quantum Information.
