
Assessing Cancer Surgery Success with Deep Learning

A deep learning microscope can rapidly image large tissue sections, allowing surgeons to inspect the margins of tumors after their removal.

A microscope powered by deep learning technology can quickly image tissue sections, potentially during surgery, to help surgeons determine whether they’ve removed all cancer cells, according to a study published in Proceedings of the National Academy of Sciences (PNAS).

With a typical microscope, only things that are the same distance from the lens can be brought clearly into focus. Features even a few millionths of a meter closer to or farther from the microscope’s objective will appear blurry. This means that microscope samples are typically thin and mounted between glass slides.

Providers currently examine tumor margins using slides, which are difficult to prepare. Clinicians usually send removed tissue to a hospital lab, where experts either freeze it or prepare it with chemicals before making razor-thin slices and mounting them on slides.

The process is time-consuming and requires specialized equipment and skilled personnel. Researchers noted that few hospitals are able to examine slides for tumor margins during surgery, and hospitals in many parts of the world lack the necessary equipment and expertise.

“The main goal of the surgery is to remove all the cancer cells, but the only way to know if you got everything is to look at the tumor under a microscope,” said Rice’s Mary Jin, a PhD student in electrical and computer engineering and co-lead author of the study.

“Today, you can only do that by first slicing the tissue into extremely thin sections and then imaging those sections separately. This slicing process requires expensive equipment and the subsequent imaging of multiple slices is time-consuming. Our project seeks to basically image large sections of tissue directly, without any slicing.”

Researchers developed a deep learning extended depth-of-field microscope (DeepDOF), a tool that optimizes both image collection and image post-processing. The team used 1,200 images from a database of histological slides to train the algorithm.
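
The study’s exact training pipeline isn’t described in this article, but the general idea of drawing training patches from a database of histology slide images can be sketched roughly as follows. The directory layout, file format, patch size, and class name are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch only: cut histology slide images into fixed-size patches
# that a deblurring model could be trained on. Paths, patch size, and file
# format are assumptions, not details from the DeepDOF study.
from pathlib import Path

import numpy as np
from PIL import Image
import torch
from torch.utils.data import Dataset


class HistologyPatches(Dataset):
    """Yields random square patches from a folder of histology slide images."""

    def __init__(self, image_dir, patch_size=256):
        self.paths = sorted(Path(image_dir).glob("*.png"))  # hypothetical slide folder
        self.patch_size = patch_size

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = np.asarray(Image.open(self.paths[idx]).convert("RGB"), dtype=np.float32) / 255.0
        ps = self.patch_size
        # Random crop; assumes each slide image is larger than the patch size.
        # During training, the crop serves as the sharp ground truth that the
        # deblurring network tries to recover.
        top = np.random.randint(0, img.shape[0] - ps)
        left = np.random.randint(0, img.shape[1] - ps)
        patch = img[top:top + ps, left:left + ps]
        return torch.from_numpy(patch).permute(2, 0, 1)  # channels-first tensor
```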

DeepDOF pairs a standard optical microscope with an inexpensive optical phase mask, costing less than ten dollars, to image whole pieces of tissue and deliver a depth of field as much as five times greater than that of today’s state-of-the-art microscopes.

“Traditionally, imaging equipment like cameras and microscopes is designed separately from image-processing software and algorithms,” said study co-lead author Yubo Tang, a postdoctoral research associate.

“DeepDOF is one of the first microscopes that’s designed with the post-processing algorithm in mind.”

The phase mask is placed over the microscope’s objective to modulate the light entering the microscope.

“The modulation allows for better control of depth-dependent blur in the images captured by the microscope,” said Ashok Veeraraghavan, an imaging expert and associate professor in electrical and computer engineering at Rice.

“That control helps ensure that the deblurring algorithms that are applied to the captured images are faithfully recovering high-frequency texture information over a much wider range of depths than conventional microscopes.”

DeepDOF is able to do this without sacrificing spatial resolution.

“In fact, both the phase mask pattern and the parameters of the deblurring algorithm are learned together using a deep neural network, which allows us to further improve performance,” Veeraraghavan said.

DeepDOF learned to select the optimal phase mask for imaging a particular sample, and it also learned how to eliminate blur from the images it captures of that sample, bringing cells at varying depths into focus.
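
To make the joint design concrete, here is a heavily simplified sketch of optimizing an optical stand-in and a deblurring network with a single loss. The learnable blur kernel is only a proxy for the point-spread function a physical phase mask would produce, and the network sizes, loss, and names are assumptions for illustration, not the architecture used in the study.

```python
# Toy sketch of co-designing the optics and the deblurring network.
# A learnable blur kernel stands in for the phase-mask point-spread function;
# the real system would derive that blur from the mask's physical design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class JointOpticsDeblur(nn.Module):
    def __init__(self, kernel_size=15):
        super().__init__()
        self.kernel_size = kernel_size
        # Learnable stand-in for the blur a phase mask would induce.
        self.psf_logits = nn.Parameter(torch.zeros(kernel_size * kernel_size))
        # Small CNN standing in for the deblurring network.
        self.deblur = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, sharp):
        k = self.kernel_size
        # Normalize so the blur kernel is non-negative and sums to one.
        kernel = torch.softmax(self.psf_logits, dim=0).view(1, 1, k, k)
        kernel = kernel.expand(3, 1, k, k).contiguous()  # one kernel per RGB channel
        blurred = F.conv2d(sharp, kernel, padding=k // 2, groups=3)
        return self.deblur(blurred)


model = JointOpticsDeblur()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

sharp = torch.rand(4, 3, 256, 256)      # stand-in for sharp histology patches
optimizer.zero_grad()
recovered = model(sharp)
loss = F.mse_loss(recovered, sharp)     # how well the full pipeline restores the patch
loss.backward()                         # gradients reach both the "optics" and the CNN
optimizer.step()                        # so both are updated in the same step
```

Because the reconstruction loss backpropagates through both the stand-in optics and the network, one optimizer updates them together, mirroring the idea that the mask pattern and the deblurring parameters are learned jointly.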

“Once the selected phase mask is printed and integrated into the microscope, the system captures images in a single pass and the machine learning algorithm does the deblurring,” Veeraraghavan said.
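
At run time only the software half of that design is exercised: each single-pass capture is passed through the trained deblurring network. Below is a minimal sketch of that step, using an untrained stand-in network and a random frame in place of a real capture.

```python
# Illustrative inference step: once the phase mask is part of the hardware,
# only the deblurring network runs on each captured frame.
import torch
import torch.nn as nn

# Stand-in deblurring network; in practice the weights produced by the joint
# optimization would be loaded here (e.g., from a hypothetical checkpoint file).
deblur = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
deblur.eval()

with torch.no_grad():
    frame = torch.rand(1, 3, 1024, 1024)  # stand-in for one single-pass capture
    restored = deblur(frame)              # estimate with cells at all depths in focus
```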

DeepDOF can capture and process images in as little as two minutes.

“We’ve validated the technology and shown proof-of-principle,” said Rebecca Richards-Kortum, a corresponding author of the study. “A clinical study is needed to find out whether DeepDOF can be used as proposed for margin assessment during surgery. We hope to begin clinical validation in the coming year.”

With the new microscope, the researchers expect to improve the current process for confirming that all cancer cells have been removed during surgery.

“Current methods to prepare tissue for margin status evaluation during surgery have not changed significantly since first introduced over 100 years ago,” said study co-author Ann Gillenwater, MD, a professor of head and neck surgery at MD Anderson.

“By bringing the ability to accurately assess margin status to more treatment sites, the DeepDOF has potential to improve outcomes for cancer patients treated with surgery.”
