
Machine Learning Provides Details of Cell Machinery

Open up an introductory biology textbook and you’ll see a familiar diagram: a blob-like cell filled with colorful structures – the internal machinery that makes the cell work.

Cell biologists have known the basic functions of most of these structures, called organelles, for decades. Bean-shaped mitochondria produce energy, for example, and long, thin microtubules help cargo move around the cell. But much remains to be learned about how all of these parts interact.

High-performance microscopy and a large dose of machine learning are now helping to change that: New computer algorithms can automatically identify about 30 different types of organelles and other structures in super-high-resolution images of whole cells, a team of scientists from the Janelia Research Campus of the Howard Hughes Medical Institute reported October 6, 2021 in the journal Nature.

The detail in these images would be almost impossible to analyze by hand across the entire cell, says Aubrey Weigel, who led the Janelia project team called COSEM (for Cell Organelle Segmentation in Electron Microscopy). The data for a single cell consists of tens of thousands of images; it would take a person more than 60 years to trace all of the organelles in a cell through that collection by hand. The new algorithms make it possible to analyze an entire cell in hours instead of years.

In addition to two accompanying articles in Nature, the Janelia scientists have also launched a data portal, OpenOrganelle, through which anyone can access the data sets and tools they have created.

These resources are invaluable to scientists studying how organelles keep cells running, says Jennifer Lippincott-Schwartz, senior group leader and executive director of Janelia's new research area, 4D Cell Physiology, who is already using the data in her own research. For the first time, these hidden relationships are visible.

Detailed data

The COSEM team’s journey began with data collected by high-performance electron microscopes housed in a special vibration-proof room in Janelia.

These microscopes have been producing high-resolution snapshots of the fly brain for ten years. Janelia senior group leader Harald Hess and senior scientist Shan Xu developed the microscopes, which use a focused ion beam to mill away super-thin layers of a sample so that each newly exposed face can be imaged, an approach called FIB-SEM imaging.

The microscopes capture images layer by layer, and computer programs then combine these images into a detailed 3D representation of the brain. Based on these data, Janelia researchers produced the most detailed neural map of the fly brain to date.

In the midst of imaging the fly brain, Hess and Xu's team also analyzed other samples and, over time, built up a collection of data from many cell types, including mammalian cells.

Weigel, then a postdoc in Lippincott-Schwartz's laboratory, began mining this data for her own research. The resolving power of the FIB-SEM images was amazing; Weigel could see things at a level of detail she could not have imagined before. But there was more information in a single sample than she could analyze in several lifetimes. When she realized that others at Janelia were working on computational projects that could speed things up, she began organizing a collaboration.

Setting limits

Larissa Heinrich, doctoral student in the laboratory of group leader Stephan Saalfeld, had previously developed machine learning tools that could identify synapses, the connections between neurons, in electron microscopic data.

For COSEM, she adapted these algorithms to trace, or segment, organelles in cells.

Saalfeld and Heinrich's synapse-finding algorithms worked by assigning each pixel in an image a number that reflected how far that pixel was from the nearest synapse; an algorithm then used those numbers to identify and label all of the synapses in an image. The COSEM algorithms work similarly, but with more dimensions, says Saalfeld. They classify each pixel according to its distance from each of 30 different types of organelles and structures. The algorithms then combine all of these numbers to predict where the organelles are located.
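The distance-based scheme Saalfeld describes can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the COSEM team's actual code: it assumes a network has already predicted, for each organelle class, a signed distance from every pixel to that organelle's boundary (negative inside, positive outside), and it turns those numbers into a label image by thresholding and picking the class each pixel lies deepest inside.

```python
import numpy as np

def segment_from_distances(distance_maps, threshold=0.0):
    """Turn per-pixel predicted distances into organelle labels.

    distance_maps: array of shape (n_classes, H, W) holding, for each
    organelle class, a signed distance from each pixel to that class's
    boundary (negative inside the organelle, positive outside).
    Returns an (H, W) label image: 0 = background, k = class k.
    """
    # A pixel is inside class k when its signed distance is below threshold.
    inside = distance_maps < threshold            # (n_classes, H, W) booleans
    # Among classes that claim a pixel, pick the one it lies deepest inside.
    best = np.argmin(distance_maps, axis=0)       # (H, W) class indices
    # Pixels claimed by no class become background (label 0).
    return np.where(inside.any(axis=0), best + 1, 0)
```

In practice the real pipeline works on 3D volumes and learned predictions, but the same threshold-and-argmin logic applies per voxel.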

Using training data from scientists who manually traced the boundaries of organelles and assigned numbers to pixels, the algorithms learned that certain combinations of numbers are unreasonable, says Saalfeld. For example, a pixel cannot be inside a mitochondrion and inside the endoplasmic reticulum at the same time.
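That exclusivity constraint can be checked directly on predicted distances. The sketch below is illustrative only, with hypothetical class indices and pair list: a pixel counts as a violation when its signed distance is negative for both members of a mutually exclusive pair, such as a mitochondrion and the endoplasmic reticulum.

```python
import numpy as np

# Hypothetical class indices and exclusivity list for illustration.
MITO, ER = 0, 1
EXCLUSIVE_PAIRS = [(MITO, ER)]

def exclusivity_violations(distance_maps, threshold=0.0):
    """Count pixels predicted to lie inside two mutually exclusive
    organelles at once (signed distance below threshold for both)."""
    inside = distance_maps < threshold
    count = 0
    for a, b in EXCLUSIVE_PAIRS:
        count += int(np.logical_and(inside[a], inside[b]).sum())
    return count
```

A check like this could flag implausible predictions; a training pipeline might instead penalize such overlaps in the loss so the network learns to avoid them.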

To answer questions such as how many mitochondria a cell contains, or how large their surface area is, the algorithms have to go even further, says group leader Jan Funke. He has developed algorithms that incorporate prior knowledge about the properties of organelles: for example, that microtubules are long and thin.

Based on this information, the computer can assess where a microtubule begins and ends. The team can observe how this prior knowledge affects the results of the computer program, whether it makes the algorithm more or less accurate, and then make the necessary adjustments.

After two years of work, the COSEM team has arrived at a set of algorithms that produce good results on the data collected so far. These results lay an important foundation for future research at Janelia, says Weigel. A new effort led by Xu is pushing FIB-SEM imaging to even higher levels of detail, and the team plans to expand the cell annotation database with detailed images of many other types of cells and tissues.

Together, these advances will support Janelia's next 15-year research area, 4D Cell Physiology, an effort led by Lippincott-Schwartz to understand how cells interact with one another within each of the many different types of tissue that make up the body, says Wyatt Korff, a Janelia project team manager.
