Understanding how the brain adapts using machine learning

Researchers at Johns Hopkins University have created a technique that uses artificial intelligence to visualize and monitor changes in the strength of synapses, the junctions that allow nerve cells in the brain to communicate with one another, in living animals. The method, which the researchers report in Nature Methods, should help them better understand how these connections in the human brain change as a result of learning, aging, injury, and disease.

According to Dwight Bergles, professor in the Department of Neuroscience at the Johns Hopkins University School of Medicine, if you want to understand more about how an orchestra plays, you have to study individual members over time. This new technique does that for synapses in the brains of living animals.

Bergles collaborated on the work with Richard Huganir, Bloomberg Distinguished Professor at JHU and director of the Department of Neuroscience, as well as Adam Charles and Jeremias Sulam, both assistant professors in the Department of Biomedical Engineering. All four researchers are members of the Kavli Neuroscience Discovery Institute at Johns Hopkins University.

At synapses, nerve cells exchange chemical messengers to transmit information. According to the authors, life experiences as varied as learning new skills and encountering new environments can alter synapses in the brain, strengthening or weakening the connections that underlie learning and memory. Understanding how these small changes take place across the trillions of synapses in our brains is essential to understanding how the brain functions normally and how disease affects it, although it is a difficult task.

Scientists have long searched for better ways to visualize the shifting chemistry of synaptic messaging, because the high density of synapses in the brain and their small size make them extremely difficult to see even with new state-of-the-art microscopes.

Charles explains that the challenge is to extract the meaningful parts of the signal from tough, blurry, noisy imaging data.

To achieve this, Bergles, Sulam, Charles, Huganir, and their coworkers turned to machine learning, a computational framework that enables flexible development of automatic data-processing tools. Machine learning has been applied successfully across many areas of biomedical imaging, and here the scientists used it to improve the quality of images made up of thousands of synapses. Before it can serve as a powerful tool for automated detection that outpaces human speeds, the system must first be “trained” by showing the algorithm what high-quality images of synapses should look like.

For these studies, the researchers used genetically modified mice whose glutamate receptors, the chemical sensors at synapses, fluoresced green when exposed to light. Because each receptor emits the same amount of light, the quantity of fluorescence produced by a synapse in these mice is a measure of the number of receptors present, and therefore of the synapse’s strength.
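The measurement idea described above can be sketched in a few lines: if every labeled receptor emits the same amount of light, then summing pixel intensities inside each segmented synapse gives a proxy for its strength. This is a hypothetical illustration with invented data, not the study's actual pipeline; the function name and toy values are my own.

```python
import numpy as np

def synapse_strengths(image, labels):
    """Sum fluorescence intensity within each labeled synapse region.

    image  -- 2D array of pixel intensities
    labels -- 2D integer array; 0 = background, k > 0 = synapse k
    Returns a dict mapping synapse id -> integrated fluorescence.
    """
    strengths = {}
    for k in np.unique(labels):
        if k == 0:
            continue  # skip background pixels
        strengths[int(k)] = float(image[labels == k].sum())
    return strengths

# Toy example: two "synapses" on a 4x4 frame (values invented)
img = np.array([[0, 5, 0, 0],
                [0, 5, 0, 3],
                [0, 0, 0, 3],
                [0, 0, 0, 0]], dtype=float)
lbl = np.array([[0, 1, 0, 0],
                [0, 1, 0, 2],
                [0, 0, 0, 2],
                [0, 0, 0, 0]])
print(synapse_strengths(img, lbl))  # {1: 10.0, 2: 6.0}
```

Comparing these integrated values for the same synapse across imaging sessions is what lets a change in fluorescence be read as a change in strength.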

As expected, imaging in the intact brain yielded low-quality images in which individual clusters of glutamate receptors at synapses were difficult to distinguish and follow over time. To convert these into higher-quality images, the scientists trained a machine learning algorithm on images of brain slices (ex vivo) taken from the same type of genetically modified mice. Because these samples did not come from living animals, it was possible to capture both low-quality images comparable to those acquired from live animals and considerably higher-quality images of the same tissue using a different microscopy approach.

This cross-modality data-gathering scheme allowed the team to create an enhancement algorithm that generates higher-resolution images from lower-quality ones, much like the images they collected from living mice. As a result, data obtained from the intact brain can be greatly improved, allowing individual synapses, numbering in the thousands, to be detected and tracked over the course of multi-day experiments.
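The training setup described in the last two paragraphs is supervised learning on paired data: low-quality images as input, high-quality images of the same content as the target. The sketch below shows the idea at its simplest, fitting a 3x3 linear denoising filter by least squares on synthetic pairs; the actual study used a far more sophisticated deep learning model, and all data here are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

def patches(img, k=3):
    """Extract all k x k patches (flattened) from a 2D image."""
    h, w = img.shape
    return np.array([img[i:i+k, j:j+k].ravel()
                     for i in range(h - k + 1)
                     for j in range(w - k + 1)])

# Paired training data: clean "ex vivo" targets and noisy "in vivo" inputs
clean = rng.random((200, 12, 12))
noisy = clean + rng.normal(0, 0.3, clean.shape)

X = np.vstack([patches(n) for n in noisy])                  # noisy 3x3 patches
y = np.concatenate([c[1:-1, 1:-1].ravel() for c in clean])  # clean center pixels

w, *_ = np.linalg.lstsq(X, y, rcond=None)  # learned enhancement filter

# Apply to a held-out noisy image and compare error against doing nothing
test_clean = rng.random((12, 12))
test_noisy = test_clean + rng.normal(0, 0.3, test_clean.shape)
denoised = patches(test_noisy) @ w
target = test_clean[1:-1, 1:-1].ravel()
mse_before = np.mean((test_noisy[1:-1, 1:-1].ravel() - target) ** 2)
mse_after = np.mean((denoised - target) ** 2)
print(mse_after < mse_before)  # the learned filter reduces reconstruction error
```

The key design point mirrored here is that the high-quality targets come from a different acquisition modality than the inputs, so the model learns a mapping from one image quality regime to the other.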

To track changes in receptors over time in living animals, the researchers next used microscopy to repeatedly image the same synapses in mice over a period of weeks. After baseline images were taken, the animals were placed for a single five-minute period in a chamber with novel sights, odors, and tactile stimuli. The same region of the brain was then imaged every other day to determine whether and how the novel stimuli had changed the number of glutamate receptors at synapses.
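Following the same synapse across sessions requires re-identifying each detection from one day in the next day's image. One simple way to do this, sketched below with invented coordinates and a made-up matching radius, is to match each baseline synapse centroid to the nearest detection within a small distance and record the fluorescence change; this is an illustrative assumption, not necessarily the tracking method the authors used.

```python
import numpy as np

def match_synapses(day0, day1, max_dist=2.0):
    """Greedily match day0 synapses to the nearest day1 detection within max_dist.

    day0, day1 -- arrays of shape (n, 3): x, y, fluorescence
    Returns a list of (index0, index1, fluorescence_change) tuples.
    """
    matches = []
    used = set()
    for i, (x, y, f0) in enumerate(day0):
        dists = np.hypot(day1[:, 0] - x, day1[:, 1] - y)
        j = int(np.argmin(dists))
        if dists[j] <= max_dist and j not in used:
            used.add(j)  # each day1 detection can be claimed only once
            matches.append((i, j, float(day1[j, 2] - f0)))
    return matches

# Toy data: one synapse reappears nearby and brightens; one goes unmatched
day0 = np.array([[10.0, 10.0, 100.0],
                 [30.0, 12.0, 80.0]])
day1 = np.array([[10.5, 9.8, 130.0],   # same synapse, strengthened
                 [55.0, 40.0, 60.0]])  # distant detection, no match
print(match_synapses(day0, day1))  # [(0, 0, 30.0)]
```

Aggregating these per-synapse changes across thousands of matches is what yields the kind of strengthening-versus-weakening spectrum the article describes next.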

Although the primary goal of the research was to create a set of methods for analyzing synapse-level changes in a variety of contexts, the team discovered that even this minor environmental change produced a spectrum of fluorescence changes in synapses in the cerebral cortex: some connections strengthened while others weakened, with a bias toward strengthening in animals exposed to the novel environment.

The investigations were made possible by close collaboration among experts with different specialties, ranging from molecular biology to artificial intelligence, who do not often work closely together. The researchers are now using this machine learning strategy to examine synaptic changes in animal models of Alzheimer’s disease, and they hope it will provide fresh insight into the synaptic changes that take place in other disease and injury scenarios.

Sulam says the team is eager to see where the rest of the scientific community takes the approach.

Yu Kang Xu, a PhD candidate and Kavli Neuroscience Discovery Institute scholar at JHU, Austin Graves, an assistant research professor of biomedical engineering at JHU, and Gabrielle Coste, a PhD candidate in neuroscience at JHU, carried out the experiments for this work.
