Zijiao Chen can read your mind, with a little help from an fMRI scanner and powerful artificial intelligence.
According to a report published in November, Chen, a doctoral student at the National University of Singapore, is part of a research team that has shown it can read human brain scans to determine what a person is picturing in their mind.
They did this by having participants undergo functional magnetic resonance imaging, or fMRI, while viewing more than 1,000 images, such as a red firetruck, a grey building and a giraffe eating leaves, and recording the resulting brain signals over time. The group, which comprised scientists from Stanford University, the National University of Singapore and the Chinese University of Hong Kong, then fed those signals to an AI model, training it to associate particular brain patterns with particular images.
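In outline, that training step amounts to learning a mapping from brain responses to a feature representation of the viewed image. The sketch below is a minimal illustration of that idea, not the team's actual method; the arrays, dimensions and the use of ridge regression are all assumptions.

```python
# Minimal sketch of the idea (NOT the team's actual pipeline): learn a mapping
# from fMRI voxel responses to a feature embedding of the viewed image.
# All arrays, shapes and the choice of ridge regression are assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

n_images, n_voxels, embed_dim = 1000, 8000, 512

fmri = np.random.randn(n_images, n_voxels)           # one voxel pattern per viewed image
image_embeds = np.random.randn(n_images, embed_dim)  # feature embedding of each image

X_train, X_test, y_train, y_test = train_test_split(
    fmri, image_embeds, test_size=0.1, random_state=0
)

decoder = Ridge(alpha=10.0)                 # linear map: brain activity -> image features
decoder.fit(X_train, y_train)
predicted_embeds = decoder.predict(X_test)  # features decoded from held-out scans
```

The decoded features would then be handed to an image generator, as described later in the article, to render a picture.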
The results are astonishing and surreal. An image of a home’s driveway and bedroom produced a room with a similar colour scheme. An ornate stone tower shown to one study participant generated images of a similar tower with windows set at odd angles. A bear became a strange, shaggy, dog-like creature.
About 84% of the time, the generated image closely matched the original image’s semantic content and physical characteristics, such as colour and form.
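The article does not spell out how a “match” was scored. One common way to evaluate such reconstructions, shown below purely as an assumption rather than the study’s actual metric, is two-way identification: a reconstruction counts as correct if its embedding is closer to the original image than to a randomly chosen distractor.

```python
# One plausible way to score a "match" (an assumption, not necessarily the
# study's metric): two-way identification over image embeddings.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
originals = rng.standard_normal((100, 512))                 # embeddings of shown images
recons = originals + 0.5 * rng.standard_normal((100, 512))  # stand-in decoded embeddings

correct = 0
for i in range(len(originals)):
    j = rng.choice([k for k in range(len(originals)) if k != i])  # random distractor
    if cosine(recons[i], originals[i]) > cosine(recons[i], originals[j]):
        correct += 1

print(f"Two-way identification accuracy: {correct / len(originals):.0%}")
```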
The model must be trained on each participant’s brain activity, gathered over the course of about 20 hours, before it can infer images from their fMRI data. Still, researchers believe the technology could be available to everyone within the next ten years.
According to Chen, the technology might help people with disabilities recover what they are able to see and think. In a perfect world, Chen added, humans would not even need cellphones to communicate; they would only need to think.
Although the study involved only a small number of participants, the results suggest that the team’s noninvasive brain recordings could be a first step towards decoding images from the brain more accurately and efficiently.
Scientists have been developing brain-decoding technologies for more than a decade, and many AI researchers are now working on neuro-related applications, including efforts by Meta and the University of Texas at Austin to decode speech and language.
Jack Gallant, a scientist at the University of California, Berkeley, began studying brain decoding more than ten years ago using a different method. The speed at which this technology advances, he said, depends not only on the model used to decode the brain, in this case the AI, but also on brain imaging hardware and how much data researchers can collect. Both fMRI development and data collection pose challenges for anyone working on brain decoding.
It is comparable, Gallant said, to walking into Xerox PARC in the 1970s and declaring, “Oh, look, we’re all going to have PCs on our desks.”
He predicted that brain decoding would be used in medicine within the next ten years, but that it would be decades before the technology reached the general population.
Still, the work is the latest development in an AI boom that has captured the public’s attention. AI has recently produced everything from Shakespearean sonnets to term papers, advances made possible in large part by so-called transformer models, which let researchers feed AI massive amounts of data from which it quickly learns patterns.
The National University of Singapore team used Stable Diffusion, the image-generating AI program that has gained worldwide popularity for creating stylized images of kittens, friends, spaceships and practically anything else a person could imagine.
With the program, associate professor Helen Zhou and her colleagues can summarize an image using a vocabulary of colour, form and other features, then have Stable Diffusion generate an image from that summary almost instantly.
The images the algorithm produces are not photographic matches, but they are thematically faithful to the original image, perhaps because each individual perceives reality differently, she added.
“When you look at the grass, maybe I’ll think of the mountains, you’ll think of the flowers and others will think of the river,” Zhou added.
Human imagination, she said, accounts for some of the variation in the output. But the AI itself, which can produce different images from the same set of inputs, may also be responsible for some of the variance.
To generate images from a person’s brain activity, the AI model is fed visual “tokens” as input. Rather than a vocabulary of words, it is given a vocabulary of colours and shapes that combine to form the picture.
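To make that concrete, here is a hedged sketch of how brain-derived embeddings could stand in for an encoded text prompt in Stable Diffusion, using the Hugging Face diffusers library’s prompt_embeds argument. The brain_to_tokens projection and the fMRI feature vector are hypothetical placeholders, not the team’s published architecture.

```python
# Illustrative sketch (assumptions throughout): condition Stable Diffusion on
# brain-derived embeddings instead of an encoded text prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical trained module: projects an fMRI feature vector onto the
# 77 x 768 token-embedding sequence that Stable Diffusion 1.5 expects.
brain_to_tokens = torch.nn.Linear(4096, 77 * 768).half().to("cuda")

fmri_features = torch.randn(1, 4096, dtype=torch.float16, device="cuda")
embeds = brain_to_tokens(fmri_features).reshape(1, 77, 768)

# diffusers lets callers pass precomputed conditioning via `prompt_embeds`,
# bypassing the text encoder entirely.
image = pipe(prompt_embeds=embeds).images[0]
image.save("reconstruction.png")
```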
The technique is far from ready for widespread use, since the model must be painstakingly trained on each individual’s brain activity.
Zhou acknowledged there is still much room for improvement: before the model can predict anything about you, you essentially have to get into a scanner and look at hundreds of images.
Reading the minds of random people on the street is not currently possible, she said, but the team is working to generalize the model across subjects in the future.
Like many other recent breakthroughs in AI, brain-reading technology raises ethical and legal questions. Some experts say that in the wrong hands, the AI model could be used for interrogation or surveillance.
According to Nita Farahany, a professor of law and ethics in new technologies at Duke University, the line between what could be empowering and what could be oppressive is very thin, and unless we get out ahead of the technology, we are more likely to see its repressive consequences.
She described brain-sensing products that are already on the market, or soon will be, which could bring about a world in which our brain readings are not only shared but judged. If AI brain decoding leads to the commoditization of that information, she worries, governments and businesses could misuse it.
It is a world, she added, in which not only is your brain activity being collected and your brain state, from attention to focus, being monitored, but one in which people are hired, fired and promoted based on what their brain measurements reveal.
Because the technology is already moving towards widespread use, she said, we need to put governance and rights in place now, before it truly becomes part of everyone’s daily lives.
The Singaporean researchers are continuing to advance their technology, first by reducing the time a participant has to spend inside an fMRI machine. They then plan to scale up the number of subjects they test.
“We believe that it will be feasible in the future,” Zhou said, adding that a machine learning model will perform much better with [a larger] amount of data available.