While the technology world is fixated on generative AI and its supposed potential to upend the economy and the job market, researchers are using neural networks to tackle problems in science, energy, health, and security, such as detecting rogue nuclear weapons.
The Pacific Northwest National Laboratory (PNNL), one of the US Department of Energy’s national laboratories, is applying machine learning (ML) techniques to search for undetected nuclear threats. According to PNNL, ML is now in widespread use and can be used to build “safe, trustworthy, science-based systems” designed to give people and nations answers to many challenging scientific problems.
According to PNNL, the first ML algorithm was demonstrated in 1962, when an IBM 7094 computer defeated a human opponent at checkers. Playing against checkers champion Robert Nealey, the system was able to adapt on its own without being explicitly programmed to do so.
Today, machine learning powers voice-activated assistants like Siri and Alexa as well as tailored shopping recommendations. Generative AI tools such as ChatGPT are merely the latest public manifestation of a technology that has had decades to develop and mature.
PNNL researchers are also using machine learning to detect and, potentially, mitigate nuclear threats. The laboratory’s scientists are combining their expertise in nuclear nonproliferation with “artificial reasoning,” with the primary goal of using ML and data analytics to monitor radioactive materials that could be used to make nuclear weapons.
PNNL’s AI work could benefit the International Atomic Energy Agency (IAEA), which monitors nuclear reprocessing facilities in non-nuclear-weapon states to verify that plutonium separated from spent nuclear fuel is not later used to make nuclear weapons. The IAEA relies on sample analysis and process monitoring in addition to in-person inspections, which can be time-consuming and labor-intensive.
The algorithms developed by PNNL build a virtual replica of an IAEA-monitored facility and track “important temporal patterns” in its process data, training a model to forecast the patterns that routine operation of the facility’s various areas would produce. If the data gathered on-site does not match the virtual prediction, inspectors can be called back to the plant for a follow-up inspection.
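PNNL has not published the internals of these models, but the general idea maps onto a standard forecasting-based anomaly check: fit a model to time-series data recorded during routine operation, then flag any interval where new measurements drift from the forecast by more than an expected margin. The sketch below illustrates that pattern with a simple autoregressive baseline; the data, threshold, and parameters are hypothetical and not drawn from PNNL’s system.

```python
# Minimal sketch of forecast-based anomaly flagging on hypothetical
# process data (not PNNL's actual model). An autoregressive model is fit
# to measurements from routine operation; later measurements that deviate
# from the one-step forecast by more than k standard deviations are
# flagged for follow-up.
import numpy as np

def fit_ar(train, order=4):
    """Least-squares fit of an AR(order) model to a 1-D series."""
    X = np.column_stack([train[i:len(train) - order + i] for i in range(order)])
    y = train[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid_std = np.std(y - X @ coeffs)
    return coeffs, resid_std

def flag_anomalies(series, coeffs, resid_std, order=4, k=4.0):
    """Return indices where the series deviates from the AR forecast."""
    flagged = []
    for t in range(order, len(series)):
        pred = series[t - order:t] @ coeffs
        if abs(series[t] - pred) > k * resid_std:
            flagged.append(t)
    return flagged

rng = np.random.default_rng(0)
# "Routine operation" training data: a noisy periodic process signal.
t = np.arange(2000)
normal = np.sin(2 * np.pi * t / 50) + 0.05 * rng.normal(size=t.size)
coeffs, resid_std = fit_ar(normal)

# New on-site data with a deviation injected between indices 300 and 320.
new = np.sin(2 * np.pi * np.arange(400) / 50) + 0.05 * rng.normal(size=400)
new[300:320] += 1.5
print(flag_anomalies(new, coeffs, resid_std))  # expect flags at the jumps near 300 and 320
```

In PNNL’s description, a mismatch like the one flagged here is what would prompt inspectors to return to the facility.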
Another ML-powered system developed at PNNL interprets images of radioactive material using an “autoencoder” model, which can be trained to “compress and decompress images” into brief descriptions that are useful for computational research. The model examines images of tiny radioactive particles, looking for the distinctive microstructure the material develops as a result of environmental conditions or the purity of the raw materials used at its production facility.
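PNNL describes the autoencoder only at a high level, so the following is a minimal sketch of what such a model could look like, assuming small grayscale micrographs and a PyTorch implementation; the architecture, image size, and latent dimension are illustrative choices, not PNNL’s.

```python
# Hypothetical convolutional autoencoder for compressing particle
# micrographs into short latent descriptions. Image size, channel counts,
# and latent dimension are placeholder assumptions.
import torch
import torch.nn as nn

class ParticleAutoencoder(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        # Encoder: 1x64x64 grayscale micrograph -> latent vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # -> 16x32x32
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # -> 32x16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        # Decoder: latent vector -> reconstructed 1x64x64 image.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16),
            nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 2, stride=2),    # -> 16x32x32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),     # -> 1x64x64
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)              # compact description of the image
        return self.decoder(z), z

# One training step: minimize reconstruction error on a placeholder batch.
model = ParticleAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

images = torch.rand(8, 1, 64, 64)        # stand-in for real micrographs
optimizer.zero_grad()
recon, latent = model(images)
loss = loss_fn(recon, images)
loss.backward()
optimizer.step()
```

The latent vector produced by the encoder plays the role of the brief description the lab refers to; reconstructing the image from it is a check that the compression retains the microstructural detail of interest.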
According to PNNL, law enforcement agencies such as the FBI can then speed up identification by comparing the microstructure of a field sample against a library of electron microscope images compiled by universities and national laboratories. PNNL’s experts caution that computers and machine learning algorithms will not replace humans in detecting nuclear threats any time soon, but they say the technology can be effective in spotting, and helping prevent, a potential nuclear disaster on US territory.
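That comparison step could be as simple as a nearest-neighbor search over latent descriptions like the ones above: embed the field sample, then rank the reference micrographs by distance. The sketch below uses random placeholder vectors and made-up facility labels; it is an assumption about how such a lookup might work, not PNNL’s or the FBI’s actual workflow.

```python
# Hypothetical comparison step: match a field sample's embedding against a
# library of reference micrograph embeddings. Vectors and labels are
# randomly generated placeholders.
import numpy as np

def nearest_reference(sample_vec, library_vecs, library_labels, top_k=3):
    """Return the top_k reference entries closest to the sample embedding."""
    dists = np.linalg.norm(library_vecs - sample_vec, axis=1)
    order = np.argsort(dists)[:top_k]
    return [(library_labels[i], float(dists[i])) for i in order]

rng = np.random.default_rng(1)
library_vecs = rng.normal(size=(500, 32))                     # reference embeddings
library_labels = [f"facility_{i % 20}" for i in range(500)]   # made-up source labels

sample_vec = rng.normal(size=32)                              # embedded field sample
print(nearest_reference(sample_vec, library_vecs, library_labels))
```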