
Engineers create a LEGO-style AI chip

What if cellphones, smartwatches, and other wearable devices could be kept rather than discarded for a newer model? Instead of being abandoned, these devices could be upgraded with cutting-edge sensors and processors that snap onto a device's internal chip, much like LEGO bricks added to an existing structure. Such reconfigurable chipware could keep devices up to date while reducing electronic waste.

With a LEGO-like design for a stackable, reconfigurable artificial intelligence (AI) chip, MIT engineers have taken a step toward that modular vision.

The chip's layers communicate optically: alternating layers of sensing and processing elements, together with light-emitting diodes (LEDs), let each layer exchange signals with the next as light. Other modular chip designs use conventional wiring to relay signals between layers. Such intricate connections are difficult, if not impossible, to sever and rewire, so those stackable designs cannot be reconfigured.

The MIT design transmits data through the chip using light rather than physical wires. The chip can therefore be reconfigured, with layers that can be swapped out or stacked on, for example to add new sensors or updated processors.

Computing layers and sensors can be added as needed, such as for light, pressure, and even smell, says Jihoon Kang, an MIT postdoc. We call this a LEGO-like reconfigurable AI chip because it has unlimited expandability depending on the combination of layers.

The researchers are keen to apply the design to edge computing devices, which are self-contained sensors and other electronics that operate independently of any central or distributed resources, like supercomputers or cloud computing.

As we move into the era of the internet of things (IoT) based on sensor networks, demand for multifunctional edge computing devices will skyrocket, says Jeehwan Kim, associate professor of mechanical engineering at MIT. Our proposed hardware architecture will offer highly versatile edge computing in the future.

The team's findings were published today in Nature Electronics. In addition to Kim and Kang, the MIT authors include co-first authors Chanyeol Choi, Hyunseok Kim, and Min-Kyu Song, and contributing authors Hanwool Yeon, Celesta Chang, Jun Min Suh, Jiho Shin, Kuangye Lu, Bo-In Park, Yeongin Kim, Han Eol Lee, Doyoon Lee, Subeen Pang, Sang-Hoon Bae, Hyun S. Kum, and Peng Lin, along with collaborators from Harvard University, Tsinghua University, Zhejiang University, and elsewhere.

Lighting the way

The team's design is currently configured to perform basic image recognition tasks. It does so through a stack of image sensors, LEDs, and processors made from artificial synapses: arrays of memory resistors, or "memristors," that the team previously developed, which together function as a physical neural network, or "brain-on-a-chip." Each array can be trained to process and classify signals directly on the chip, without the need for external software or an Internet connection.
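The physics behind a memristor crossbar acting as a neural network can be sketched in software: Ohm's law multiplies each input voltage by a programmable conductance, and Kirchhoff's current law sums the resulting currents along each column, so the array computes a vector-matrix product "for free." The following is a purely illustrative toy model, not the team's hardware; the conductance and voltage values are invented.

```python
# Toy model of a memristor crossbar. Each cell contributes a current
# I = V * G (Ohm's law), and each column wire sums its cells' currents
# (Kirchhoff's current law), yielding one vector-matrix multiply.

def crossbar_output(voltages, conductances):
    """Return the total current flowing out of each crossbar column."""
    n_rows = len(voltages)
    n_cols = len(conductances[0])
    return [
        sum(voltages[i] * conductances[i][j] for i in range(n_rows))
        for j in range(n_cols)
    ]

# Hypothetical 3x2 array: the conductances (arbitrary units) play the
# role of the trained "weights" of the physical neural network.
G = [
    [0.9, 0.1],
    [0.2, 0.8],
    [0.7, 0.3],
]
V = [1.0, 0.5, 1.0]  # input signal, e.g. pixel intensities applied as voltages

print(crossbar_output(V, G))  # column currents, approximately [1.7, 0.8]
```

Training such an array means adjusting the memristors' conductances, after which inference happens in the analog domain with no external processor.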

In their new chip design, the researchers paired image sensors with artificial synapse arrays, which they trained to recognize particular letters, in this case M, I, and T. Rather than using physical wires to relay a sensor's signals to a processor, the team fabricated an optical system between each sensor and artificial synapse array to enable communication between the layers without requiring a physical connection.

Other chips are physically wired through metal, which makes them hard to rewire and redesign, so you would need to make a new chip if you wanted to add any new function, MIT postdoc Hyunseok Kim explains. We replaced that physical wire connection with an optical communication system, which gives us the freedom to stack and add chips however we see fit.

The team's optical communication system consists of paired photodetectors and LEDs, each patterned with tiny pixels. The photodetectors form an image sensor for receiving data, and the LEDs act as transmitters to the next layer. When a signal (for example, an image of a letter) reaches the image sensor, the image's light pattern encodes a particular configuration of LED pixels, which in turn stimulates another layer of photodetectors, along with an artificial synapse array, which classifies the signal based on the pattern and strength of the incoming LED light.

Stacking up

The team fabricated a single chip with a computing core measuring about 4 square millimeters, roughly the size of a piece of confetti. The chip is made up of three image recognition "blocks," each comprising an image sensor, an optical communication layer, and an artificial synapse array for classifying one of three letters: M, I, or T. They then flashed a pixelated image of random letters onto the chip and measured the electrical current that each neural network array produced in response. (The higher the current, the more likely the image is the letter that the particular array has been trained to recognize.)
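The readout rule described above, in which whichever trained block produces the largest current "wins," can be sketched as a winner-take-all comparison. The 3x3 letter templates below are invented for illustration, and the overlap score stands in for the physical current produced by a memristor array:

```python
# Sketch of the winner-take-all readout: each "block" is trained on one
# letter, and the block whose array produces the highest current when an
# image is flashed onto the chip determines the classification.

# Hypothetical 3x3 binary templates for the letters M, I, and T.
TEMPLATES = {
    "M": [1, 0, 1,
          1, 1, 1,
          1, 0, 1],
    "I": [0, 1, 0,
          0, 1, 0,
          0, 1, 0],
    "T": [1, 1, 1,
          0, 1, 0,
          0, 1, 0],
}

def block_current(image, template):
    """Model a block's output current as the image's overlap with its template."""
    return sum(p * t for p, t in zip(image, template))

def classify(image):
    """Return the letter whose block produces the largest current."""
    return max(TEMPLATES, key=lambda letter: block_current(image, TEMPLATES[letter]))

# Flash a clean "M" onto the chip model.
print(classify([1, 0, 1,
                1, 1, 1,
                1, 0, 1]))  # -> M
```

Even this toy model shows why blurry I and T images are hard to tell apart: their templates overlap heavily, so a noisy input drives both blocks to similar currents.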

The researchers discovered that the chip correctly classified clear images of each letter, but it was less capable of distinguishing between blurry images, such as I and T. The researchers were able to quickly replace the chip’s processing layer with a better “denoising” processor, and the chip then correctly identified the images.
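Because the layers communicate optically rather than through fixed wiring, replacing the processing layer amounts to swapping one stage in a pipeline. The sketch below illustrates that modular idea in software; the stage functions and the threshold-based "denoising" are hypothetical stand-ins, not the team's actual processors.

```python
# Sketch of the LEGO-like modularity: the chip is modeled as a stack of
# interchangeable stages, and an upgrade replaces one stage rather than
# requiring a whole new chip.

def sensor(raw):
    """Image sensor stage: pass pixel intensities (0.0-1.0) through."""
    return raw

def basic_processor(pixels):
    """Original processing layer: a naive 0.4 threshold."""
    return [1 if p >= 0.4 else 0 for p in pixels]

def denoising_processor(pixels):
    """Replacement layer: suppress faint noise before thresholding."""
    cleaned = [0.0 if p < 0.5 else p for p in pixels]  # zero out weak pixels
    return [1 if p >= 0.4 else 0 for p in cleaned]

def run_chip(stages, image):
    """Feed the image through each stacked layer in turn."""
    for stage in stages:
        image = stage(image)
    return image

blurry = [0.9, 0.45, 0.8, 0.1]       # one noisy pixel at intensity 0.45
chip = [sensor, basic_processor]      # original stack
print(run_chip(chip, blurry))         # [1, 1, 1, 0]: noise pixel kept
chip[1] = denoising_processor         # snap in the upgraded layer
print(run_chip(chip, blurry))         # [1, 0, 1, 0]: noise pixel removed
```

The point of the sketch is the single-line swap: the rest of the stack is untouched, mirroring how the researchers replaced only the chip's processing layer to handle blurry images.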

We demonstrated stackability, replaceability, and the capability to embed a new function into the chip, says Min-Kyu Song, an MIT postdoc.

The researchers aim to expand the chip's sensing and processing capabilities, and they envision boundless applications.

We can add layers to a cellphone's camera so it can recognize more complex images, or we can turn these into healthcare monitors that can be embedded in wearable electronic skin, says Choi, who previously developed a "smart" skin for monitoring vital signs with Kim.

Another idea, he adds, is modular chips built into electronics that consumers can customize with the latest sensor and processor "bricks."

We can make a general chip platform and sell each layer separately, like a video game, Jeehwan Kim says. We could make different types of neural networks, such as for image or voice recognition, and let the customer choose what they want to add to an existing chip, like a LEGO.

This study was funded in part by the South Korean Ministry of Trade, Industry, and Energy (MOTIE), the Korea Institute of Science and Technology (KIST), and the Samsung Global Research Outreach Program.
