Neural networks can be trained to perform a wide variety of tasks, with self-driving cars and advanced recognition technology among their potential applications. However, neural networks consume large amounts of computing power and are slow to train, which makes them a poor fit for low-power devices such as smartphones and embedded systems. Researchers at the University of California San Diego claim to have found a promising way past these bottlenecks: a neuroinspired hardware-software co-design that makes neural network training more energy-efficient and considerably faster. The team, in collaboration with Adesto Technologies, a California-based semiconductor technology company, devised hardware and algorithms that make the computations involved in neural network training fast and energy efficient.
The details of the study were published in the journal Nature Communications on December 14, 2018.
In-Memory Computing Confers Large Performance Gains on Neural Networks
The researchers developed a hardware component with a subquantum Conductive Bridging RAM (CBRAM) array that can perform in-memory computing for neural network training. This in-memory computing consumes substantially less energy, around 10–100 times less than existing memory technologies. They further improved the performance of the high-capacity memory array with an algorithmic technique known as 'soft-pruning'.
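The article does not describe the algorithm at the level of code, but the general idea behind soft-pruning can be illustrated with a minimal sketch: rather than removing small-magnitude weights outright (hard pruning), they are clamped to a small residual value so they can still take part in later training updates. The function name and the threshold and floor values below are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def soft_prune(weights, threshold=0.05, floor=0.01):
    """Illustrative 'soft' pruning step (assumed, not from the paper):
    weights whose magnitude falls below `threshold` are not removed
    outright, as in hard pruning, but clamped to a small residual
    value `floor`, keeping them available to later training updates."""
    pruned = weights.copy()
    small = np.abs(pruned) < threshold
    pruned[small] = np.sign(pruned[small]) * floor
    return pruned

w = np.array([0.30, -0.02, 0.04, -0.50])
print(soft_prune(w))  # -> [ 0.3  -0.01  0.01 -0.5 ]
```

The contrast with hard pruning is the key design choice here: a zeroed weight is gone for good, while a softly pruned weight can recover if later gradients favor it.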
Researchers Hope to Develop an Energy-Efficient Integrated Neural Network System
The hardware design enabled the researchers to perform neuroinspired unsupervised learning on a type of network known as a spiking neural network, which would make these devices considerably useful in automation applications. In addition, the study showed that the hardware did not significantly reduce accuracy. The researchers also observed substantial energy savings: the device reduced energy consumption by 100 to 1000 times compared with similar memory technologies.
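Unsupervised learning in spiking neural networks is commonly driven by spike-timing-dependent plasticity (STDP), in which the relative timing of pre- and postsynaptic spikes determines whether a synapse is strengthened or weakened. The sketch below shows a textbook exponential STDP rule purely for illustration; it is an assumption, not the exact learning rule used in the study.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, lr=0.05, tau=20.0):
    """Textbook STDP weight update (illustrative, not the study's rule).
    If the presynaptic spike arrives before the postsynaptic spike
    (dt > 0), the synapse is potentiated; otherwise it is depressed.
    The change decays exponentially with the spike-time gap."""
    dt = t_post - t_pre  # spike-time difference in ms
    if dt > 0:
        return w + lr * np.exp(-dt / tau)  # pre before post: strengthen
    return w - lr * np.exp(dt / tau)       # post before pre: weaken

w0 = 0.5
print(stdp_update(w0, t_pre=10.0, t_post=15.0))  # potentiated, > 0.5
print(stdp_update(w0, t_pre=15.0, t_post=10.0))  # depressed, < 0.5
```

Because the rule depends only on locally observed spike times, no labeled data is needed, which is what makes this style of learning unsupervised and a natural fit for in-memory hardware.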
The authors of the study plan to partner with several other memory technology companies in the near future. They intend to develop an integrated neural network system that can perform tasks with far less computational power and in dramatically less time.