Today’s computers are limited by their internal wiring: a transistor connects only to the components on either side of its gate, whereas a neuron connects to thousands of other neurons simultaneously. And rather than the binary on–off switching of today’s digital systems, neurons respond to both the quantity and the duration of incoming signals, giving them vastly more capacity to process information at far lower energy cost. “This will both massively reduce IT overhead and give us an important tool for addressing climate change through reduction of energy use,” says Scott Likens, Global AI and Innovation Technology Leader, PwC United States.
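To make that difference concrete, here is a minimal sketch of the leaky integrate-and-fire model that spiking, brain-inspired systems commonly build on. The function name and parameter values are illustrative, not drawn from any particular chip: the neuron accumulates incoming signals, leaks charge back toward rest over time, and fires only when enough input arrives close enough together.

```python
def simulate_lif(input_spikes, weight=0.4, leak=0.9, threshold=1.0):
    """Return an output spike train (1 = fire) for a binary input train."""
    potential = 0.0
    output = []
    for spike in input_spikes:
        # Integrate the weighted input while the potential leaks toward rest.
        potential = leak * potential + weight * spike
        if potential >= threshold:
            output.append(1)    # enough input, close enough together: fire
            potential = 0.0     # reset after firing
        else:
            output.append(0)
    return output

# The same four input spikes, spread out in time versus bunched together:
sparse = [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
burst  = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print("spread-out input fires", sum(simulate_lif(sparse)), "time(s)")
print("bunched input fires   ", sum(simulate_lif(burst)), "time(s)")
```

Run as written, the spread-out input never crosses the threshold while the burst fires once: both how much input arrives and when it arrives carry information, which a transistor’s instantaneous switch cannot capture.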
There are hurdles: neuroscientists are still struggling to understand and model even simple animal brains, and practical memristors remain more theory than reality. Applying neuromorphic principles in software and hardware has nonetheless led to impressive advances, such as UC Santa Cruz’s SpikeGPT, a spiking neural network language model that uses roughly one twenty-second of the energy of comparable systems. Neuromorphic systems can also improve themselves through evolutionary search, much as living creatures adapt over generations, with the potential for dramatic positive feedback loops in many fields of R&D.
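What “evolution” means computationally can be shown with a very small sketch: a (1 + λ) evolution strategy that keeps one parent configuration, generates mutated offspring, and retains whichever scores best each generation. The three parameters and their target values below are hypothetical stand-ins for spiking-neuron settings such as leak, input weight, and firing threshold.

```python
import random

TARGET = [0.9, 0.4, 1.0]  # assumed "ideal" parameters the search should recover

def fitness(params):
    # Higher is better: negative squared distance from the target.
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

def mutate(params, scale=0.05):
    # Small random perturbation of every parameter.
    return [p + random.gauss(0, scale) for p in params]

parent = [random.random() for _ in TARGET]
for _ in range(200):
    offspring = [mutate(parent) for _ in range(8)]
    parent = max(offspring + [parent], key=fitness)  # survival of the fittest

print("evolved parameters:", [round(p, 2) for p in parent])
```

In real neuromorphic research the fitness function would measure a network’s behavior rather than distance to a known answer, but the loop of mutation and selection is the same.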
“If a scalable breakthrough can be made, neuromorphic processors will lead to a dizzying number of new applications that we can only begin to speculate about, and at the same time truly integrate AI into our lives,” says Likens. “Something the size of your phone will be able to run tasks that currently require a supercomputer and enable new forms of AI that could see today’s generative AI models seem positively slow and limited in comparison.”