March 23, 2023 by Gabriel Bassett

Spiking Neural Networks

Spiking Neural Networks (or SNNs) are a unique type of neural network. A traditional Deep Neural Network (DNN) used in machine learning consists of nodes that compute a numeric activation (usually between zero and one); when a node is turned on, all of its connected edges pass that activation, scaled by their associated weights, as input to the next neuron.
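
To make that concrete, here is a minimal sketch of a traditional dense layer's forward pass (plain NumPy; the array sizes and names are illustrative, not from any particular framework):

```python
import numpy as np

# A tiny feed-forward layer: each neuron emits a continuous
# activation (here squashed into the range 0-1 by a sigmoid), and
# each edge scales that activation by its learned weight.
rng = np.random.default_rng(0)
inputs = rng.random(4)             # activations from the previous layer
weights = rng.normal(size=(4, 3))  # one weight per edge (4 inputs -> 3 neurons)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Every edge participates in every forward pass, even when an
# activation is near zero -- the multiply-accumulate still happens.
activations = sigmoid(inputs @ weights)
print(activations)  # three continuous values, e.g. [0.62 0.31 0.48]
```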

SNNs are different in that each edge can only transmit the value 1; if an edge is not transmitting, it transmits nothing. This ‘1’ is called a spike. To make up for the fact that an edge cannot transmit graded values, SNNs tend to focus on the rate of firing. For example, seven spikes in a second may be the equivalent of a traditional neural network transmitting a 0.7.
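
As a rough illustration of rate coding, here is a minimal sketch in plain NumPy (`rate_encode` is a hypothetical helper written for this post, not a function from any SNN library):

```python
import numpy as np

def rate_encode(value, timesteps=10, rng=None):
    """Encode a value in [0, 1] as a binary spike train.

    At each timestep the neuron fires (emits a 1) with probability
    equal to `value`, so the average firing rate approximates the
    value -- e.g. 0.7 yields roughly 7 spikes over 10 timesteps.
    """
    rng = rng or np.random.default_rng()
    return (rng.random(timesteps) < value).astype(int)

spikes = rate_encode(0.7, timesteps=10, rng=np.random.default_rng(42))
print(spikes)         # a train of 0s and 1s, e.g. [1 1 0 1 1 0 1 1 1 0]
print(spikes.mean())  # firing rate, approximately 0.7
```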

The benefit of SNNs is that because they only use power when a spike is transmitted, they use far less power than a traditional DNN. At any given time in a neural network, only a few edges are carrying a meaningful value. In traditional neural networks, transmitting even a value of 0 still takes power; in spiking neural networks, it does not. The downside of SNNs is that they are much harder to train than traditional DNNs. This limits their use and prevents them from doing the things multi-billion parameter DNNs such as ChatGPT can.
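
The sparsity is easier to see with a toy leaky integrate-and-fire neuron, a common SNN neuron model. This is an illustrative sketch of the mechanism, not the implementation any particular system uses:

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Simulate a leaky integrate-and-fire neuron over time.

    The membrane potential accumulates input and decays each step;
    a spike (1) is emitted only when the threshold is crossed.
    On all other timesteps the output is 0 -- nothing is sent,
    which is where the power savings come from.
    """
    potential, spikes = 0.0, []
    for current in input_current:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.3, 0.4, 0.5, 0.1, 0.2, 0.6, 0.7]))
# [0, 0, 1, 0, 0, 0, 1] -- mostly silence, only two spikes
```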

This power benefit is even greater in hardware. Just as DNNs have dedicated accelerators (Tensor Processing Units, Neural Processing Units, etc.), SNNs have their own dedicated hardware, generally referred to as neuromorphic hardware. Neuromorphic hardware mirrors the way real brains work, sending one-bit signals when in use and nothing otherwise. This allows it to be extremely low power, appropriate for machine learning tasks in miniature and portable systems.

One area I expect to show particular promise for SNNs is embeddings. I refer to embeddings in my Large Language Models blog. They are a way of describing entities (usually words) by how similar they are to other entities. While traditional DNNs create a static arrangement of nodes and edges to ’train’ the embedding on, spiking neural networks inherently encode a distance in the strength of the connection between two neurons. Using a method for evolving networks such as EONs, I suspect an SNN could be evolved that efficiently represents the embeddings, as the network itself would intrinsically follow the manifold of the data.
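
To be clear about what an embedding is doing here, a minimal sketch: entities become vectors, and similarity is a distance between those vectors (the three-dimensional vectors below are invented purely for illustration):

```python
import numpy as np

# Toy 3-dimensional embeddings; the values are made up to show
# that similar entities sit close together in the space.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high, ~0.99
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low, ~0.30
```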

For security, instead of embedding things (words, computers, etc.), the network could embed events (borrowing from the concepts in our Blast Radius post), or possibly even both, because events are connected through time and the entities they involve. The resulting space would then embed how a computer system behaves, potentially covering both IT assets and people, and allow security and IT professionals to model, monitor, and intervene in the system, all potentially in a lightweight computing package.

In conclusion, SNNs have incredible potential to produce machine learning models that are more efficient and more biologically plausible, bringing machine learning from the server room to everyone.
