Neuromorphic technology

Neuromorphic machine learning technology is based on spiking neural networks (SNNs) and biologically plausible learning algorithms that rely on spike-based inter-neuron communication and synaptic plasticity. These mechanisms give neuromorphic solutions advantages in energy efficiency and performance, which are most pronounced when SNNs are combined with neuromorphic processors and neuromorphic sensors.

Spiking neural networks

SNNs represent the next (third) generation of artificial neural networks that more accurately replicate the operating principles of biological neurons and synapses. SNNs are central to advancing neuromorphic computing, bringing artificial intelligence systems closer to their biological counterparts.

The key differences between SNNs and traditional artificial neural networks are as follows:
Spikes instead of numbers

Neurons exchange discrete pulses (spikes) of unit amplitude rather than real numbers, reflecting how communication occurs in living organisms.
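As a concrete illustration of spike-based signaling, below is a minimal leaky integrate-and-fire (LIF) neuron, one of the most common neuron models in SNNs. All names and constants here are illustrative choices, not part of the KNP API:

```python
def lif_run(inputs, tau=0.9, threshold=1.0):
    """Integrate weighted input current each step; emit a unit-amplitude
    spike (1) when the membrane potential crosses the threshold, then
    reset the potential."""
    v = 0.0
    spikes = []
    for current in inputs:
        v = tau * v + current          # leaky integration of input
        if v >= threshold:
            spikes.append(1)           # discrete unit-amplitude spike
            v = 0.0                    # reset after spiking
        else:
            spikes.append(0)
    return spikes

print(lif_run([0.6, 0.6, 0.0, 0.9, 0.9]))  # -> [0, 1, 0, 0, 1]
```

The output is a binary spike train rather than a real-valued activation, which is the defining difference from a conventional artificial neuron.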

Asynchrony and event-driven operation

Spike generation and processing occur asynchronously rather than layer by layer. A neuron generates a spike in response to a stimulus (event). Furthermore, spike transmission from a presynaptic neuron to a postsynaptic neuron isn’t instantaneous and may involve synaptic delay.
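The event-driven scheme with synaptic delay can be sketched as a time-ordered event queue: each presynaptic spike at time t is scheduled for delivery at t + delay and processed in time order, not layer by layer. This is a simplified illustration, not the KNP scheduling mechanism:

```python
import heapq

def propagate(spike_times, delay):
    """Schedule each presynaptic spike for delayed delivery and pop the
    resulting events in time order, as an event-driven simulator would."""
    queue = []
    for t in spike_times:
        heapq.heappush(queue, t + delay)   # schedule delivery at t + delay
    arrivals = []
    while queue:
        arrivals.append(heapq.heappop(queue))  # process earliest event first
    return arrivals

print(propagate([3.0, 1.0, 2.5], delay=0.5))  # -> [1.5, 3.0, 3.5]
```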

Time as an informative factor

The asynchronous, event-driven communication between neurons means that the timing of spiking activity is itself a source of information. This naturally introduces time as an additional dimension in computations.

Local learning

Learning algorithms are a crucial aspect of SNNs. Unlike conventional artificial neural networks (ANNs), which use classic backpropagation, SNNs rely on spike-timing-dependent synaptic plasticity (STDP), where learning occurs through local and asynchronous weight adjustments.
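A pair-based STDP rule makes the locality concrete: the sign and magnitude of the weight change depend only on the timing difference between one presynaptic and one postsynaptic spike, with no global error signal. The constants below are illustrative, and this is a textbook formulation rather than the specific plasticity rule used in KNP:

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic one, depress when it follows; the effect decays
    exponentially with the timing difference."""
    dt = t_post - t_pre
    if dt > 0:                                # pre before post: potentiation
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:                              # post before pre: depression
        return -a_minus * math.exp(dt / tau)
    return 0.0

print(stdp_dw(10.0, 15.0))  # small positive weight change
print(stdp_dw(15.0, 10.0))  # small negative weight change
```

Because the update uses only the two spike times at a single synapse, every synapse can adjust its weight asynchronously and in parallel.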


Variable structure

SNNs can adapt their internal topology by growing or pruning connections between neurons.

The asynchronous, event-driven operating principle of SNNs makes them extremely energy-efficient and fast, which opens up broad opportunities for use in real-world, energy-autonomous AI systems that operate in real time.

Kaspersky
Neuromorphic Platform

Our research team has developed Kaspersky Neuromorphic Platform (KNP), an open-source neuromorphic machine learning platform.

We use the KNP toolkit to research effective STDP-based methods for training SNNs, explore new cognitive architectures, and build applied solutions based on them.

Examples of explored SNN architectures


Columnar/layered architecture

CoLaNET (Columnar Layered NETwork) [1] is a novel SNN architecture with a columnar/layered structure for solving classification problems. One feature of CoLaNET is the combination of similar neural network structures (columns) corresponding to different object classes, and functionally distinct neuron populations (layers) within the columns. Another distinguishing feature is a novel combination of anti-Hebbian and dopamine-modulated plasticity [2]. The plasticity rules are local, and don’t use backpropagation.

Figure: Example of a CoLaNET network structure with three columns and five layers in each column.

Structure of a column

The three lower layers of each column (learning neurons, WTA, REWGATE) contain several neuron triplets, each forming a microcolumn. Columns represent classes, and microcolumns represent the variability of objects within a class.


Neuro-semantic architecture

A fundamentally new cognitive architecture, the neuro-semantic network (NSN), has the following distinctive features:

An extended neuron model

An extended neuron model with additional input ordering as an alternative to positional encoding.

A local learning principle

A local learning principle with reinforcement learning capability instead of the global backpropagation-based method.

Attention mechanism

An alternative to the attention used in transformer architectures [3]. It is configured manually and trained dynamically on a data stream, rather than on historical data.

Neurogenesis and feature hierarchy

Neurogenesis creates space for new features in the network, rather than reusing space from existing features. The network has a hierarchical layer-by-layer structure: a feature at layer l can include up to 8 features (a network hyperparameter) from layer l-1. This enables exponential growth in complexity, allowing the network to represent even the most complex concepts (an alternative to LLM design).
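The exponential growth in complexity follows from simple arithmetic: if each feature at layer l groups up to k = 8 features from layer l-1, a concept at depth d can cover up to k^d primitive (layer-0) features. A back-of-the-envelope sketch (the function name is illustrative):

```python
def max_primitives(depth, k=8):
    """Upper bound on the number of layer-0 features a single feature
    at the given depth can aggregate, when each feature combines up to
    k features from the layer below."""
    return k ** depth

for d in range(1, 5):
    print(d, max_primitives(d))   # 8, 64, 512, 4096
```

Even a modest hierarchy depth therefore yields a very large representational reach per feature.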

Resource minimization principle

Neurogenesis must not lead to uncontrolled network growth. Minimizing the information representation within the hierarchical layered network serves both as a control mechanism and as a learning principle. At its core is an optimization problem: finding the most compact hierarchical partitioning into patterns.

Neuromorphic processors

Neuromorphic processors are inspired by the operating principles of biological neural networks and designed according to the near-memory computing concept.

In neuromorphic processors, the computing core is placed as close to the memory as manufacturing technology allows. As a result, unlike computing devices based on the classic von Neumann architecture, neuromorphic processors avoid excessive energy consumption caused by data transfer between memory and the computing core. Furthermore, the neuromorphic architecture naturally lends itself to the efficient implementation of asynchronous, event-driven sparse computations inherent in bio-inspired spiking neural networks (SNNs).

The combination of SNNs and neuromorphic processors enables AI problems to be solved with energy consumption orders of magnitude lower than similar solutions running on CPUs or GPUs. This unlocks a wide range of opportunities to bring AI capabilities directly to edge devices, in line with technological trends like on-chip AI and edge AI.

For this reason, more companies and research centers are investing in neuromorphic architecture research and in the development and production of neuromorphic processors, recognizing their strong business potential.

Kaspersky develops and advances the open-source software and hardware platform, Kaspersky Neuromorphic Platform (KNP), which is deeply integrated with the neuromorphic chip AltAI.

AltAI is a joint development by Motiv-Neuromorphic Technologies and Kaspersky.

Neuromorphic sensors

Neuromorphic sensors are the third component of neuromorphic AI. These are event-driven sensors that register changes in the observed process.

Neuromorphic cameras, also known as dynamic vision sensors (DVSs), are an example of neuromorphic sensors [4, 5, 6, 7]. Each pixel of a DVS camera operates independently. It generates a spike (event) in response to a local (corresponding to its position) change in brightness, such as that caused by an object’s movement. DVS cameras can generate hundreds of millions of events per second, enabling them to capture high-speed processes.
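A common way to work with a DVS event stream is to accumulate events into a 2D frame over a time window. The (x, y, timestamp, polarity) tuple layout below is a widely used convention for DVS data, not the format of any specific camera:

```python
def events_to_frame(events, width, height, t_start, t_end):
    """Accumulate DVS events into a signed event-count frame:
    ON events (polarity 1) add 1 at the pixel, OFF events subtract 1."""
    frame = [[0] * width for _ in range(height)]
    for x, y, t, polarity in events:
        if t_start <= t < t_end:                  # keep events in the window
            frame[y][x] += 1 if polarity else -1  # ON vs OFF events
    return frame

events = [(0, 0, 0.001, 1), (1, 0, 0.002, 0), (0, 0, 0.003, 1)]
print(events_to_frame(events, 2, 1, 0.0, 0.01))  # -> [[2, -1]]
```

Because only changing pixels emit events, such frames are typically very sparse, which matches the sparse, event-driven computation style of SNNs.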

In addition to high speed, DVS cameras have a wide dynamic range (over 120 dB). This makes them well-suited for real-world applications where lighting conditions can’t be controlled, such as in self-driving cars or mobile phones.

Below are examples of images captured by a DVS camera:

An LED operating at up to 1.2 kHz

An hourglass

A road scene during a snowfall

References


Contact

To discuss collaboration opportunities, please email us at neuro@kaspersky.com or use the contact form below.