Editorial, Volume 8, Issue 1
Nikola K. Kasabov, Emeritus Professor, Department of Computer Science, Auckland University of Technology, New Zealand
Correspondence: Nikola K. Kasabov, Emeritus Professor, Department of Computer Science, Auckland University of Technology, Auckland, New Zealand; Institute for Information and Communication Technologies, Bulgarian Academy of Sciences, Sofia, Bulgaria; Computer Engineering Department, Dalian University, China, Tel +6421488214
Received: April 24, 2024 | Published: May 8, 2024
Citation: Kasabov NK. Life-long learning and evolving associative memories in brain-inspired spiking neural networks. MOJ App Bio Biomech. 2024;8(1):56-57. DOI: 10.15406/mojabb.2024.08.00208
The paper argues that evolving associative memories (EAM), which are manifested in all biological systems and realised in the human brain through life-long learning (LLL), can also be implemented in brain-inspired computational architectures based on spiking neural networks (SNN). The paper points to the importance of the duality of the EAM and LLL concepts for future AI systems.
Evolving associative memory (EAM) is perhaps one of the most fundamental principles of evolution in nature and of the development of living organisms, the ultimate result being the human brain. Still, this principle remains little explored, both in the brain and in AI systems. EAMs are natural or artificial systems that incrementally and continuously associate and capture related items, objects and processes, and that can be recalled using only partial information. EAMs are created continuously and are manifested in all dynamical systems, from atoms, molecules and the Universe to neural networks and the brain (Figure 1).
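To make the notion concrete, here is a minimal Python sketch of an associative memory that learns patterns incrementally with a Hebbian rule and completes a partially masked cue. It illustrates the principle only, not the author's model; the class and parameter names are hypothetical.

```python
import numpy as np

class IncrementalAssociativeMemory:
    """Hebbian associative memory: patterns are learned one at a time
    and a stored pattern can be recalled from a partial (masked) cue."""

    def __init__(self, n_units: int):
        self.n = n_units
        self.W = np.zeros((n_units, n_units))  # symmetric weight matrix

    def learn(self, pattern: np.ndarray) -> None:
        """Incrementally add one bipolar (+1/-1) pattern via a Hebbian update."""
        p = pattern.reshape(-1, 1)
        self.W += (p @ p.T) / self.n
        np.fill_diagonal(self.W, 0.0)          # no self-connections

    def recall(self, cue: np.ndarray, steps: int = 10) -> np.ndarray:
        """Iteratively complete a partial cue (0 marks unknown units)."""
        s = cue.astype(float).copy()
        for _ in range(steps):
            s = np.sign(self.W @ s)
            s[s == 0] = 1.0                    # break ties deterministically
        return s

# Example: store two patterns, then recall from a half-masked cue.
rng = np.random.default_rng(0)
mem = IncrementalAssociativeMemory(100)
patterns = [rng.choice([-1.0, 1.0], size=100) for _ in range(2)]
for p in patterns:
    mem.learn(p)                               # life-long style: one item at a time
cue = patterns[0].copy()
cue[50:] = 0.0                                 # only partial information available
recovered = mem.recall(cue)
print(np.mean(recovered == patterns[0]))       # ~1.0: full pattern recovered
```

The property mirrored here is exactly the one stated above: the memory is built item by item, yet a whole stored pattern can be recalled from partial information.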
Figure 1 EAM is a basic information principle of all dynamical systems in nature, from atoms to the brain.
EAMs in biological systems and more specifically in the brain are created through life-long learning (LLL), where structures (e.g. clusters) of related items in time and space are created and modified continuously. On the other hand, LLL relies on adding new items to already existing structures based on commonality and similarity, so that LLL and EAM are dual principles of the same process. This duality involves different levels of molecular and neural functions in the brain, such as: neurogenesis; neuromodulation; episodic replay; metaplasticity; multisensory integration.1 LLL in the brain is the ultimate inspiration for LLL in artificial systems based on neural networks, and more specifically, on brain-inspired spiking neural network (SNN) architectures, where spatio-temporal connectionist structures are formed and modified continuously to form evolving spatio-temporal associative memories (ESTAM).2–5
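The structural side of this duality, i.e. adding new items to existing structures based on commonality and similarity, can be sketched as a simple evolving clustering routine. This is a minimal illustration in the spirit of evolving clustering methods, not a specific published algorithm; the function name and the radius threshold are assumptions.

```python
import numpy as np

def evolving_clustering(items, radius=1.0):
    """Life-long structure building: each new item is absorbed by the
    nearest existing cluster if it is similar enough, otherwise it
    seeds a new cluster. Structures keep evolving as data arrive."""
    centers, counts = [], []
    for x in items:
        if centers:
            d = [np.linalg.norm(x - c) for c in centers]
            i = int(np.argmin(d))
            if d[i] <= radius:                          # similar: update structure
                counts[i] += 1
                centers[i] += (x - centers[i]) / counts[i]  # running mean
                continue
        centers.append(np.array(x, dtype=float))        # novel: new structure
        counts.append(1)
    return centers

stream = [np.array(p) for p in [[0, 0], [0.2, 0.1], [5, 5], [5.1, 4.9], [0.1, 0.2]]]
print(evolving_clustering(stream, radius=1.0))          # two evolving clusters
```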
As an AI machine learning model, an ESTAM is trained on a full set of spatio-temporal variables, but it can be successfully recalled on only a subset of these variables, measured in different time intervals. In addition, an ESTAM model can be further incrementally evolved on a new set of variables measured at different time windows. In3 ESTAM are built using evolving spatio-temporal learning (ESTL) methods based on SNN, where existing spatial, temporal and other multimodal data are integrated to train the model. The model captures evolvable and explainable spatio/spectro-temporal patterns and can be further evolved on new data.
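The following schematic sketch shows this train-on-all-variables, recall-on-a-subset workflow. For brevity it replaces the spiking dynamics with a rate-based reservoir, and every class, function and parameter name is hypothetical rather than taken from NeuCube or the cited ESTL methods.

```python
import numpy as np

class SimpleReservoir:
    """Toy recurrent reservoir standing in for the SNN 'cube'
    (spiking neurons simplified to tanh units for brevity)."""
    def __init__(self, n_in, n_res=200, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.5, (n_res, n_in))
        W = rng.normal(0.0, 1.0, (n_res, n_res))
        self.W = 0.9 * W / np.max(np.abs(np.linalg.eigvals(W)))  # stable dynamics

    def features(self, series):
        """Map a (T, n_in) multivariate time series to a reservoir state."""
        x = np.zeros(self.W.shape[0])
        for frame in series:
            x = np.tanh(self.W @ x + self.W_in @ frame)
        return x

rng = np.random.default_rng(1)
n_vars, T = 6, 50
res = SimpleReservoir(n_in=n_vars)

# Train a linear readout on ALL spatio-temporal variables.
X = rng.normal(size=(100, T, n_vars))              # 100 training samples
y = (X[:, :, 0].mean(axis=1) > 0).astype(float)    # toy target from variable 0
F = np.stack([res.features(s) for s in X])
w = np.linalg.lstsq(F, y, rcond=None)[0]           # least-squares readout

# Recall with only a SUBSET of variables: mask the missing channels.
sample = rng.normal(size=(T, n_vars))
partial = sample.copy()
partial[:, 3:] = 0.0                               # variables 3..5 unavailable
print(res.features(partial) @ w)                   # prediction from partial input
```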
The idea of ESTAM using the brain-inspired SNN architecture NeuCube was suggested in4 and further developed in.3 A NeuCube model processes information represented as spikes, forming binary time sequences. It has a 3D structure that is initialised using a brain template for brain data applications, but in general it can be initialised in other ways, still accounting for the similarity of the input temporal variables.6 LLL in NeuCube is achieved in a connectionist way, where new connections are created and updated all the time, and they can be recalled/activated using only partial input information, based on the “synfire”7 and polychronisation8 principles. To achieve LLL in a brain-inspired SNN architecture, several methods can be used in concert, such as: integrated spike-time and error backpropagation learning;5,9 neuromodulatory synaptic connections;10 synaptic weight regulation;5 homeostasis;11 a Lyapunov energy function;12 and evolving classifiers, where output neurons are evolved and aggregated continuously from data.13 A NeuCube-based ESTAM learns through transfer learning, always evolving and reporting fuzzy spatio-temporal rules.14,15
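As a rough, hypothetical illustration of connections being created and updated all the time, the sketch below combines structural plasticity (a synapse is created when a pre/post spike pair is first observed) with a pair-based STDP update and simple weight clipping as regulation. This is not NeuCube code, and it omits the synfire and polychronisation machinery entirely.

```python
import numpy as np

class EvolvingConnections:
    """Sparse weight map that grows when two neurons fire closely in
    time and is adapted by pair-based STDP. Hypothetical sketch only."""

    def __init__(self, tau=20.0, a_plus=0.01, a_minus=0.012):
        self.w = {}                             # (pre, post) -> weight, made on demand
        self.tau, self.a_plus, self.a_minus = tau, a_plus, a_minus

    def observe(self, pre, post, dt):
        """dt = t_post - t_pre (ms) for one pre/post spike pair."""
        key = (pre, post)
        if key not in self.w:
            self.w[key] = 0.05                  # structural plasticity: new synapse
        if dt > 0:                              # pre before post: potentiate
            self.w[key] += self.a_plus * np.exp(-dt / self.tau)
        else:                                   # post before pre: depress
            self.w[key] -= self.a_minus * np.exp(dt / self.tau)
        self.w[key] = float(np.clip(self.w[key], 0.0, 1.0))  # weight regulation

conn = EvolvingConnections()
conn.observe("n1", "n2", dt=5.0)    # causal pair: connection created, strengthened
conn.observe("n1", "n2", dt=-3.0)   # anti-causal pair: connection weakened
print(conn.w)
```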
A major advantage of building EAM with SNN through LLL is that such models are continuously trainable on new multimodal data and can be recalled on smaller data sets and with missing modalities, allowing for efficient and early prediction of future events. This has already been demonstrated on several AI problems, such as: brain neuroimaging data classification;16 moving object recognition using audio-visual data;5 financial and economic data prediction;17 and others. LLL and EAM at a personal level are key concepts for the process of aging well and perhaps for reverse aging,18,19 and they are definitely major concepts to be achieved in future AI systems.
The work is supported by Auckland University of Technology and Knowledge Engineering Consulting Ltd (https://knowledgeengineering.ai). A NeuCube software development system that allows for building ESTAM is available in both Matlab and Python versions from: https://www.aut.ac.nz/neucube.
None.
The author declares that there is no conflict of interest.
©2024 Kasabov. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon the work non-commercially.