Modality-Dependent Memory Mechanisms in Cross-Modal Neuromorphic Computing

Effiong Blessing, Chiung-Yi Tseng, Somshubhra Roy, Junaid Rehman, Isaac Nkrumah
Saint Louis University; Luxmuse AI; North Carolina State University; Independent Researcher
Preprint, 2026

Abstract

Memory-augmented spiking neural networks (SNNs) promise energy-efficient neuromorphic computing, yet their generalization across sensory modalities remains unexplored. We present the first comprehensive cross-modal ablation study of memory mechanisms in SNNs, evaluating Hopfield networks, Hierarchical Gated Recurrent Networks (HGRNs), and supervised contrastive learning (SCL) on visual (N-MNIST) and auditory (SHD) neuromorphic datasets. Our systematic evaluation of five architectures reveals striking modality-dependent performance patterns: Hopfield networks achieve 97.68% accuracy on visual tasks but only 76.15% on auditory tasks (a 21.53-point gap), indicating severe modality-specific specialization, whereas SCL delivers more balanced cross-modal performance (96.72% visual, 82.16% audio, a 14.56-point gap). These findings establish that memory mechanisms confer task-specific benefits rather than universal ones. Joint multi-modal training with HGRN achieves 94.41% visual and 79.37% audio accuracy (88.78% average), matching parallel HGRN performance while requiring only a single unified deployment. Quantitative engram analysis confirms weak cross-modal alignment (0.038 similarity), validating our parallel architecture design. Our work provides the first empirical evidence for modality-specific memory optimization in neuromorphic systems, achieving a 603× energy-efficiency advantage over traditional artificial neural networks.
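The engram-alignment measurement mentioned above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual procedure: it assumes each modality's engrams are summarized as class-mean hidden-state vectors and that alignment is the mean cosine similarity between matching-class engrams across modalities (the function names and setup here are our own).

```python
import numpy as np

def class_mean_engrams(hidden_states, labels, num_classes):
    """Summarize a modality's memory traces as one mean vector per class.

    hidden_states: (num_samples, dim) array of hidden representations.
    labels: (num_samples,) integer class labels.
    Returns a (num_classes, dim) array of class-mean engram vectors.
    """
    return np.stack([hidden_states[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def cross_modal_alignment(visual_engrams, audio_engrams):
    """Mean cosine similarity between matching-class engrams of two modalities."""
    v = visual_engrams / np.linalg.norm(visual_engrams, axis=1, keepdims=True)
    a = audio_engrams / np.linalg.norm(audio_engrams, axis=1, keepdims=True)
    return float(np.mean(np.sum(v * a, axis=1)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    num_classes, dim = 10, 64
    # Independent random representations per modality: alignment lands near
    # zero, consistent in spirit with the weak (0.038) cross-modal similarity
    # reported in the abstract.
    vis = class_mean_engrams(rng.normal(size=(500, dim)),
                             rng.integers(0, num_classes, 500), num_classes)
    aud = class_mean_engrams(rng.normal(size=(500, dim)),
                             rng.integers(0, num_classes, 500), num_classes)
    print(cross_modal_alignment(vis, aud))
```

Under this reading, an alignment near zero supports keeping per-modality memory pathways separate (the parallel design), since shared engrams would offer little transferable structure.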

BibTeX

@article{Blessing2026ModalityDependent,
  title={Modality-Dependent Memory Mechanisms in Cross-Modal Neuromorphic Computing},
  author={Blessing, Effiong and Tseng, Chiung-Yi and Roy, Somshubhra and Rehman, Junaid and Nkrumah, Isaac},
  journal={Preprint},
  year={2026},
  url={}
}
