Researchers from the Center for Cognition and Sociality and the Data Science Group at the Institute for Basic Science (IBS) have discovered an unexpected similarity between the hippocampus of the human brain and the memory processing mechanisms of artificial intelligence (AI) models.
The product of a groundbreaking collaboration between the two groups, the discovery sheds new light on the memory consolidation process in AI systems, offering a fresh perspective on how these systems transform short-term memories into long-term ones.
The Mystery of AI Memory
Understanding and simulating human-like intelligence has become a central goal of scientific study, driven by the pursuit of Artificial General Intelligence (AGI) at powerful organizations such as OpenAI and Google DeepMind. At the heart of these technical advances is the Transformer model, whose underlying mechanisms are now being closely examined to reveal how such potent AI systems work.
Decoding the Learning and Remembering Process
Unlocking the potential of AI systems hinges on comprehending how they learn and retain information. The interdisciplinary research team decided to apply principles of human brain learning, specifically focusing on memory consolidation through the NMDA receptor in the hippocampus, to AI models.
NMDA Receptor: A Gateway to Memory
Often likened to a smart door in the brain, the NMDA receptor plays a pivotal role in learning and memory formation. A magnesium ion acts as its gatekeeper: it blocks the channel, and ions can flow into the nerve cell only when the magnesium steps aside. This voltage-dependent gating underlies the brain’s ability to create and store memories, with the magnesium ion determining exactly when the door opens.
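In quantitative terms, this magnesium block is commonly modeled with a simple sigmoidal expression. The formulation below follows the classic Jahr and Stevens description from the neuroscience literature; the constants are representative values from that work, not figures reported in the IBS study:

$$B(V) = \frac{1}{1 + \frac{[\mathrm{Mg}^{2+}]}{3.57\,\mathrm{mM}}\, e^{-0.062\,V}}$$

Here B(V) is the fraction of unblocked receptors at membrane voltage V (in millivolts) and [Mg²⁺] is the extracellular magnesium concentration: raising the magnesium level shifts the curve so that a stronger depolarization is needed before the gate opens.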
AI’s Imitation Game: The Transformer’s Gatekeeping Process
In a surprising revelation, the research team found that the Transformer model appears to employ a gatekeeping process analogous to the brain’s NMDA receptor. This led to a deeper investigation into whether the Transformer’s memory consolidation could be influenced by a mechanism similar to the gating process of the NMDA receptor.
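To make the analogy concrete, here is a minimal sketch, in Python with NumPy, of a Transformer feed-forward sublayer built around a sigmoid-gated activation. The function names, the alpha and beta parameters, and the exact form of the nonlinearity are illustrative assumptions for this article, not the authors’ published code:

```python
import numpy as np

def nmda_like_activation(x, alpha=1.0, beta=1.0):
    # Sigmoid-gated activation: the input passes through only to the
    # degree that the gate term is open, loosely mirroring how the
    # magnesium block must step aside before current flows through the
    # NMDA receptor. With alpha = beta = 1 this reduces to the familiar
    # SiLU/Swish activation, x * sigmoid(x). alpha and beta here are
    # illustrative parameters, not values from the study.
    return x / (1.0 + alpha * np.exp(-beta * x))

def feed_forward(x, w1, b1, w2, b2, alpha=1.0):
    # A standard Transformer feed-forward sublayer, with the gated
    # activation swapped in where ReLU or GELU would normally sit.
    hidden = nmda_like_activation(x @ w1 + b1, alpha=alpha)
    return hidden @ w2 + b2
```

When the gate is mostly closed, weak inputs are suppressed; once an input is strong enough to open the gate, it passes through almost unchanged, the same qualitative behavior as the receptor’s magnesium block.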
Tweaking Parameters for Enhanced Memory
Drawing parallels between the brain’s memory function and the Transformer model, the researchers experimented with altering parameters to mimic the NMDA receptor’s gating action. Just as changes in magnesium levels affect memory strength in the human brain, tweaking the Transformer’s parameters enhanced memory within the AI model, as the sketch below illustrates.
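Reusing the sketch above, this tweaking can be pictured as a sweep over the magnesium-like parameter: raising or lowering alpha changes how readily the gate opens, just as varying the magnesium concentration does in the biological receptor. The settings below are arbitrary and purely illustrative:

```python
# Sweep the magnesium-like parameter and observe how the gate changes:
# larger alpha suppresses weak inputs more strongly before letting
# strong inputs through. Values are illustrative, not from the paper.
xs = np.linspace(-4.0, 4.0, 9)
for alpha in (0.5, 1.0, 4.0):
    out = nmda_like_activation(xs, alpha=alpha)
    print(f"alpha={alpha}: {np.round(out, 2)}")
```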
A Leap Forward in AI and Neuroscience
C. Justin LEE, a neuroscientist and director at the institute, emphasized the significance of this research, stating, “This makes a crucial step in advancing AI and neuroscience. It allows us to delve deeper into the brain’s operating principles and develop more advanced AI systems based on these insights.”
Towards Low-Cost, High-Performance AI Systems
CHA Meeyoung, a data scientist on the team who is also affiliated with KAIST, highlighted the potential for creating low-cost, high-performance AI systems that learn and remember information much as human brains do.
Convergence of Minds: AI’s Insights into the Brain
What sets this study apart is its incorporation of brain-inspired nonlinearity into an AI construct, a substantial advance in simulating human-like memory consolidation. This convergence of human cognitive mechanisms and AI design not only promises more efficient AI systems but also offers valuable insights into the workings of the brain through AI models.
(Source: Institute for Basic Science)