Large Language Models - Memory and Language
Can we give LLMs memories, and what can LLMs tell us about the structure of language?
Large Language Models (LLMs) have become a focal point of machine learning research for their capacity to generate surprisingly coherent text. They are trained on large corpora of text to predict the next word in a sequence. To do so effectively, their architectures contain an attention mechanism that lets them weight the preceding words differently, so that they use the right context to infer the next word in the sequence. The underlying mechanisms governing an LLM are relatively simple; however, at scale, we observe that these models have a remarkable capacity to coherently generate long sequences of text that relate contextually to the prompt they are given.
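To make the mechanism concrete, here is a minimal sketch of causal (masked) scaled dot-product attention in Python with NumPy. It is illustrative only: real LLMs use many attention heads, learned projections, positional information, and far larger dimensions, and the matrices `W_q`, `W_k`, `W_v` below are random toy placeholders.

```python
# Minimal sketch of causal scaled dot-product attention (illustrative only).
import numpy as np

def causal_attention(x, W_q, W_k, W_v):
    """x: (seq_len, d_model) token embeddings; returns attended values."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Mask future positions so each token attends only to preceding ones (and itself).
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores[mask] = -np.inf
    # Softmax over the unmasked positions gives the attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=(5, d))                       # five toy token embeddings
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
print(causal_attention(x, W_q, W_k, W_v).shape)   # (5, 8)
```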
It turns out that the task of predicting the next word in a sequence is sufficient to incentivise the model to learn basic forms of reasoning. For instance, it has been shown that some small language models learn to detect and utilise skip trigrams to inform their prediction of the next word in the sequence [1].
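As a rough illustration of what a skip trigram is, the sketch below counts "A … B -> C" patterns directly from a toy corpus: which next token C most often follows the current token B, given that token A appeared somewhere earlier. This only computes the statistic; the point in [1] is that attention heads in small transformers learn to implement such patterns, not that they count them explicitly.

```python
# Toy skip-trigram statistics: (earlier token A, current token B) -> next token C.
from collections import Counter, defaultdict

corpus = ("keep the dogs at bay . keep the noise at bay . "
          "hold the rope at arms length .").split()

skip_trigrams = defaultdict(Counter)
for i in range(1, len(corpus) - 1):
    current, nxt = corpus[i], corpus[i + 1]
    for earlier in corpus[:i]:
        skip_trigrams[(earlier, current)][nxt] += 1

# The earlier token changes the best guess for what follows "at".
print(skip_trigrams[("keep", "at")].most_common(1))   # [('bay', 3)]
print(skip_trigrams[("hold", "at")].most_common(1))   # [('arms', 1)]
```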
LLMs are therefore a potentially useful tool for studying concepts such as memory, which are intricately related to language. Similarly, we can use them to study the structure of language, as LLMs are designed to build statistical representations of the connections between words.
Memory
We are still unsure how memories are encoded in the human brain. There are so many peculiar observed phenomena regarding memory that it is difficult to construct a unifying theory. For instance, many people do not have strong recollections of memories from their infancy. It is apparent that the formation of memories depends on many different factors.
From a user experience perspective, LLMs can be made more useful by instilling them with memories, such that they can recall previous interactions to form more appropriate answers. As we are unsure how memories are stored in human brains, this task is not so straightforward. I do not think that simply allocating some computer memory to store previous interactions will give LLMs memory in a way that resembles humans or leads to realistic conversations. My scepticism comes from the following unanswered questions that emerge from such an implementation.
How long should we store memories for? The answer cannot be a fixed rule, say that the potency of memories decays at some pre-determined rate, as I can recall memories from my childhood yet forget what I did last week. It is therefore going to be a function of many different factors, which will be hard to model precisely.
When space is limited, which memories do we remove to make room for new ones?
For any given interaction, an LLM only has access to the text. However, the user interacting with the LLM has an experience that goes beyond just the text. Therefore, we need to understand whether storing only the text of the interaction is sufficient for it to be useful in future interactions. Similarly, we need to decide how much of the textual information from an interaction to store. Are we required to store every word of the interaction, or is recording a few words sufficient?
Human memories are not perfect; they are often shaped by the individual’s prior beliefs and experiences. This makes the recollection of memories a highly subjective and personal experience. Thus, when we converse with someone who recalls a memory, we are engaged by the nuances in their recollection. What personality should we provide the LLM such that, when it forms and recalls its memories, it does so in a personable and engaging manner?
Human memories are not stored in their entirety due to capacity constraints. We attend to different components of an interaction and connect them to previous memories. What parts of interactions should we make the LLM attend to, and how do we incorporate a given interaction into existing memories?
Human memories come with triggers that allow us to recall them. We are not able to search our brains for a particular memory; indeed, sometimes we seem to simply stumble across memories without really being sure why we recalled them. What triggers should we apply to LLM memories so that they are recalled in a realistic manner?
Currently, for many applications of LLMs, the context length is sufficient to hold the context needed for an appropriate interaction between the LLM and the user. However, as we aim for these systems to become more personalised to the individual, the context length may not be sufficient. Moreover, as this technology should eventually be accessible to many through handheld devices, we need to be able to store memories efficiently.
[2] constructs a memory module that augments an existing LLM with memory storage and memory retrieval capabilities. The paper emphasises the detail and specificity of the memories stored in its module. It employs the Ebbinghaus forgetting curve to vary the strength of memories, reflecting the fact that we slowly forget memories and that memories we have some recollection of are easier to relearn.

I think this implementation is insufficient for endowing LLMs with the capability to have memories the way humans do. Indeed, I do not recall the timestamps and specific details of interactions I have with others, which is what MemoryBank claims to do. One may argue that since computers have the capacity to store such information, they should do so. However, I would argue that the lack of specificity in my memories is a feature, not a bug. It means that I have to store an interaction as an abstract representation and reconstruct the details upon recall. In storing the intricate details, I would be overfitting to the situation and would not be able to extract the general principles of the interaction. Humans have evolved an intricate attention pattern such that we only consciously register around 20 of the 11 million or so bits of information we are exposed to. If we tried to attend to all 11 million bits, our lives would be chaotic and we would be rife with anxiety trying to process all that information.
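To make the forgetting mechanism concrete, below is a minimal sketch of an Ebbinghaus-style memory item: retention decays as exp(-t/S), and each recall increases the strength S so the memory fades more slowly next time. The class, update rule, and parameter values are my own illustrative assumptions, not the exact scheme used in MemoryBank [2].

```python
# Minimal sketch of Ebbinghaus-style forgetting: R = exp(-t / S).
import math

class MemoryItem:
    def __init__(self, text, strength=1.0):
        self.text = text
        self.strength = strength   # S: larger means slower forgetting
        self.age = 0.0             # t: time (e.g. days) since last recall

    def retention(self):
        return math.exp(-self.age / self.strength)

    def recall(self):
        # Relearning is easier than learning: boost strength and reset the clock.
        self.strength += 1.0
        self.age = 0.0
        return self.text

memory = MemoryItem("user prefers concise answers")
memory.age = 3.0
print(round(memory.retention(), 3))  # 0.05: low retention after 3 days
memory.recall()
memory.age = 3.0
print(round(memory.retention(), 3))  # 0.223: higher retention after the same gap
```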
In contrast to MemoryBank, I think that memories should be stored implicitly in the weights of the model, just as we store our memories in our neurons. After all, it seems as though humans form memories more strongly in response to learning: if something surprises you, you store that event more vividly in your memory. Therefore, if we instead allow the model to continually learn and update its weights, then we may be able to mimic the memory-forming structure of humans in the weights of the model.
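Below is a toy sketch of what this could look like; it is my own illustration under strong simplifying assumptions (a tiny stand-in model, a plain gradient step, and a crude surprise measure), not an established method from the references. A new interaction is written into the weights by a fine-tuning step whose size scales with how surprising, i.e. high-loss, the text was.

```python
# Toy sketch: store an interaction implicitly in the weights via a surprise-weighted update.
import torch
import torch.nn as nn

vocab_size, d_model = 100, 32

# A deliberately tiny stand-in for an LLM: embed the current token, predict the next one.
model = nn.Sequential(nn.Embedding(vocab_size, d_model),
                      nn.Linear(d_model, vocab_size))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

def memorise(token_ids):
    """One continual-learning step that writes a new interaction into the weights."""
    inputs, targets = token_ids[:-1], token_ids[1:]
    logits = model(inputs)
    loss = nn.functional.cross_entropy(logits, targets)
    surprise = loss.detach().item()     # higher loss = more surprising text
    (surprise * loss).backward()        # surprising interactions get larger updates
    optimizer.step()
    optimizer.zero_grad()
    return surprise

interaction = torch.randint(0, vocab_size, (20,))   # stand-in for tokenised text
print(memorise(interaction))
```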
Language
LLMs may be an interesting object to investigate from the perspective of language theory. After all, they are trained on large corpora of text to develop connections that then lead to text generation complying with syntactic rules. By interpreting their inner computations, we may be able to arrive at a theory of language, not of human psychology. We would only be able to understand the structure of our language; we could not make any inference about the human condition from this. We would therefore be studying language as something external to the human condition, much like mathematics is thought to be a fundamental aspect of nature.
Through such a study, we may be able to arrive at canonical interpretations of written texts. Although the canonical interpretation may not be the one intended by the author, it would be the one dictated by the structure of language. As the form of language changes over time, the canonical interpretation of a piece of text may therefore change over time as well. The advantage of LLM systems is that we can train them on specific corpora to recover the interpretation of a piece of text offered by the structure of language in a particular time period. An LLM is not influenced by texts it is not trained on; hence, we can observe how the meaning of language changes over time.
Summary
LLMs have a remarkable capacity to generate coherent text that conforms to grammatical conventions. As we are now developing sophisticated methods for interpreting these models, we can use them to help investigate phenomena such as memory. Moreover, their scale means that they can offer global insights into the structure of language. However, we ought to be careful not to over-anthropomorphise these systems, as their architecture and development do not resemble the human brain or human learning strategies. Indeed, Moravec's paradox is evidence that we cannot immediately draw links between humans and LLMs.
References
[1] Elhage, N., Nanda, N., Olsson, C., et al. (2021). A Mathematical Framework for Transformer Circuits. Transformer Circuits Thread. https://transformer-circuits.pub/2021/framework/index.html
[2] Zhong, W., Guo, L., Gao, Q., Ye, H., & Wang, Y. (2024). MemoryBank: Enhancing Large Language Models with Long-Term Memory. _Proceedings of the AAAI Conference on Artificial Intelligence_, _38_(17), 19724-19731. https://doi.org/10.1609/aaai.v38i17.29946