Physics 18, 117
The memory of a story appears to have a tree-like structure, with abstract summaries branching out into more specific details.
Oh, those star-crossed lovers, Romeo and Juliet. Most of us can recount their story, even if we can’t recite any specific lines from the play. A new theory aims to explain our story-recalling ability by modeling the memory of a story as a hierarchical tree [1]. The portion of the tree closest to the root consists of abstractions or summaries, which branch out into finer and finer details. The researchers who developed this model show that it reproduces statistical trends seen in an earlier study of human subjects recalling stories.
People remember all kinds of things: phone numbers, grocery lists, historical dates. There are models in neuroscience that can explain this sort of “random recall.” But story memory is different. “When you recall a narrative, you never recall it verbatim,” says Misha Tsodyks from the Weizmann Institute of Science in Israel. Instead, you remember what the story is about. “You can give a one-sentence summary of Romeo and Juliet, but that sentence most likely doesn’t appear anywhere in the story,” Tsodyks says. This type of abstraction has made story memory more difficult to characterize than random recall.
Recently, Tsodyks and his colleagues tested the memories of 100 participants in an online study. Each subject read a short, first-person narrative and later wrote down what they recalled [2]. To analyze these written recollections, the researchers have now modeled the storage of a narrative in the brain as a tree-like structure. The “branches” provide a rough outline of the story on which the “leaves,” or individual memories, are arranged. When a person recalls a story, they recount these leaf-like memories as a sequence of sentences (one sentence for each leaf). But the sequence is not a grocery list; there’s a hierarchical structure—provided by the branches—that helps the person keep track of where they are in the story, Tsodyks explains.
Each person creates their own individual tree when committing a story to memory. To capture this diversity, Tsodyks and colleagues generated trees by randomly dividing a given story text into sections, then dividing those into subsections, and so on. The researchers limited both the number of levels and the number of divisions per level to four, reflecting how we can only concentrate on a few ideas at one time. The end result of this procedure is a set of “chunks” of text of varying lengths, where each chunk corresponds to a single memory, or leaf, in the tree. A short chunk implies that the corresponding memory is a specific detail, whereas a long chunk is a broad memory that, for example, summarizes several events from the story.
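For readers who want to see that chunking procedure spelled out, here is a minimal Python sketch (my own illustration, not the authors' code), assuming a story is represented as a list of sentences and that both the depth and the number of divisions per level are capped at four as described above; the function names are made up for the example.

```python
import random

MAX_LEVELS = 4      # cap on tree depth described in the article
MAX_BRANCHES = 4    # cap on divisions per level described in the article

def random_tree(sentences, level=1):
    """Recursively partition a list of sentences into random contiguous chunks."""
    # Stop splitting at the maximum depth or when only one sentence remains.
    if level == MAX_LEVELS or len(sentences) <= 1:
        return sentences  # a leaf: one memory covering this chunk of text

    # Split into 2-4 contiguous sections, never more than the sentences allow.
    n_parts = random.randint(2, min(MAX_BRANCHES, len(sentences)))
    cuts = sorted(random.sample(range(1, len(sentences)), n_parts - 1))
    bounds = [0] + cuts + [len(sentences)]
    sections = [sentences[a:b] for a, b in zip(bounds[:-1], bounds[1:])]

    # Recurse into each section to build the next level of the tree.
    return [random_tree(section, level + 1) for section in sections]

def leaves(tree):
    """Collect the leaf chunks of a tree in story order."""
    if tree and isinstance(tree[0], str):
        return [tree]                      # already a leaf (a list of sentences)
    return [leaf for branch in tree for leaf in leaves(branch)]
```

In this picture, a leaf covering a single sentence plays the role of a specific detail, while a leaf covering many sentences stands in for a broad, summary-like memory.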
The researchers generated a large number of trees and compiled several statistics. They found that the average length of a recalled story grows with the length of the original story but reaches a plateau for very long stories. The model also predicts the level of compression, which is the number of sentences in the original story that are represented by a typical leaf.
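To see where that plateau comes from, the following follow-on sketch (again illustrative, reusing random_tree and leaves from the block above) estimates the average recall length and compression for stories of various lengths. With depth and branching both capped at four, a tree can have at most 4^3 = 64 leaves, so the number of recalled sentences saturates no matter how long the story is.

```python
from statistics import mean

def recall_stats(story_length, n_trees=2000):
    """Estimate average recall length (one sentence per leaf) and compression."""
    sentences = [f"sentence {i}" for i in range(story_length)]   # placeholder text
    leaf_counts = [len(leaves(random_tree(sentences))) for _ in range(n_trees)]
    avg_recall = mean(leaf_counts)            # average recalled sentences per story
    compression = story_length / avg_recall   # original sentences per typical leaf
    return avg_recall, compression

# Recall length grows with story length but levels off, while compression keeps rising.
for n in (20, 100, 500):
    avg, comp = recall_stats(n)
    print(f"{n:>3}-sentence story: ~{avg:.1f} recalled sentences, compression ~{comp:.1f}")
```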
With this model in hand, the team went back to the written responses from the earlier study and analyzed them with two artificial intelligence (AI) models, GPT-4 and DeepSeek. The AI algorithms mapped each sentence in a subject’s recollection to a chunk of sentences in the original story. For sentences corresponding to specific story details, the two AI algorithms made the same mapping. But they disagreed over more abstract memories, like “Romeo loves Juliet,” which correspond to large chunks of the original story. However, the overall statistical trends were the same, suggesting that the tree model is capturing aspects of memory organization. To explore the tree model further, Tsodyks plans to do similar experiments with other types of recall, such as remembering a two-person dialogue.
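As a rough illustration of how such a comparison can be quantified (not the authors' pipeline, and with made-up numbers in place of real model output), one can simply count how often the two mappings point to the same chunk of the original story:

```python
def agreement(mapping_a, mapping_b):
    """Fraction of recalled sentences that two models assign to the same story chunk."""
    assert len(mapping_a) == len(mapping_b)
    return sum(a == b for a, b in zip(mapping_a, mapping_b)) / len(mapping_a)

# Illustrative chunk assignments for five recalled sentences (not real model output).
model_a = [0, 2, 2, 5, 7]   # chunk index chosen by one model for each recalled sentence
model_b = [0, 2, 3, 5, 7]   # chunk index chosen by the other model
print(agreement(model_a, model_b))  # -> 0.8
```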
Neuroscientist Jeremy Manning from Dartmouth College in New Hampshire says that hierarchical structures have been used before to explain story memory, but the tree model offers a new framework in which broader, more “central” memories occupy the branches nearest the root. Taken together, these types of models “show that not all ‘events’ in a narrative are equally important or memorable,” he says.
“I am extremely enthusiastic about this work,” says memory researcher Janice Chen from Johns Hopkins University in Maryland. She says that psychologists have studied human stories for over a century, but they have been limited by the difficulty of analyzing large numbers of subjective recollections. Tsodyks and colleagues have shown how AI tools can break through this barrier, Chen says. “I think this is the beginning of a new field of powerful computational research on narratives and memory.”
–Michael Schirber
Michael Schirber is a Corresponding Editor for Physics Magazine based in Lyon, France.
References
1. W. Zhong et al., “Random tree model of meaningful memory,” Phys. Rev. Lett. 134, 237402 (2025).
2. A. Georgiou et al., “Large-scale study of human memory for meaningful narratives,” Learn. Mem. 32, a054043 (2025).