Our daily lives revolve around sharing experiences and memories with others. When different people recount the same events, how similar are their underlying neural representations? In this study, participants viewed a fifty-minute audio-visual movie, then verbally described the events while undergoing functional MRI. These descriptions were completely unguided and highly detailed, lasting for up to forty minutes. As each person spoke, event-specific spatial patterns were reinstated (movie-vs.-recall correlation) in default network, medial temporal, and high-level visual areas; moreover, individual event patterns were highly discriminable and similar between people during recollection (recall-vs.-recall similarity), suggesting the existence of spatially organized memory representations. In posterior medial cortex, medial prefrontal cortex, and angular gyrus, activity patterns during recall were more similar between people than to patterns elicited by the movie, indicating systematic reshaping of percept into memory across individuals. These results reveal striking similarity in how neural activity underlying real-life memories is organized and transformed in the brains of different people as they speak spontaneously about past events.
This dataset contains all the model output used to generate the figures and data reported in the article "Climate, soil organic layer, and nitrogen jointly drive forest development after fire in the North American boreal zone". The data were generated during spring 2015 using a modified version of the Ecosystem Demography model version 2, provided as a supplement accompanying the article, with computational resources supported by the PICSciE OIT High Performance Computing Center and Visualization Laboratory at Princeton University. The dataset includes a PDF Readme file that explains in detail how the data can be used; users are encouraged to read this file before using the data.