It is well known that formation of new episodic memories depends on the hippocampus, but in real-life settings (e.g., conversation), hippocampal amnesics can utilize information from several minutes earlier. What neural systems outside the hippocampus might support this minutes-long retention? In this study, subjects viewed an audiovisual movie continuously for 25 min; another group viewed the movie in 2 parts separated by a 1-day delay. Understanding Part 2 depended on retrieving information from Part 1, and thus the hippocampus was required in the day-delay condition. But is the hippocampus equally recruited to access the same information from minutes earlier? We show that accessing memories from a few minutes prior elicited less interaction between the hippocampus and default mode network (DMN) cortical regions than accessing day-old memories of identical events, suggesting that recent information was available with less reliance on hippocampal retrieval. Moreover, the 2 groups evinced reliable but distinct DMN activity timecourses, reflecting differences in the information carried in these regions when Part 1 was recent versus distant. The timecourses converged after 4 min, suggesting a time frame over which the continuous-viewing group may have relied less on hippocampal retrieval. We propose that cortical default mode regions can intrinsically retain real-life episodic information for several minutes.
Small changes in word choice can lead to dramatically different interpretations of narratives. How does the brain accumulate and integrate such local changes to construct unique neural representations for different stories? In this study we created two distinct narratives by changing only a few words in each sentence (e.g., “he” to “she” or “sobbing” to “laughing”) while preserving the grammatical structure across stories. We then measured changes in neural responses between the two stories. We found that the differences in neural responses between the two stories gradually increased along the hierarchy of processing timescales. For areas with short integration windows, such as early auditory cortex, the differences in neural responses between the two stories were relatively small. In contrast, in areas with the longest integration windows at the top of the hierarchy, such as the precuneus, temporoparietal junction, and medial frontal cortices, there were large differences in neural responses between stories. Furthermore, this gradual increase in neural difference between the stories was highly correlated with an area’s ability to integrate information over time. Amplification of neural differences did not occur when changes in words did not alter the interpretation of the story (e.g., “sobbing” to “crying”). Our results demonstrate how subtle differences in words are gradually accumulated and amplified along the cortical hierarchy as the brain constructs a narrative over time.
Wilterson, Andrew; Nastase, Samuel; Bio, Branden; Guterstam, Arvid; Graziano, Michael
The attention schema theory (AST) posits a specific relationship between subjective awareness and attention, in which awareness is the control model that the brain uses to aid in the endogenous control of attention. We proposed that the right temporoparietal junction (TPJ) is involved in that interaction between awareness and attention. In previous experiments, we developed a behavioral paradigm in human subjects to manipulate awareness and attention. The paradigm involved a visual cue that could be used to guide a shift of attention to a target stimulus. In task 1, subjects were aware of the visual cue, and their endogenous control mechanism was able to use the cue to help control attention. In task 2, subjects were unaware of the visual cue, and their endogenous control mechanism was no longer able to use it to control attention, even though the cue still had a measurable effect on other aspects of behavior. Here we tested the two tasks while scanning brain activity in human volunteers. We predicted that the right TPJ would be active in relation to the cue in task 1, but not in task 2. This prediction was confirmed. The right TPJ was active in relation to the cue in task 1; it was not measurably active in task 2; the difference was significant. In our interpretation, the right TPJ is involved in a complex interaction in which awareness aids in the control of attention.
Buck, Cara L.; Cohen, Jonathan D.; Field, Brent; Kahneman, Daniel; McClure, Samuel M.; Nystrom, Leigh E.
Studies of subjective well-being have conventionally relied upon self-report, which directs subjects’ attention to their emotional experiences. This method presumes that attention itself does not influence emotional processes; if it does, self-report could yield biased samples of experience. We tested whether attention influences experienced utility (the moment-by-moment experience of pleasure) by using functional magnetic resonance imaging (fMRI) to measure the activity of brain systems thought to represent hedonic value while manipulating attentional load. Subjects received appetitive or aversive solutions orally while alternately executing a low or high attentional load task. Brain regions associated with hedonic processing, including the ventral striatum, showed a response to both juice and quinine. This response decreased during the high-load task relative to the low-load task. Thus, attentional allocation may influence experienced utility by modulating (either directly or indirectly) the activity of brain mechanisms thought to represent hedonic value.
Mondal, Shanka Subhra; Webb, Taylor; Cohen, Jonathan
A dataset of Raven’s Progressive Matrices (RPM)-like problems using realistically rendered 3D shapes, based on source code from CLEVR, a popular visual-question-answering dataset (Johnson, J., Hariharan, B., Van Der Maaten, L., Fei-Fei, L., Lawrence Zitnick, C., & Girshick, R. (2017). CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2901-2910)).
Antony, James W.; Cheng, Larry Y.; Brooks, Paula P.; Paller, Ken A.; Norman, Kenneth A.
Competition between memories can cause weakening of those memories. Here we investigated memory competition during sleep in human participants by presenting auditory cues that had been linked to two distinct picture-location pairs during wake. We manipulated competition during learning by requiring participants to rehearse picture-location pairs associated with the same sound either competitively (choosing to rehearse one over the other, leading to greater competition) or separately; we hypothesized that greater competition during learning would lead to greater competition when memories were cued during sleep. With separate-pair learning, we found that cueing benefited spatial retention. With competitive-pair learning, no benefit of cueing was observed on retention, but cueing impaired retention of well-learned pairs (where we expected strong competition). During sleep, post-cue beta power (16–30 Hz) indexed competition and predicted forgetting, whereas sigma power (11–16 Hz) predicted subsequent retention. Taken together, these findings show that competition between memories during learning can modulate how they are consolidated during sleep.
Pacheco, Diego A; Thiberge, Stephan; Pnevmatikakis, Eftychios; Murthy, Mala
Sensory pathways are typically studied starting at receptor neurons and following postsynaptic neurons into the brain. However, this leads to a bias in analysis of activity towards the earliest layers of processing. Here, we present new methods for volumetric neural imaging with precise across-brain registration, to characterize auditory activity throughout the entire central brain of Drosophila and make comparisons across trials, individuals, and sexes. We discover that auditory activity is present in most central brain regions and in neurons responsive to other modalities. Auditory responses are temporally diverse, but the majority of activity is tuned to courtship song features. Auditory responses are stereotyped across trials and animals in early mechanosensory regions, becoming more variable at higher layers of the putative pathway, and this variability is largely independent of spontaneous movements. This study highlights the power of using an unbiased, brain-wide approach for mapping the functional organization of sensory activity.
Does the default mode network (DMN) reconfigure to encode information about the changing environment? This question has proven difficult, because patterns of functional connectivity reflect a mixture of stimulus-induced neural processes, intrinsic neural processes and non-neuronal noise. Here we introduce inter-subject functional correlation (ISFC), which isolates stimulus-dependent inter-regional correlations between brains exposed to the same stimulus. During fMRI, we had subjects listen to a real-life auditory narrative and to temporally scrambled versions of the narrative. We used ISFC to isolate correlation patterns within the DMN that were locked to the processing of each narrative segment and specific to its meaning within the narrative context. The momentary configurations of DMN ISFC were highly replicable across groups. Moreover, DMN coupling strength predicted memory of narrative segments. Thus, ISFC opens new avenues for linking brain network dynamics to stimulus features and behaviour.
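As a rough illustrative sketch (not the authors' implementation; the array layout is an assumption), ISFC can be computed by correlating each region's timecourse in a left-out subject with every region's timecourse averaged across the remaining subjects, so that intrinsic within-brain fluctuations and non-neuronal noise drop out:

```python
import numpy as np

def isfc(data):
    """Leave-one-out inter-subject functional correlation (sketch).

    data: array of shape (n_subjects, n_regions, n_timepoints).
    Returns an (n_regions, n_regions) ISFC matrix averaged over subjects.
    """
    n_subj, n_reg, n_time = data.shape
    mats = []
    for s in range(n_subj):
        left_out = data[s]                                  # (n_regions, n_time)
        others = data[np.arange(n_subj) != s].mean(axis=0)  # other subjects' mean
        # Z-score each region's timecourse, then correlate across brains:
        # only stimulus-locked signal is shared between subjects.
        z_a = (left_out - left_out.mean(1, keepdims=True)) / left_out.std(1, keepdims=True)
        z_b = (others - others.mean(1, keepdims=True)) / others.std(1, keepdims=True)
        mats.append(z_a @ z_b.T / n_time)
    return np.mean(mats, axis=0)
```

In this sketch, the diagonal holds inter-subject correlations of homologous regions, and off-diagonal entries hold the stimulus-locked inter-regional couplings that the paper analyzes within the DMN.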
Pereira, Talmo D.; Aldarondo, Diego E.; Willmore, Lindsay; Kislin, Mikhail; Wang, Samuel S.-H.; Murthy, Mala; Shaevitz, Joshua W.
Recent work quantifying postural dynamics has attempted to define the repertoire of behaviors performed by an animal. However, a major drawback to these techniques has been their reliance on dimensionality reduction of images which destroys information about which parts of the body are used in each behavior. To address this issue, we introduce a deep learning-based method for pose estimation, LEAP (LEAP Estimates Animal Pose). LEAP automatically predicts the positions of animal body parts using a deep convolutional neural network with as little as 10 frames of labeled data for training. This framework consists of a graphical interface for interactive labeling of body parts and software for training the network and fast prediction on new data (1 hr to train, 185 Hz predictions). We validate LEAP using videos of freely behaving fruit flies (Drosophila melanogaster) and track 32 distinct points on the body to fully describe the pose of the head, body, wings, and legs with an error rate of <3% of the animal's body length. We recapitulate a number of reported findings on insect gait dynamics and show LEAP's applicability as the first step in unsupervised behavioral classification. Finally, we extend the method to more challenging imaging situations (pairs of flies moving on a mesh-like background) and movies from freely moving mice (Mus musculus) where we track the full conformation of the head, body, and limbs.
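As an illustrative sketch of the readout step such a network performs (function names and array layout are hypothetical, not LEAP's actual API): per-part confidence maps are reduced to peak coordinates, and tracking accuracy can then be scored as a percentage of body length, as in the <3% figure above.

```python
import numpy as np

def maps_to_keypoints(conf_maps):
    """Convert per-part confidence maps (n_parts, H, W) to (row, col) peaks."""
    n_parts, h, w = conf_maps.shape
    flat_idx = conf_maps.reshape(n_parts, -1).argmax(axis=1)
    # unravel_index maps flat peak indices back to 2-D image coordinates.
    return np.stack(np.unravel_index(flat_idx, (h, w)), axis=1)

def error_pct_body_length(pred, true, body_length_px):
    """Mean Euclidean keypoint error as a percentage of body length."""
    dists = np.linalg.norm(pred - true, axis=1)
    return 100.0 * dists.mean() / body_length_px
```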
Monitoring the attention of others is fundamental to social cognition. Most of the literature on the topic assumes that our social cognitive machinery is tuned specifically to the gaze direction of others as a proxy for attention. This standard assumption reduces attention to an externally visible parameter. Here we show that this assumption is wrong and that a deeper, more meaningful representation is involved. We presented subjects with two cues about the attentional state of a face: direction of gaze and emotional expression. We tested whether people relied predominantly on one cue, the other, or both. If the traditional view is correct, then the gaze cue should dominate. Instead, people employed a variety of strategies, some relying on gaze, some on expression, and some on an integration of cues. We also assessed people’s social cognitive ability using two independent, standard tests. If the traditional view is correct, then social cognitive ability, as assessed by the independent tests, should correlate with the degree to which people successfully use the gaze cue to judge the attention state of the face. Instead, social cognitive ability correlated best with the degree to which people successfully integrated the cues, rather than with the use of any one specific cue. The results suggest a rethinking of a fundamental component of social cognition: monitoring the attention of others involves constructing a deep model that is informed by a combination of cues. Attention is a rich process, and monitoring the attention of others involves a similarly rich representation.
Surprise signals a discrepancy between past and current beliefs. It is theorized to be linked to affective experiences, the creation of particularly resilient memories, and segmentation of the flow of experience into discrete perceived events. However, the ability to precisely measure naturalistic surprise has remained elusive. We used advanced basketball analytics to derive a quantitative measure of surprise and characterized its behavioral, physiological, and neural correlates in human subjects observing basketball games. We found that surprise was associated with segmentation of ongoing experiences, as reflected by subjectively perceived event boundaries and shifts in neocortical patterns underlying belief states. Interestingly, these effects differed by whether surprising moments contradicted or bolstered current predominant beliefs. Surprise also positively correlated with pupil dilation, activation in subcortical regions associated with dopamine, game enjoyment, and long-term memory. These investigations support key predictions from event segmentation theory and extend theoretical conceptualizations of surprise to real-world contexts.
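One simple way such an analytics-derived measure could be operationalized (an illustration under an assumed running win-probability model, not necessarily the authors' exact formula) is the absolute moment-to-moment change in a team's estimated win probability:

```python
def surprise(win_prob):
    """Surprise at each step as the absolute change in win probability.

    win_prob: sequence of win-probability estimates over game moments
    (values in [0, 1]); returns one surprise value per transition.
    """
    return [abs(b - a) for a, b in zip(win_prob, win_prob[1:])]
```

Under this sketch, a shot that flips a likely loss into a likely win produces a large surprise value, which could then be aligned with pupil, neural, and memory measures.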
What mechanisms support our ability to estimate durations on the order of minutes? Behavioral studies in humans have shown that changes in contextual features lead to overestimation of past durations. Based on evidence that the medial temporal lobes and prefrontal cortex represent contextual features, we related the degree of fMRI pattern change in these regions with people's subsequent duration estimates. After listening to a radio story in the scanner, participants were asked how much time had elapsed between pairs of clips from the story. Our ROI analysis found that the neural pattern distance between two clips at encoding was correlated with duration estimates in the right entorhinal cortex and right pars orbitalis. Moreover, a whole-brain searchlight analysis revealed a cluster spanning the right anterior temporal lobe. Our findings provide convergent support for the hypothesis that retrospective time judgments are driven by 'drift' in contextual representations supported by these regions.
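A minimal sketch of the core quantity (one common choice of pattern distance, not necessarily the authors' exact metric), assuming each clip's neural pattern is a 1-D voxel vector: contextual "drift" as one minus the Pearson correlation between the two patterns, which can then be related to behavioral duration estimates across clip pairs.

```python
import numpy as np

def pattern_distance(a, b):
    """Contextual 'drift' between two multivoxel patterns: 1 - Pearson r."""
    return 1.0 - np.corrcoef(a, b)[0, 1]

def drift_behavior_correlation(distances, duration_estimates):
    """Across clip pairs, relate neural drift to duration estimates."""
    return np.corrcoef(distances, duration_estimates)[0, 1]
```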
Bejjanki, Vikranth R.; da Silveira, Rava Azeredo; Cohen, Jonathan D.; Turk-Browne, Nicholas B.
Multivariate decoding methods, such as multivoxel pattern analysis (MVPA), are highly effective at extracting information from brain imaging data. Yet, the precise nature of the information that MVPA draws upon remains controversial. Most current theories emphasize the enhanced sensitivity imparted by aggregating across voxels that have mixed and weak selectivity. However, beyond the selectivity of individual voxels, neural variability is correlated across voxels, and such noise correlations may contribute importantly to accurate decoding. Indeed, a recent computational theory proposed that noise correlations enhance multivariate decoding from heterogeneous neural populations. Here we extend this theory from the scale of neurons to functional magnetic resonance imaging (fMRI) and show that noise correlations between heterogeneous populations of voxels (i.e., voxels selective for different stimulus variables) contribute to the success of MVPA. Specifically, decoding performance is enhanced when voxels with high vs. low noise correlations (measured during rest or in the background of the task) are selected during classifier training. Conversely, voxels that are strongly selective for one class in a GLM, or that receive high classification weights in MVPA, tend to exhibit high noise correlations with voxels selective for the other class in the discrimination. Furthermore, we use simulations to show that this is a general property of fMRI data and that selectivity and noise correlations can have distinguishable influences on decoding. Taken together, our findings demonstrate that, when there is signal in the data, the resulting above-chance classification accuracy is modulated by the magnitude of noise correlations.
Rafidi, Nicole S; Hulbert, Justin C; Brooks, Paula P; Norman, Kenneth A
Repeated testing (as opposed to repeated study) leads to improved long-term memory retention, but the mechanism underlying this improvement remains controversial. In this work, we test the hypothesis that retrieval practice benefits subsequent recall by reducing competition from related memories. This hypothesis implies that the degree of reduction in competition between retrieval practice attempts should predict subsequent memory for the practiced items. To test this prediction, we collected electroencephalography (EEG) data across two sessions. In the first session, participants practiced selectively retrieving exemplars from superordinate semantic categories (high competition), as well as retrieving the names of the superordinate categories from exemplars (low competition). In the second session, participants repeatedly studied and were then tested on Swahili-English vocabulary. One week after session two, participants were again tested on the vocabulary. We trained a within-subject classifier on the data from session one to distinguish high and low competition states. We then used this classifier to measure competition across multiple retrieval practice attempts in the second session. The degree to which competition decreased for a given vocabulary word predicted whether that item was subsequently remembered in the third session. These results are consistent with the hypothesis that repeated testing improves retention by reducing competition.
Our daily lives revolve around sharing experiences and memories with others. When different people recount the same events, how similar are their underlying neural representations? In this study, participants viewed a fifty-minute audio-visual movie, then verbally described the events while undergoing functional MRI. These descriptions were completely unguided and highly detailed, lasting for up to forty minutes. As each person spoke, event-specific spatial patterns were reinstated (movie-vs.-recall correlation) in default network, medial temporal, and high-level visual areas; moreover, individual event patterns were highly discriminable and similar between people during recollection (recall-vs.-recall similarity), suggesting the existence of spatially organized memory representations. In posterior medial cortex, medial prefrontal cortex, and angular gyrus, activity patterns during recall were more similar between people than to patterns elicited by the movie, indicating systematic reshaping of percept into memory across individuals. These results reveal striking similarity in how neural activity underlying real-life memories is organized and transformed in the brains of different people as they speak spontaneously about past events.
Antony, James W.; Piloto, Luis; Wang, Margaret; Brooks, Paula P.; Norman, Kenneth A.; Paller, Ken A.
The stability of long-term memories is enhanced by reactivation during sleep. Correlative evidence has linked memory reactivation with thalamocortical sleep spindles, although their functional role is not fully understood. Our initial study replicated this correlation and also demonstrated a novel rhythmicity to spindles, such that a spindle is more likely to occur approximately 3–6 s following a prior spindle. We leveraged this rhythmicity to test the role of spindles in memory by using real-time spindle tracking to present cues within versus just after the presumptive refractory period; as predicted, cues presented just after the refractory period led to better memory. Our findings demonstrate a precise temporal link between sleep spindles and memory reactivation. Moreover, they reveal a previously undescribed neural mechanism whereby spindles may segment sleep into two distinct substates: prime opportunities for reactivation and gaps that segregate reactivation events.
This archive contains spike trains recorded simultaneously with a multi-electrode array from ganglion cells in the tiger salamander retina during presentation of a repeated natural movie clip. These data have been analyzed in previous papers, notably Puchalla et al., Neuron, 2005 and Schneidman et al., Nature, 2006.
Chang, Claire H. C.; Lazaridi, Christina; Yeshurun, Yaara; Norman, Kenneth A.; Hasson, Uri
This study examined how the brain dynamically updates event representations by integrating new information over multiple minutes while segregating irrelevant input. A professional writer custom-designed a narrative with two independent storylines, interleaved across minute-long segments (ABAB). In the last (C) part, characters from the two storylines meet and their shared history is revealed. Part C is designed to induce the spontaneous recall of past events upon the recurrence of narrative motifs from A/B, and to shed new light on them. Our fMRI results showed storyline-specific neural patterns, which were reinstated (i.e., became active again) during storyline transitions. This effect increased along the processing timescale hierarchy, peaking in the default mode network. Similarly, neural reinstatement of motifs was found during part C. Furthermore, participants showing stronger motif reinstatement performed better at integrating A/B and C events, demonstrating the role of memory reactivation in information integration across intervening irrelevant events.
In the attention schema theory (AST), the brain constructs a model of attention, the attention schema, to aid in the endogenous control of attention. Growing behavioral evidence appears to support this proposal. However, a central question remains: does a controller of attention actually benefit by having access to an attention schema? We constructed an artificial, deep Q-learning, neural network agent that was trained to control a simple form of visuospatial attention, tracking a stimulus with its attention spotlight in order to solve a catch task. The agent was tested with and without access to an attention schema. In both conditions, the agent received sufficient information such that it should, theoretically, be able to learn the task. We found that with an attention schema present, the agent learned to control its attention spotlight and learned the catch task to a high degree of performance. Once the agent learned, if the attention schema was disabled, the agent could no longer perform effectively. If the attention schema was removed before learning began, the agent was drastically impaired at learning. The results show how the presence of even a simple attention schema provides a profound benefit to a controller of attention. We interpret these results as supporting the central argument of AST: the brain evolved an attention schema because of its practical benefit in the endogenous control of attention.
Recent advances in experimental techniques have allowed the simultaneous recording of populations of hundreds of neurons, fostering a debate about the nature of the collective structure of population neural activity. Much of this debate has focused on the empirical findings of a phase transition in the parameter space of maximum entropy models describing the measured neural probability distributions, interpreting this phase transition to indicate a critical tuning of the neural code. Here, we instead focus on the possibility that this is a first-order phase transition which provides evidence that the real neural population is in a 'structured', collective state. We show that this collective state is robust to changes in stimulus ensemble and adaptive state. We find that the pattern of pairwise correlations between neurons has a strength that is well within the strongly correlated regime and does not require fine tuning, suggesting that this state is generic for populations of 100+ neurons. We find a clear correspondence between the emergence of a phase transition and the emergence of attractor-like structure in the inferred energy landscape. A collective state in the neural population, in which neural activity patterns naturally form clusters, provides a consistent interpretation for our results.
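For concreteness, the pairwise maximum entropy model at issue assigns each binary activity pattern a Boltzmann probability. The sketch below evaluates it exactly for a small population (the fields h and couplings J are arbitrary illustrations; J is assumed symmetric with zero diagonal):

```python
import itertools
import numpy as np

def maxent_prob(h, J):
    """Exact pairwise maximum entropy (Ising-like) distribution for small N.

    P(sigma) is proportional to exp(sum_i h_i sigma_i + sum_{i<j} J_ij sigma_i sigma_j),
    with sigma_i in {0, 1} (silent/spiking). J must be symmetric with zero
    diagonal; the 0.5 factor below converts the full double sum to i<j.
    """
    n = len(h)
    patterns = np.array(list(itertools.product([0, 1], repeat=n)))
    energies = patterns @ h + 0.5 * np.einsum('ki,ij,kj->k', patterns, J, patterns)
    weights = np.exp(energies)
    return patterns, weights / weights.sum()
```

Enumerating all 2^N patterns is only feasible for small N; for the 100+ neuron populations discussed above, the partition function must be approximated, which is where the inferred energy landscape and its attractor-like structure come in.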
Extrapolation -- the ability to make inferences that go beyond the scope of one's experiences -- is a hallmark of human intelligence. By contrast, the generalization exhibited by contemporary neural network algorithms is largely limited to interpolation between data points in their training corpora. In this paper, we consider the challenge of learning representations that support extrapolation. We introduce a novel visual analogy benchmark that allows the graded evaluation of extrapolation as a function of distance from the convex domain defined by the training data. We also introduce a simple technique, context normalization, that encourages representations that emphasize the relations between objects. We find that this technique enables a significant improvement in the ability to extrapolate, considerably outperforming a number of competitive techniques.
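Context normalization, as described, can be sketched as z-scoring each embedding dimension across the objects within a single analogy problem, discarding absolute feature values while preserving the relations between objects (the tensor layout here is an assumption, not the paper's exact implementation):

```python
import numpy as np

def context_normalize(z, eps=1e-8):
    """Normalize each feature dimension across the objects in one problem.

    z: (n_objects, n_features) embeddings for one analogy problem.
    Returns embeddings with zero mean and (approximately) unit variance
    per feature, so only relations between objects are preserved.
    """
    mu = z.mean(axis=0, keepdims=True)
    sigma = z.std(axis=0, keepdims=True)
    return (z - mu) / (sigma + eps)
```

Because the statistics are computed per problem, test items far outside the training domain are mapped back into a comparable range, which is the intuition behind the improved extrapolation reported above.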