It is well known that the formation of new episodic memories depends on the hippocampus, but in real-life settings (e.g., conversation), hippocampal amnesics can use information from several minutes earlier. What neural systems outside the hippocampus might support this minutes-long retention? In this study, one group of subjects viewed an audiovisual movie continuously for 25 min; another group viewed the movie in 2 parts separated by a 1-day delay. Understanding Part 2 depended on retrieving information from Part 1, and thus the hippocampus was required in the day-delay condition. But is the hippocampus equally recruited to access the same information from minutes earlier? We show that accessing memories from a few minutes prior elicited less interaction between the hippocampus and default mode network (DMN) cortical regions than accessing day-old memories of identical events, suggesting that recent information was available with less reliance on hippocampal retrieval. Moreover, the 2 groups evinced reliable but distinct DMN activity timecourses, reflecting differences in the information carried by these regions when Part 1 was recent versus distant. The timecourses converged after 4 min, suggesting a time frame over which the continuous-viewing group may have relied less on hippocampal retrieval. We propose that cortical default mode regions can intrinsically retain real-life episodic information for several minutes.
Small changes in word choice can lead to dramatically different interpretations of narratives. How does the brain accumulate and integrate such local changes to construct unique neural representations for different stories? In this study, we created two distinct narratives by changing only a few words in each sentence (e.g., “he” to “she” or “sobbing” to “laughing”) while preserving the grammatical structure across stories. We then measured changes in neural responses between the two stories. We found that the differences in neural responses between the two stories gradually increased along the hierarchy of processing timescales. For areas with short integration windows, such as early auditory cortex, the differences in neural responses between the two stories were relatively small. In contrast, in areas with the longest integration windows at the top of the hierarchy, such as the precuneus, temporoparietal junction, and medial frontal cortices, there were large differences in neural responses between stories. Furthermore, this gradual increase in neural difference between the stories was highly correlated with an area’s ability to integrate information over time. Amplification of neural differences did not occur when changes in words did not alter the interpretation of the story (e.g., “sobbing” to “crying”). Our results demonstrate how subtle differences in words are gradually accumulated and amplified along the cortical hierarchy as the brain constructs a narrative over time.
Wilterson, Andrew; Nastase, Samuel; Bio, Branden; Guterstam, Arvid; Graziano, Michael
The attention schema theory (AST) posits a specific relationship between subjective awareness and attention, in which awareness is the control model that the brain uses to aid in the endogenous control of attention. We proposed that the right temporoparietal junction (TPJ) is involved in that interaction between awareness and attention. In previous experiments, we developed a behavioral paradigm in human subjects to manipulate awareness and attention. The paradigm involved a visual cue that could be used to guide a shift of attention to a target stimulus. In task 1, subjects were aware of the visual cue, and their endogenous control mechanism was able to use the cue to help control attention. In task 2, subjects were unaware of the visual cue, and their endogenous control mechanism was no longer able to use it to control attention, even though the cue still had a measurable effect on other aspects of behavior. Here we tested the two tasks while scanning brain activity in human volunteers. We predicted that the right TPJ would be active in relation to the cue in task 1, but not in task 2. This prediction was confirmed. The right TPJ was active in relation to the cue in task 1; it was not measurably active in task 2; the difference was significant. In our interpretation, the right TPJ is involved in a complex interaction in which awareness aids in the control of attention.
Cara L. Buck; Jonathan D. Cohen; Brent Field; Daniel Kahneman; Samuel M. McClure; Leigh E. Nystrom
Studies of subjective well-being have conventionally relied upon self-report, which directs subjects’ attention to their emotional experiences. This method presumes that attention itself does not influence emotional processes, which could bias sampling. We tested whether attention influences experienced utility (the moment-by-moment experience of pleasure) by using functional magnetic resonance imaging (fMRI) to measure the activity of brain systems thought to represent hedonic value while manipulating attentional load. Subjects received appetitive or aversive solutions orally while alternately executing a low or high attentional load task. Brain regions associated with hedonic processing, including the ventral striatum, showed a response to both juice and quinine. This response decreased during the high-load task relative to the low-load task. Thus, attentional allocation may influence experienced utility by modulating (either directly or indirectly) the activity of brain mechanisms thought to represent hedonic value.
Antony, James W.; Cheng, Larry Y.; Brooks, Paula P.; Paller, Ken A.; Norman, Kenneth A.
Competition between memories can cause weakening of those memories. Here we investigated memory competition during sleep in human participants by presenting auditory cues that had been linked to two distinct picture-location pairs during wake. We manipulated competition during learning by requiring participants to rehearse picture-location pairs associated with the same sound either competitively (choosing to rehearse one over the other, leading to greater competition) or separately; we hypothesized that greater competition during learning would lead to greater competition when memories were cued during sleep. With separate-pair learning, we found that cueing benefited spatial retention. With competitive-pair learning, no benefit of cueing was observed on retention, but cueing impaired retention of well-learned pairs (where we expected strong competition). During sleep, post-cue beta power (16–30 Hz) indexed competition and predicted forgetting, whereas sigma power (11–16 Hz) predicted subsequent retention. Taken together, these findings show that competition between memories during learning can modulate how they are consolidated during sleep.
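The two spectral measures above (sigma, 11–16 Hz; beta, 16–30 Hz) are standard band-power estimates. As a concrete illustration, here is a minimal numpy sketch (the function name, sampling rate, and synthetic segment are my own, purely illustrative) of estimating mean band power from a post-cue EEG segment via the periodogram:

```python
import numpy as np

def band_power(seg, fs, lo, hi):
    """Mean periodogram power of a 1-D segment in the band [lo, hi] Hz."""
    freqs = np.fft.rfftfreq(len(seg), 1.0 / fs)
    psd = np.abs(np.fft.rfft(seg)) ** 2 / len(seg)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# synthetic post-cue segment: a 20 Hz (beta-band) oscillation plus noise
fs = 256
t = np.arange(0, 2, 1.0 / fs)
seg = np.sin(2 * np.pi * 20 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)

beta = band_power(seg, fs, 16, 30)   # beta band, as in the study
sigma = band_power(seg, fs, 11, 16)  # sigma (spindle) band
```

In a real analysis one would typically use a tapered estimator (e.g., Welch's method) on each post-cue window, but the band-restricted averaging is the same idea.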
Pacheco, Diego A.; Thiberge, Stephan; Pnevmatikakis, Eftychios; Murthy, Mala
Sensory pathways are typically studied starting at receptor neurons and following postsynaptic neurons into the brain. However, this leads to a bias in analysis of activity towards the earliest layers of processing. Here, we present new methods for volumetric neural imaging with precise across-brain registration, to characterize auditory activity throughout the entire central brain of Drosophila and make comparisons across trials, individuals, and sexes. We discover that auditory activity is present in most central brain regions and in neurons responsive to other modalities. Auditory responses are temporally diverse, but the majority of activity is tuned to courtship song features. Auditory responses are stereotyped across trials and animals in early mechanosensory regions, becoming more variable at higher layers of the putative pathway, and this variability is largely independent of spontaneous movements. This study highlights the power of using an unbiased, brain-wide approach for mapping the functional organization of sensory activity.
Does the default mode network (DMN) reconfigure to encode information about the changing environment? This question has proven difficult, because patterns of functional connectivity reflect a mixture of stimulus-induced neural processes, intrinsic neural processes and non-neuronal noise. Here we introduce inter-subject functional correlation (ISFC), which isolates stimulus-dependent inter-regional correlations between brains exposed to the same stimulus. During fMRI, we had subjects listen to a real-life auditory narrative and to temporally scrambled versions of the narrative. We used ISFC to isolate correlation patterns within the DMN that were locked to the processing of each narrative segment and specific to its meaning within the narrative context. The momentary configurations of DMN ISFC were highly replicable across groups. Moreover, DMN coupling strength predicted memory of narrative segments. Thus, ISFC opens new avenues for linking brain network dynamics to stimulus features and behaviour.
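The core ISFC computation can be sketched in a few lines of numpy (a simplified leave-one-subject-out version; the array shapes and names are assumptions for illustration, not the authors' code):

```python
import numpy as np

def isfc(data):
    """Leave-one-subject-out inter-subject functional correlation.

    data: (n_subjects, n_regions, n_timepoints) array of timecourses.
    Each fold correlates every region in the held-out subject with every
    region in the average of the remaining subjects, so only stimulus-locked
    (shared) signal survives; intrinsic fluctuations and noise, which are
    uncorrelated across brains, average out.
    """
    n_subj, n_reg, _ = data.shape
    out = np.zeros((n_reg, n_reg))
    for s in range(n_subj):
        held_out = data[s]
        others = data[np.arange(n_subj) != s].mean(axis=0)
        # row-wise Pearson correlation via demeaning + unit-norm rows
        a = held_out - held_out.mean(axis=1, keepdims=True)
        b = others - others.mean(axis=1, keepdims=True)
        a /= np.linalg.norm(a, axis=1, keepdims=True)
        b /= np.linalg.norm(b, axis=1, keepdims=True)
        out += a @ b.T
    return out / n_subj
```

With a strong shared stimulus drive, the diagonal of the resulting matrix approaches each region's inter-subject correlation, while off-diagonal entries capture stimulus-locked coupling between regions.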
Pereira, Talmo D.; Aldarondo, Diego E.; Willmore, Lindsay; Kislin, Mikhail; Wang, Samuel S.-H.; Murthy, Mala; Shaevitz, Joshua W.
Recent work quantifying postural dynamics has attempted to define the repertoire of behaviors performed by an animal. However, a major drawback of these techniques has been their reliance on dimensionality reduction of images, which destroys information about which parts of the body are used in each behavior. To address this issue, we introduce a deep learning-based method for pose estimation, LEAP (LEAP Estimates Animal Pose). LEAP automatically predicts the positions of animal body parts using a deep convolutional neural network with as few as 10 frames of labeled data for training. This framework consists of a graphical interface for interactive labeling of body parts and software for training the network and fast prediction on new data (1 hr to train, 185 Hz predictions). We validate LEAP using videos of freely behaving fruit flies (Drosophila melanogaster) and track 32 distinct points on the body to fully describe the pose of the head, body, wings, and legs with an error rate of <3% of the animal's body length. We recapitulate a number of reported findings on insect gait dynamics and show LEAP's applicability as the first step in unsupervised behavioral classification. Finally, we extend the method to more challenging imaging situations (pairs of flies moving on a mesh-like background) and movies from freely moving mice (Mus musculus) where we track the full conformation of the head, body, and limbs.
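The reported accuracy metric (landmark error as a percentage of body length) is straightforward to compute. A sketch, with the function name and array shapes assumed for illustration:

```python
import numpy as np

def pose_error_pct(pred, true, body_length):
    """Mean Euclidean landmark error as a percentage of body length.

    pred, true: (n_frames, n_parts, 2) arrays of (x, y) pixel coordinates.
    body_length: animal body length in the same pixel units.
    """
    err = np.linalg.norm(pred - true, axis=-1)  # per-landmark pixel error
    return 100.0 * err.mean() / body_length
```

Normalizing by body length makes the metric comparable across animals, species, and camera resolutions, which is why it is preferable to raw pixel error here.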
Monitoring the attention of others is fundamental to social cognition. Most of the literature on the topic assumes that our social cognitive machinery is tuned specifically to the gaze direction of others as a proxy for attention. This standard assumption reduces attention to an externally visible parameter. Here we show that this assumption is wrong and that a deeper, more meaningful representation is involved. We presented subjects with two cues about the attentional state of a face: direction of gaze and emotional expression. We tested whether people relied predominantly on one cue, the other, or both. If the traditional view is correct, then the gaze cue should dominate. Instead, people employed a variety of strategies, some relying on gaze, some on expression, and some on an integration of cues. We also assessed people’s social cognitive ability using two independent, standard tests. If the traditional view is correct, then social cognitive ability, as assessed by the independent tests, should correlate with the degree to which people successfully use the gaze cue to judge the attention state of the face. Instead, social cognitive ability correlated best with the degree to which people successfully integrated the cues, rather than with the use of any one specific cue. The results suggest a rethinking of a fundamental component of social cognition: monitoring the attention of others involves constructing a deep model that is informed by a combination of cues. Attention is a rich process, and monitoring the attention of others involves a similarly rich representation.
Surprise signals a discrepancy between past and current beliefs. It is theorized to be linked to affective experiences, the creation of particularly resilient memories, and segmentation of the flow of experience into discrete perceived events. However, the ability to precisely measure naturalistic surprise has remained elusive. We used advanced basketball analytics to derive a quantitative measure of surprise and characterized its behavioral, physiological, and neural correlates in human subjects observing basketball games. We found that surprise was associated with segmentation of ongoing experiences, as reflected by subjectively perceived event boundaries and shifts in neocortical patterns underlying belief states. Interestingly, these effects differed by whether surprising moments contradicted or bolstered current predominant beliefs. Surprise also positively correlated with pupil dilation, activation in subcortical regions associated with dopamine, game enjoyment, and long-term memory. These investigations support key predictions from event segmentation theory and extend theoretical conceptualizations of surprise to real-world contexts.
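If surprise is operationalized as the unsigned change in a win-probability model's output from moment to moment (one plausible reading of "advanced basketball analytics"; the paper's exact metric is not reproduced here), the computation itself is simple:

```python
# hypothetical win-probability trace across successive game moments
# (outputs of some predictive model; values are made up for illustration)
win_prob = [0.50, 0.55, 0.52, 0.80, 0.78, 0.30]

# surprise at each transition: unsigned belief update about the outcome
surprise = [abs(b - a) for a, b in zip(win_prob, win_prob[1:])]
```

Large values (here, the jump to 0.80 or the collapse to 0.30) would mark candidate event boundaries, whereas small updates leave the current belief state intact.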
What mechanisms support our ability to estimate durations on the order of minutes? Behavioral studies in humans have shown that changes in contextual features lead to overestimation of past durations. Based on evidence that the medial temporal lobes and prefrontal cortex represent contextual features, we related the degree of fMRI pattern change in these regions with people's subsequent duration estimates. After listening to a radio story in the scanner, participants were asked how much time had elapsed between pairs of clips from the story. Our ROI analysis found that the neural pattern distance between two clips at encoding was correlated with duration estimates in the right entorhinal cortex and right pars orbitalis. Moreover, a whole-brain searchlight analysis revealed a cluster spanning the right anterior temporal lobe. Our findings provide convergent support for the hypothesis that retrospective time judgments are driven by 'drift' in contextual representations supported by these regions.
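The core measure here, neural pattern distance between two encoding snapshots, is correlation distance; a minimal sketch with synthetic data (all names and values illustrative, not the study's data):

```python
import numpy as np

def pattern_distance(p1, p2):
    """1 - Pearson r between two voxel activity patterns."""
    return 1.0 - np.corrcoef(p1, p2)[0, 1]

# hypothetical encoding patterns for pairs of story clips, and the
# participants' retrospective duration estimates (minutes) for those pairs
rng = np.random.default_rng(0)
n_pairs, n_voxels = 30, 200
clip_a = rng.standard_normal((n_pairs, n_voxels))
clip_b = rng.standard_normal((n_pairs, n_voxels))
estimates = rng.uniform(2.0, 9.0, n_pairs)

dists = np.array([pattern_distance(a, b) for a, b in zip(clip_a, clip_b)])
# brain-behavior relationship: correlate pattern drift with judged duration
r = np.corrcoef(dists, estimates)[0, 1]
```

Under the contextual-drift hypothesis, `r` would be reliably positive in regions like entorhinal cortex: the more the pattern has drifted between two clips, the longer the elapsed time feels in retrospect.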
Bejjanki, Vikranth R.; da Silveira, Rava Azeredo; Cohen, Jonathan D.; Turk-Browne, Nicholas B.
Multivariate decoding methods, such as multivoxel pattern analysis (MVPA), are highly effective at extracting information from brain imaging data. Yet, the precise nature of the information that MVPA draws upon remains controversial. Most current theories emphasize the enhanced sensitivity imparted by aggregating across voxels that have mixed and weak selectivity. However, beyond the selectivity of individual voxels, neural variability is correlated across voxels, and such noise correlations may contribute importantly to accurate decoding. Indeed, a recent computational theory proposed that noise correlations enhance multivariate decoding from heterogeneous neural populations. Here we extend this theory from the scale of neurons to functional magnetic resonance imaging (fMRI) and show that noise correlations between heterogeneous populations of voxels (i.e., voxels selective for different stimulus variables) contribute to the success of MVPA. Specifically, decoding performance is enhanced when voxels with high vs. low noise correlations (measured during rest or in the background of the task) are selected during classifier training. Conversely, voxels that are strongly selective for one class in a GLM or that receive high classification weights in MVPA tend to exhibit high noise correlations with voxels selective for the other class in the discrimination. Furthermore, we use simulations to show that this is a general property of fMRI data and that selectivity and noise correlations can have distinguishable influences on decoding. Taken together, our findings demonstrate that if there is signal in the data, the resulting above-chance classification accuracy is modulated by the magnitude of noise correlations.
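To make the intuition concrete, here is a toy numpy simulation (all data synthetic; this is not the authors' analysis) in which an opposite-tuned pair of "voxels" sharing a common noise source supports better linear decoding than either voxel alone, because a difference readout cancels the shared fluctuation while preserving the signal:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_vox = 400, 20

# two populations: voxels 0-9 prefer class 0, voxels 10-19 prefer class 1;
# a single shared noise source couples all voxels on every trial
labels = rng.integers(0, 2, n_trials)
signal = np.where(labels == 0, 1.0, -1.0)
tuning = np.concatenate([np.full(10, 0.4), np.full(10, -0.4)])
shared = rng.standard_normal(n_trials)
data = (signal[:, None] * tuning
        + shared[:, None]
        + rng.standard_normal((n_trials, n_vox)))

# noise correlations estimated from "rest" data with the same shared source
rest = rng.standard_normal((300, 1)) + rng.standard_normal((300, n_vox))
noise_corr = np.corrcoef(rest.T)

# decode from an opposite-tuned, noise-coupled pair: subtracting the two
# voxels removes the shared noise but sums their signal contributions
diff = data[:, 0] - data[:, 10]
acc_pair = (np.where(diff > 0, 0, 1) == labels).mean()

# decode from one voxel alone: the shared noise remains
acc_single = (np.where(data[:, 0] > 0, 0, 1) == labels).mean()
```

In this toy setup the pair accuracy should typically exceed the single-voxel accuracy, mirroring the claim that high noise correlations between oppositely selective voxels, measurable at rest, aid multivariate decoding.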