Monitoring the attention of others is fundamental to social cognition. Most of the literature on the topic assumes that our social cognitive machinery is tuned specifically to the gaze direction of others as a proxy for attention. This standard assumption reduces attention to an externally visible parameter. Here we show that this assumption is wrong and that a deeper, more meaningful representation is involved. We presented subjects with two cues about the attentional state of a face: direction of gaze and emotional expression. We tested whether people relied predominantly on one cue, the other, or both. If the traditional view is correct, then the gaze cue should dominate. Instead, people employed a variety of strategies, some relying on gaze, some on expression, and some on an integration of cues. We also assessed people's social cognitive ability using two independent, standard tests. If the traditional view is correct, then social cognitive ability, as assessed by the independent tests, should correlate with the degree to which people successfully use the gaze cue to judge the attentional state of the face. Instead, social cognitive ability correlated best with the degree to which people successfully integrated the cues, rather than with the use of any one specific cue. The results suggest a rethinking of a fundamental component of social cognition: monitoring the attention of others involves constructing a deep model that is informed by a combination of cues. Attention is a rich process, and monitoring the attention of others involves a similarly rich representation.
Chang, Claire H. C.; Lazaridi, Christina; Yeshurun, Yaara; Norman, Kenneth A.; Hasson, Uri
Abstract:
This study examined how the brain dynamically updates event representations by integrating new information over multiple minutes while segregating irrelevant input. A professional writer custom-designed a narrative with two independent storylines, interleaved across minute-long segments (ABAB). In the final part (C), characters from the two storylines meet and their shared history is revealed. Part C was designed to induce spontaneous recall of past events upon the recurrence of narrative motifs from A/B, and to shed new light on them. Our fMRI results showed storyline-specific neural patterns, which were reinstated (i.e., became more active) during storyline transitions. This effect increased along the processing-timescale hierarchy, peaking in the default mode network. Similarly, neural reinstatement of the motifs was observed during part C. Furthermore, participants showing stronger motif reinstatement were better at integrating A/B and C events, demonstrating the role of memory reactivation in integrating information over intervening irrelevant events.