These files contain code used to segment D. virilis acoustic duets, quantify courtship behaviors during the duets, and measure duet song features.
Battaglia, D. J.; Boyer, M. D.; Gerhardt, S.; Mueller, D.; Myers, C. E.; Guttenfelder, W.; Menard, J. E.; Sabbagh, S. A.; Scotti, F.; Bedoya, F.; Bell, R. E.; Berkery, J. W.; Diallo, A.; Ferraro, N.; Jaworski, M. A.; Kaye, S. M.; LeBlanc, B. P.; Ono, M.; Park, J. -K.; Podesta, M.; Raman, R.; Soukhanovskii, V.
This archive contains spike trains recorded simultaneously with a multi-electrode array from ganglion cells in the tiger salamander retina while the retina viewed a repeated natural movie clip. These data have been analyzed in previous papers, notably Puchalla et al. Neuron 2005 and Schneidman et al. Nature 2006.
Berryman, Eleanor J.; Winey, J. M.; Gupta, Yogendra M.; Duffy, Thomas S.
Abstract:
Stishovite (rutile-type SiO2) is the archetype of dense silicates and may occur in post-garnet eclogitic rocks at lower-mantle conditions. Sound velocities in stishovite are fundamental to understanding its mechanical and thermodynamic behavior at high pressure and temperature. Here, we use plate-impact experiments combined with velocity interferometry to determine the stress, density, and longitudinal sound speed in stishovite formed during shock compression of fused silica at 44 GPa and above. The measured sound speeds range from 12.3(8) km/s at 43.8(8) GPa to 9.8(4) km/s at 72.7(11) GPa. The decrease observed at 64 GPa reflects a decrease in the shear modulus of stishovite, likely due to the onset of melting. By 72 GPa, the measured sound speed agrees with the theoretical bulk sound speed, indicating loss of all shear stiffness due to complete melting. Our sound velocity results provide direct evidence for shock-induced melting, in agreement with previous pyrometry data.
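For reference, the standard elastic relations behind this inference (not part of the original abstract) are, in LaTeX,

\[
c_L = \sqrt{\frac{K_S + \tfrac{4}{3}G}{\rho}}, \qquad c_B = \sqrt{\frac{K_S}{\rho}},
\]

where \(K_S\) is the adiabatic bulk modulus, \(G\) the shear modulus, and \(\rho\) the density. As \(G \to 0\) upon complete melting, the longitudinal speed \(c_L\) falls to the bulk speed \(c_B\), which is why the agreement at 72 GPa indicates total loss of shear stiffness.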
Monitoring the attention of others is fundamental to social cognition. Most of the literature on the topic assumes that our social cognitive machinery is tuned specifically to the gaze direction of others as a proxy for attention. This standard assumption reduces attention to an externally visible parameter. Here we show that this assumption is wrong and that a deeper, more meaningful representation is involved. We presented subjects with two cues about the attentional state of a face: direction of gaze and emotional expression. We tested whether people relied predominantly on one cue, the other, or both. If the traditional view is correct, then the gaze cue should dominate. Instead, people employed a variety of strategies, some relying on gaze, some on expression, and some on an integration of the cues. We also assessed people’s social cognitive ability using two independent, standard tests. If the traditional view is correct, then social cognitive ability, as assessed by the independent tests, should correlate with the degree to which people successfully use the gaze cue to judge the attentional state of the face. Instead, social cognitive ability correlated best with the degree to which people successfully integrated the cues, rather than with the use of any one specific cue. The results call for rethinking a fundamental component of social cognition: monitoring the attention of others involves constructing a deep model that is informed by a combination of cues. Attention is a rich process, and monitoring the attention of others involves a similarly rich representation.
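As a minimal sketch (not the study's actual analysis), one way to quantify cue reliance is to regress each subject's judgments on the two cues; the variable names and the simulated data below are purely illustrative assumptions:

# Illustrative sketch: estimate how strongly judgments of another's
# attentional state rely on a gaze cue, an expression cue, or both.
# All names and data here are hypothetical, not from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials = 200
gaze = rng.integers(0, 2, n_trials)        # 1 = gaze signals attention
expression = rng.integers(0, 2, n_trials)  # 1 = expression signals attention
# Simulate a subject who integrates both cues, with noise.
p = 1 / (1 + np.exp(-(1.5 * (gaze - 0.5) + 1.5 * (expression - 0.5))))
judgment = rng.random(n_trials) < p

model = LogisticRegression().fit(np.column_stack([gaze, expression]), judgment)
gaze_w, expr_w = model.coef_[0]
print(f"gaze weight = {gaze_w:.2f}, expression weight = {expr_w:.2f}")

Comparable nonzero weights would indicate cue integration; a single dominant weight would indicate reliance on one cue.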
Bourrianne, Philippe; Chidzik, Stanley; Cohen, Daniel; Elmer, Peter; Hallowell, Thomas; Kilbaugh, Todd J.; Lange, David; Leifer, Andrew M.; Marlow, Daniel R.; Meyers, Peter D.; Normand, Edna; Nunes, Janine; Oh, Myungchul; Page, Lyman; Periera, Talmo; Pivarski, Jim; Schreiner, Henry; Stone, Howard A.; Tank, David W.; Thiberge, Stephan; Tully, Christopher
Abstract:
The detailed information on the design and construction of the Princeton Open Ventilation Monitor device and software is contained in this data repository. This information consists of the electrical design files, mechanical design files, bill of materials, human-subject recording and analysis code, and a copy of the code repository for operating the patient monitors and central station.
Canal, G. P.; Ferraro, N. M.; Evans, T. E.; Osborne, T. H.; Menard, J. E.; Ahn, J. -W.; Maingi, R.; Wingen, A.; Ciro, D.; Frerichs, H.; Schmitz, O.; Soukhanovskii, V.; Waters, I.; Sabbagh, S. A.
Cara L. Buck; Jonathan D. Cohen; Brent Field; Daniel Kahneman; Samuel M. McClure; Leigh E. Nystrom
Abstract:
Studies of subjective well-being have conventionally relied upon self-report, which directs subjects’ attention to their emotional experiences. This method presumes that attention itself does not influence emotional processes, which could bias sampling. We tested whether attention influences experienced utility (the moment-by-moment experience of pleasure) by using functional magnetic resonance imaging (fMRI) to measure the activity of brain systems thought to represent hedonic value while manipulating attentional load. Subjects received appetitive or aversive solutions orally while alternately executing a low- or high-attentional-load task. Brain regions associated with hedonic processing, including the ventral striatum, showed a response to both juice and quinine. This response decreased during the high-load task relative to the low-load task. Thus, attentional allocation may influence experienced utility by modulating (either directly or indirectly) the activity of brain mechanisms thought to represent hedonic value.
Chang, Claire H. C.; Lazaridi, Christina; Yeshurun, Yaara; Norman, Kenneth A.; Hasson, Uri
Abstract:
This study examined how the brain dynamically updates event representations by integrating new information over multiple minutes while segregating irrelevant input. A professional writer custom-designed a narrative with two independent storylines, interleaved across minute-long segments (ABAB). In the last (C) part, characters from the two storylines meet and their shared history is revealed. Part C is designed to induce spontaneous recall of past events upon the recurrence of narrative motifs from A/B, and to shed new light on them. Our fMRI results showed storyline-specific neural patterns, which were reinstated (i.e., became more active) during storyline transitions. This effect increased along the processing-timescale hierarchy, peaking in the default mode network. Similarly, neural reinstatement of motifs was found during part C. Furthermore, participants showing stronger motif reinstatement performed better at integrating A/B and C events, demonstrating the role of memory reactivation in information integration over intervening irrelevant events.
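A minimal sketch of the kind of storyline-reinstatement measure described above, assuming a voxels-by-timepoints data matrix and per-timepoint storyline labels (variable names are assumptions, not the study's code):

# Illustrative sketch: storyline-specific pattern reinstatement.
import numpy as np

def storyline_reinstatement(data, labels):
    """Correlate each timepoint's spatial pattern with the mean
    pattern of each storyline.

    data: (n_voxels, n_timepoints) fMRI time series for one region.
    labels: array of 'A'/'B' storyline labels, one per timepoint.
    """
    out = {}
    for s in ('A', 'B'):
        template = data[:, labels == s].mean(axis=1)
        out[s] = np.array([np.corrcoef(data[:, t], template)[0, 1]
                           for t in range(data.shape[1])])
    return out

Rising correlation with storyline A's template at a B-to-A transition would be read as reinstatement of the A pattern.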
It is well known that formation of new episodic memories depends on hippocampus, but in real-life settings (e.g., conversation), hippocampal amnesics can utilize information from several minutes earlier. What neural systems outside hippocampus might support this minutes-long retention? In this study, subjects viewed an audiovisual movie continuously for 25 min; another group viewed the movie in 2 parts separated by a 1-day delay. Understanding Part 2 depended on retrieving information from Part 1, and thus hippocampus was required in the day-delay condition. But is hippocampus equally recruited to access the same information from minutes earlier? We show that accessing memories from a few minutes prior elicited less interaction between hippocampus and default mode network (DMN) cortical regions than accessing day-old memories of identical events, suggesting that recent information was available with less reliance on hippocampal retrieval. Moreover, the 2 groups evinced reliable but distinct DMN activity timecourses, reflecting differences in information carried in these regions when Part 1 was recent versus distant. The timecourses converged after 4 min, suggesting a time frame over which the continuous-viewing group may have relied less on hippocampal retrieval. We propose that cortical default mode regions can intrinsically retain real-life episodic information for several minutes.
Our daily lives revolve around sharing experiences and memories with others. When different people recount the same events, how similar are their underlying neural representations? In this study, participants viewed a fifty-minute audio-visual movie, then verbally described the events while undergoing functional MRI. These descriptions were completely unguided and highly detailed, lasting for up to forty minutes. As each person spoke, event-specific spatial patterns were reinstated (movie-vs.-recall correlation) in default network, medial temporal, and high-level visual areas; moreover, individual event patterns were highly discriminable and similar between people during recollection (recall-vs.-recall similarity), suggesting the existence of spatially organized memory representations. In posterior medial cortex, medial prefrontal cortex, and angular gyrus, activity patterns during recall were more similar between people than to patterns elicited by the movie, indicating systematic reshaping of percept into memory across individuals. These results reveal striking similarity in how neural activity underlying real-life memories is organized and transformed in the brains of different people as they speak spontaneously about past events.
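A minimal sketch of the two similarity measures named above (movie-vs.-recall reinstatement and recall-vs.-recall similarity), assuming per-event spatial patterns have already been extracted; all names are illustrative:

# Illustrative sketch of the two pattern-similarity measures.
import numpy as np

def pattern_corr(a, b):
    """Pearson correlation between two spatial patterns."""
    return np.corrcoef(a, b)[0, 1]

def movie_vs_recall(movie_patterns, recall_patterns):
    """Event-specific reinstatement: movie vs. recall of the same
    event, for (n_events, n_voxels) arrays from one subject."""
    return np.array([pattern_corr(m, r)
                     for m, r in zip(movie_patterns, recall_patterns)])

def recall_vs_recall(recall_a, recall_b):
    """Between-subject similarity of recall patterns, matched by event."""
    return np.array([pattern_corr(a, b)
                     for a, b in zip(recall_a, recall_b)])

Higher recall-vs.-recall than movie-vs.-recall similarity in a region would correspond to the systematic reshaping of percept into memory reported above.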
Choi, W.; Poli, F. M.; Li, M. H.; Baek, S. G.; Gorelenkova, M.; Ding, B. J.; Gong, X. Z.; Chan, A.; Duan, Y. M.; Hu, J. H.; Lian, H.; Lin, S. Y.; Liu, H. Q.; Qian, J. P.; Wallace, G.; Wang, Y. M.; Zang, Q.; Zhao, H. L.