Mondal, Shanka Subhra; Webb, Taylor; Cohen, Jonathan
Abstract:
A dataset of Raven’s Progressive Matrices (RPM)-like problems using realistically rendered 3D shapes, based on source code from CLEVR, a popular visual-question-answering dataset (Johnson, J., Hariharan, B., Van Der Maaten, L., Fei-Fei, L., Lawrence Zitnick, C., & Girshick, R. (2017). CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2901–2910)).
Pereira, Talmo D.; Aldarondo, Diego E.; Willmore, Lindsay; Kislin, Mikhail; Wang, Samuel S.-H.; Murthy, Mala; Shaevitz, Joshua W.
Abstract:
Recent work quantifying postural dynamics has attempted to define the repertoire of behaviors performed by an animal. However, a major drawback of these techniques has been their reliance on dimensionality reduction of images, which destroys information about which parts of the body are used in each behavior. To address this issue, we introduce a deep learning-based method for pose estimation, LEAP (LEAP Estimates Animal Pose). LEAP automatically predicts the positions of animal body parts using a deep convolutional neural network trained with as few as 10 frames of labeled data. This framework consists of a graphical interface for interactive labeling of body parts and software for training the network and fast prediction on new data (1 hr to train, 185 Hz predictions). We validate LEAP using videos of freely behaving fruit flies (Drosophila melanogaster) and track 32 distinct points on the body to fully describe the pose of the head, body, wings, and legs with an error rate of <3% of the animal's body length. We recapitulate a number of reported findings on insect gait dynamics and show LEAP's applicability as the first step in unsupervised behavioral classification. Finally, we extend the method to more challenging imaging situations (pairs of flies moving on a mesh-like background) and movies from freely moving mice (Mus musculus) where we track the full conformation of the head, body, and limbs.
Our daily lives revolve around sharing experiences and memories with others. When different people recount the same events, how similar are their underlying neural representations? In this study, participants viewed a fifty-minute audio-visual movie, then verbally described the events while undergoing functional MRI. These descriptions were completely unguided and highly detailed, lasting for up to forty minutes. As each person spoke, event-specific spatial patterns were reinstated (movie-vs.-recall correlation) in default network, medial temporal, and high-level visual areas; moreover, individual event patterns were highly discriminable and similar between people during recollection (recall-vs.-recall similarity), suggesting the existence of spatially organized memory representations. In posterior medial cortex, medial prefrontal cortex, and angular gyrus, activity patterns during recall were more similar between people than to patterns elicited by the movie, indicating systematic reshaping of percept into memory across individuals. These results reveal striking similarity in how neural activity underlying real-life memories is organized and transformed in the brains of different people as they speak spontaneously about past events.
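The movie-vs.-recall analysis described above compares the spatial activity pattern evoked by each event during viewing with the pattern evoked when that event is later described. A minimal sketch of this comparison, assuming event-averaged voxel patterns have already been extracted (the function name, array shapes, and z-scoring approach are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np

def pattern_reinstatement(movie_patterns, recall_patterns):
    """Movie-vs.-recall correlation: Pearson r between the spatial
    pattern of each event during movie viewing and during recall.

    movie_patterns, recall_patterns: (n_events, n_voxels) arrays of
    event-averaged activity. Returns an (n_events, n_events) matrix;
    the diagonal holds within-event reinstatement, and a diagonal that
    exceeds the off-diagonal entries indicates event-specific
    pattern reinstatement.
    """
    def zscore_rows(x):
        return (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)

    m = zscore_rows(movie_patterns)
    r = zscore_rows(recall_patterns)
    # Dot product of z-scored patterns divided by n_voxels = Pearson r.
    return m @ r.T / m.shape[1]
```

The same matrix computed between two different subjects' recall patterns gives the recall-vs.-recall similarity the abstract refers to.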
Does the default mode network (DMN) reconfigure to encode information about the changing environment? This question has proven difficult, because patterns of functional connectivity reflect a mixture of stimulus-induced neural processes, intrinsic neural processes and non-neuronal noise. Here we introduce inter-subject functional correlation (ISFC), which isolates stimulus-dependent inter-regional correlations between brains exposed to the same stimulus. During fMRI, we had subjects listen to a real-life auditory narrative and to temporally scrambled versions of the narrative. We used ISFC to isolate correlation patterns within the DMN that were locked to the processing of each narrative segment and specific to its meaning within the narrative context. The momentary configurations of DMN ISFC were highly replicable across groups. Moreover, DMN coupling strength predicted memory of narrative segments. Thus, ISFC opens new avenues for linking brain network dynamics to stimulus features and behaviour.
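The core idea of ISFC is to correlate regional timecourses *across* brains rather than within one brain: stimulus-locked fluctuations are shared between subjects hearing the same narrative, while intrinsic fluctuations and noise are not, so they average out. A minimal leave-one-out sketch of this idea (the function name, array layout, and averaging scheme are illustrative assumptions, not the published implementation):

```python
import numpy as np

def isfc(data):
    """Leave-one-out inter-subject functional correlation.

    data: (n_subjects, n_timepoints, n_regions) array of regional
    timecourses recorded while every subject receives the same stimulus.
    For each subject, correlate each of their regions with each region
    of the average of all *other* subjects, then average the resulting
    matrices across subjects. Only stimulus-driven, temporally aligned
    signal survives this cross-brain correlation.
    """
    n_subj = data.shape[0]
    mats = []
    for s in range(n_subj):
        others = np.delete(data, s, axis=0).mean(axis=0)  # (time, regions)
        # Column-wise Pearson correlation: subject s vs. group average.
        x = data[s] - data[s].mean(axis=0)
        y = others - others.mean(axis=0)
        c = (x.T @ y) / (
            np.sqrt((x ** 2).sum(axis=0))[:, None]
            * np.sqrt((y ** 2).sum(axis=0))[None, :]
        )
        mats.append(c)
    return np.mean(mats, axis=0)
```

Ordinary within-subject functional connectivity corresponds to correlating `data[s]` with itself; swapping in the other subjects' average is what isolates the stimulus-dependent component.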
Buck, Cara L.; Cohen, Jonathan D.; Field, Brent; Kahneman, Daniel; McClure, Samuel M.; Nystrom, Leigh E.
Abstract:
Studies of subjective well-being have conventionally relied upon self-report, which directs subjects’ attention to their emotional experiences. This method presumes that attention itself does not influence emotional processes, which could bias sampling. We tested whether attention influences experienced utility (the moment-by-moment experience of pleasure) by using functional magnetic resonance imaging (fMRI) to measure the activity of brain systems thought to represent hedonic value while manipulating attentional load. Subjects received appetitive or aversive solutions orally while alternately performing a task with low or high attentional load. Brain regions associated with hedonic processing, including the ventral striatum, showed a response to both juice and quinine, and this response decreased during the high-load task relative to the low-load task. Thus, attentional allocation may influence experienced utility by modulating (either directly or indirectly) the activity of brain mechanisms thought to represent hedonic value.