Pacheco, Diego A.; Thiberge, Stephan; Pnevmatikakis, Eftychios; Murthy, Mala
Abstract:
Sensory pathways are typically studied starting at receptor neurons and following postsynaptic neurons into the brain. However, this approach biases the analysis of activity toward the earliest layers of processing. Here, we present new methods for volumetric neural imaging with precise across-brain registration to characterize auditory activity throughout the entire central brain of Drosophila and make comparisons across trials, individuals, and sexes. We discover that auditory activity is present in most central brain regions and in neurons responsive to other modalities. Auditory responses are temporally diverse, but the majority of activity is tuned to courtship song features. Auditory responses are stereotyped across trials and animals in early mechanosensory regions, becoming more variable at higher layers of the putative pathway, and this variability is largely independent of spontaneous movements. This study highlights the power of using an unbiased, brain-wide approach for mapping the functional organization of sensory activity.
Pereira, Talmo D.; Aldarondo, Diego E.; Willmore, Lindsay; Kislin, Mikhail; Wang, Samuel S.-H.; Murthy, Mala; Shaevitz, Joshua W.
Abstract:
Recent work quantifying postural dynamics has attempted to define the repertoire of behaviors performed by an animal. However, a major drawback of these techniques has been their reliance on dimensionality reduction of images, which destroys information about which parts of the body are used in each behavior. To address this issue, we introduce a deep learning-based method for pose estimation, LEAP (LEAP Estimates Animal Pose). LEAP automatically predicts the positions of animal body parts using a deep convolutional neural network trained with as few as 10 frames of labeled data. This framework consists of a graphical interface for interactive labeling of body parts and software for training the network and fast prediction on new data (1 hr to train, 185 Hz predictions). We validate LEAP using videos of freely behaving fruit flies (Drosophila melanogaster) and track 32 distinct points on the body to fully describe the pose of the head, body, wings, and legs with an error rate of <3% of the animal's body length. We recapitulate a number of reported findings on insect gait dynamics and show LEAP's applicability as the first step in unsupervised behavioral classification. Finally, we extend the method to more challenging imaging situations (pairs of flies moving on a mesh-like background) and movies from freely moving mice (Mus musculus), where we track the full conformation of the head, body, and limbs.
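Networks like LEAP output one 2-D confidence map per tracked body part, and the part's coordinates are read off as the location of that map's peak. The abstract does not spell out this decoding step, so below is a minimal hedged sketch of it in numpy; the function name and array shapes are illustrative assumptions, and the network itself is omitted.

```python
import numpy as np

def peaks_from_confmaps(confmaps):
    """Decode body-part coordinates from per-part confidence maps.

    A minimal sketch of the final step of confidence-map-based pose
    estimation (the kind of output a network like LEAP produces); the
    CNN that generates the maps is not shown.

    confmaps: array of shape (n_parts, height, width), one 2-D
    confidence map per tracked body part. Each part's position is
    taken to be the (row, col) of its map's global maximum.
    """
    n_parts, h, w = confmaps.shape
    # argmax over the flattened map, then unravel to (row, col)
    flat = confmaps.reshape(n_parts, -1).argmax(axis=1)
    return np.stack([flat // w, flat % w], axis=1)  # (n_parts, 2)
```

In practice, sub-pixel refinement (e.g. a local centroid around the peak) is often layered on top of this argmax, but the hard maximum already yields keypoints accurate to one pixel.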
Does the default mode network (DMN) reconfigure to encode information about the changing environment? This question has proven difficult to answer because patterns of functional connectivity reflect a mixture of stimulus-induced neural processes, intrinsic neural processes and non-neuronal noise. Here we introduce inter-subject functional correlation (ISFC), which isolates stimulus-dependent inter-regional correlations between brains exposed to the same stimulus. During fMRI, we had subjects listen to a real-life auditory narrative and to temporally scrambled versions of the narrative. We used ISFC to isolate correlation patterns within the DMN that were locked to the processing of each narrative segment and specific to its meaning within the narrative context. The momentary configurations of DMN ISFC were highly replicable across groups. Moreover, DMN coupling strength predicted memory of narrative segments. Thus, ISFC opens new avenues for linking brain network dynamics to stimulus features and behaviour.
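The core of ISFC is to correlate each region's timecourse in one subject with every region's timecourse averaged across the *other* subjects: only stimulus-locked signal is shared across brains, so intrinsic fluctuations and noise average out. Below is a minimal numpy sketch of that leave-one-subject-out computation, assuming region-by-time data; array shapes and the function name are illustrative, not the authors' released code.

```python
import numpy as np

def isfc(data):
    """Inter-subject functional correlation (ISFC), a minimal sketch.

    data: array of shape (n_subjects, n_regions, n_timepoints).
    For each subject, correlate every region's timecourse with every
    region's timecourse averaged over the remaining subjects, then
    average the resulting matrices over subjects. Because only the
    stimulus-locked component of the signal is shared across brains,
    these correlations isolate stimulus-dependent coupling.
    """
    n_subj, n_reg, n_t = data.shape
    # z-score each region's timecourse so dot products become correlations
    z = (data - data.mean(axis=2, keepdims=True)) / data.std(axis=2, keepdims=True)
    total = np.zeros((n_reg, n_reg))
    for s in range(n_subj):
        # leave-one-out average over the other subjects, re-z-scored
        others = np.delete(z, s, axis=0).mean(axis=0)
        others = (others - others.mean(axis=1, keepdims=True)) / others.std(axis=1, keepdims=True)
        total += z[s] @ others.T / n_t
    return total / n_subj
```

The diagonal of the returned matrix is ordinary inter-subject correlation (each region with itself across brains); the off-diagonal entries are the stimulus-locked inter-regional couplings the abstract refers to.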
Extrapolation -- the ability to make inferences that go beyond the scope of one's experiences -- is a hallmark of human intelligence. By contrast, the generalization exhibited by contemporary neural network algorithms is largely limited to interpolation between data points in their training corpora. In this paper, we consider the challenge of learning representations that support extrapolation. We introduce a novel visual analogy benchmark that allows the graded evaluation of extrapolation as a function of distance from the convex domain defined by the training data. We also introduce a simple technique, context normalization, that encourages representations that emphasize the relations between objects. We find that this technique enables a significant improvement in the ability to extrapolate, considerably outperforming a number of competitive techniques.
Small changes in word choice can lead to dramatically different interpretations of narratives. How does the brain accumulate and integrate such local changes to construct unique neural representations for different stories? In this study we created two distinct narratives by changing only a few words in each sentence (e.g. “he” to “she” or “sobbing” to “laughing”) while preserving the grammatical structure across stories. We then measured changes in neural responses between the two stories. We found that the differences in neural responses between the two stories gradually increased along the hierarchy of processing timescales. For areas with short integration windows, such as early auditory cortex, the differences in neural responses between the two stories were relatively small. In contrast, in areas with the longest integration windows at the top of the hierarchy, such as the precuneus, temporal parietal junction, and medial frontal cortices, there were large differences in neural responses between stories. Furthermore, this gradual increase in neural difference between the stories was highly correlated with an area’s ability to integrate information over time. Amplification of neural differences did not occur when changes in words did not alter the interpretation of the story (e.g. “sobbing” to “crying”). Our results demonstrate how subtle differences in words are gradually accumulated and amplified along the cortical hierarchy as the brain constructs a narrative over time.