In our study, we compare the three-dimensional (3D) morphological characteristics of Earth's first reef-building animals (archaeocyath sponges) with those of modern photosynthetic corals. This repository contains the 3D image data products for both groups of animals. The archaeocyath images were produced through serial grinding and imaging with the Grinding, Imaging, and Reconstruction Instrument at Princeton University. The images in this repository are the downsampled data products used in our study; the full-resolution (>2 TB) image stacks are available from the author upon request. For the coral image data, the computed tomography (CT) images of all samples are included at full resolution. Also included are the manual and automated outline coordinates of the archaeocyath and coral branches, which can be used directly for morphological analysis.
Extrapolation -- the ability to make inferences that go beyond the scope of one's experiences -- is a hallmark of human intelligence. By contrast, the generalization exhibited by contemporary neural network algorithms is largely limited to interpolation between data points in their training corpora. In this paper, we consider the challenge of learning representations that support extrapolation. We introduce a novel visual analogy benchmark that allows the graded evaluation of extrapolation as a function of distance from the convex domain defined by the training data. We also introduce a simple technique, context normalization, that encourages representations emphasizing the relations between objects. We find that this technique yields a significant improvement in the ability to extrapolate, considerably outperforming a number of competing techniques.
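The core idea of context normalization can be sketched as z-scoring each feature dimension across the objects within a single task context, so that only the relative differences between objects (their relations) survive while absolute feature values are discarded. The snippet below is a minimal illustrative sketch under that assumption; the function name, the toy data, and the omission of any learnable scale or shift parameters are choices made here for illustration, not a faithful reproduction of the authors' implementation.

```python
import numpy as np

def context_norm(x, eps=1e-8):
    """Normalize each feature dimension (columns) across the objects
    in one context (rows): subtract the per-feature mean and divide by
    the per-feature standard deviation. Relative ordering and spacing
    between objects are preserved; absolute magnitudes are removed."""
    mu = x.mean(axis=0, keepdims=True)
    sigma = x.std(axis=0, keepdims=True)
    return (x - mu) / (sigma + eps)

# A context of 3 objects with 2 features. The second feature is just
# the first scaled by 10, so after normalization both columns encode
# the same relational pattern.
ctx = np.array([[1.0, 10.0],
                [2.0, 20.0],
                [3.0, 30.0]])
print(context_norm(ctx))
```

Because normalization is applied within the context rather than over a whole dataset, two contexts that instantiate the same relational pattern at very different absolute scales map to similar normalized representations, which is the property that plausibly aids extrapolation beyond the training range.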