The Shakespeare and Company Project makes three datasets available to download in CSV and JSON formats. The datasets provide information about lending library members; the books that circulated in the lending library; and lending library events, including borrows, purchases, memberships, and renewals. The datasets may be used individually or in combination; site URLs serve as consistent identifiers across all three. The DOIs for each dataset are as follows: Members (https://doi.org/10.34770/ht30-g395); Books (https://doi.org/10.34770/g467-3w07); Events (https://doi.org/10.34770/2r93-0t85).
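Because site URLs are shared identifiers across the three datasets, an event record can be linked back to its member and book records with a simple lookup. The sketch below illustrates the idea in plain Python; the field names and example URLs are assumptions for illustration, so check the actual CSV/JSON headers in the downloads before adapting it.

```python
# Illustrative records only -- field names and URLs are assumed, not taken
# from the actual dataset schemas. The real files would be loaded with the
# csv or json standard-library modules.
members = {
    "https://shakespeareandco.princeton.edu/members/example/": {"name": "Example Member"},
}
books = {
    "https://shakespeareandco.princeton.edu/books/example/": {"title": "Example Title"},
}
events = [
    {
        "member_url": "https://shakespeareandco.princeton.edu/members/example/",
        "book_url": "https://shakespeareandco.princeton.edu/books/example/",
        "event_type": "Borrow",
    },
]

# Link each event to its member and book records through the shared URLs.
for ev in events:
    ev["member_name"] = members.get(ev["member_url"], {}).get("name")
    ev["book_title"] = books.get(ev["book_url"], {}).get("title")
```

The same join could be done with a dataframe library; the point is only that the URL columns act as foreign keys across all three files.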
The Molino suite contains 75,000 galaxy mock catalogs designed to quantify the information content of any cosmological observable for a redshift-space galaxy sample. They are constructed from the Quijote N-body simulations (Villaescusa-Navarro et al. 2020) using the standard Zheng et al. (2007) Halo Occupation Distribution (HOD) model. The fiducial HOD parameters are based on the SDSS high-luminosity samples. The suite contains 15,000 mocks at the fiducial cosmology and HOD parameters for covariance matrix estimation. It also includes (500 N-body realizations) x (5 HOD realizations) = 2,500 mocks at each of 24 other parameter values to estimate the derivatives of the observable with respect to six cosmological parameters (Omega_m, Omega_b, h, n_s, sigma_8, and M_nu) and five HOD parameters (logMmin, sigma_logM, logM0, alpha, and logM1). Using the covariance matrix and derivatives calculated from Molino, one can derive Fisher matrix forecasts on the cosmological parameters marginalized over HOD parameters.
Extrapolation -- the ability to make inferences that go beyond the scope of one's experiences -- is a hallmark of human intelligence. By contrast, the generalization exhibited by contemporary neural network algorithms is largely limited to interpolation between data points in their training corpora. In this paper, we consider the challenge of learning representations that support extrapolation. We introduce a novel visual analogy benchmark that allows the graded evaluation of extrapolation as a function of distance from the convex domain defined by the training data. We also introduce a simple technique, context normalization, that encourages representations that emphasize the relations between objects. We find that this technique enables a significant improvement in the ability to extrapolate, considerably outperforming a number of competitive techniques.
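The core idea of context normalization can be stated compactly: each feature is standardized over the objects within a single task context, so absolute feature values are discarded and only the relations between objects survive. The sketch below is a minimal stand-alone illustration, not the paper's implementation; in a trained network, `gamma` and `beta` would be learned parameters.

```python
import numpy as np

def context_norm(z, gamma=1.0, beta=0.0, eps=1e-8):
    """Normalize each feature across the objects in one task context.

    z: array of shape (n_objects, n_features), the embeddings of the
    objects making up a single analogy problem. Statistics are taken over
    the object axis, so only inter-object relations remain; gamma and beta
    stand in for learned scale/shift parameters.
    """
    mu = z.mean(axis=0, keepdims=True)
    sigma = z.std(axis=0, keepdims=True)
    return gamma * (z - mu) / (sigma + eps) + beta

# Two contexts with identical relational structure, one translated far
# outside the range of the other: after normalization they coincide,
# which is what supports extrapolation beyond the training domain.
context = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
shifted = context + 100.0
```

The shift-invariance is exact: translating every object in a context by the same offset leaves the normalized representation unchanged.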
The Magnetospheric Multiscale (MMS) mission has given us unprecedented access to high-cadence particle and field data of magnetic reconnection at Earth's magnetopause. MMS first passed very near an X-line on 16 October 2015, the Burch event, and has since observed multiple X-line crossings. Subsequent 3D particle-in-cell (PIC) modeling of the Burch event, and comparisons between simulations and observations, have revealed a host of novel physical insights concerning magnetic reconnection, turbulence-induced particle mixing, and secondary instabilities. In this study, we employ the Gkeyll simulation framework to study the Burch event with different classes of extended, multi-fluid magnetohydrodynamics (MHD), including models that incorporate important kinetic effects, such as the electron pressure tensor, with physics-based closure relations designed to capture linear Landau damping. Such fluid modeling approaches are able to capture different levels of kinetic physics in global simulations and are generally less costly than fully kinetic PIC. We focus on the additional physics one can capture with increasing levels of fluid closure refinement via comparison with MMS data and existing PIC simulations. In particular, we find that the ten-moment model well captures the agyrotropic structure of the pressure tensor in the vicinity of the X-line and the magnitude of anisotropic electron heating observed in MMS and PIC simulations. However, the ten-moment model has difficulty resolving the lower hybrid drift instability, which has been observed to play a fundamental role in heating and mixing electrons in the current layer.
Yang, Yuan; Pan, Ming; Beck, Hylke; Fisher, Colby; Beighley, R. Edward; Kao, Shih-Chieh; Hong, Yang; Wood, Eric
Conventional basin-by-basin approaches to calibrating hydrologic models are limited to gauged basins and typically result in spatially discontinuous parameter fields. Moreover, the consequent low calibration density in space falls far short of the needs of present-day applications such as high-resolution river hydrodynamic modeling. In this study we calibrated three key parameters of the Variable Infiltration Capacity (VIC) model at every 1/8° grid cell using machine learning-based maps of four streamflow characteristics for the conterminous United States (CONUS), with a total of 52,663 grid cells. This new calibration approach, as an alternative to parameter regionalization, applies to ungauged regions as well. A key difference here is that we regionalized physical variables (streamflow characteristics) instead of model parameters, whose behavior is often less well understood. The resulting parameter fields no longer presented any spatial discontinuities, and the patterns corresponded well with climate characteristics such as aridity and runoff ratio. The calibrated parameters were evaluated against observed streamflow from 704/648 (calibration/validation period) small-to-medium-sized catchments used to derive the streamflow characteristics and 3941/3809 (calibration/validation period) small-to-medium-sized catchments not used to derive them, as well as five large basins. Comparisons indicated marked improvements in bias and Nash-Sutcliffe efficiency. Model performance was still poor in arid and semiarid regions, mostly due to both model structural and forcing deficiencies. Although the performance gain was limited by the relatively small number of parameters to calibrate, the study and results serve as a proof of concept for a promising new approach to fine-scale hydrologic model calibration.
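The per-grid-cell calibration described above amounts to choosing, for each cell, the parameter set whose simulated streamflow characteristics best match the machine-learning-regionalized targets. The sketch below illustrates this with a toy stand-in for the model and a simple grid search; the real workflow would run VIC, compute the four streamflow characteristics, and may use a different optimizer, so every function and parameter name here is an assumption for illustration.

```python
import numpy as np

def toy_model(params, forcing):
    """Toy stand-in for a VIC run at one grid cell (NOT the VIC model).
    Returns two streamflow characteristics: mean flow and a high-flow
    quantile computed from an illustrative runoff response."""
    infilt, threshold = params
    q = np.maximum(forcing - threshold, 0.0) ** (1.0 + infilt)
    return np.array([q.mean(), np.quantile(q, 0.9)])

def calibrate_cell(target_chars, forcing, grid):
    """Pick the parameter set whose simulated characteristics best match
    the regionalized targets for this cell (simple grid search over
    candidate parameter sets)."""
    best, best_err = None, np.inf
    for p in grid:
        err = np.sum((toy_model(p, forcing) - target_chars) ** 2)
        if err < best_err:
            best, best_err = p, err
    return best

# Synthetic check: targets generated from a known parameter set should be
# recovered when that set is among the candidates.
forcing = np.random.default_rng(1).uniform(0.0, 10.0, 1000)
target = toy_model((0.3, 2.0), forcing)
grid = [(a, b) for a in (0.1, 0.3, 0.5) for b in (1.0, 2.0, 3.0)]
best = calibrate_cell(target, forcing, grid)
```

Repeating this independently at each of the 52,663 cells yields spatially continuous parameter fields without ever requiring a gauge at the cell being calibrated.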