This dataset includes information about approximately 6,000 books and other items, with bibliographic data as well as summary information about when each item circulated in the Shakespeare and Company lending library and the number of times it was borrowed or purchased.
The events dataset includes information about approximately 33,700 lending library events, including membership activities such as subscriptions, renewals, and reimbursements, and book-related activities such as borrowing and purchasing. For events related to lending library cards that are available as digital surrogates, IIIF links are provided.
The Shakespeare and Company Project: Lending Library Members dataset includes information about approximately 5,700 members of Sylvia Beach's Shakespeare and Company lending library.
The Shakespeare and Company Project makes three datasets available to download in CSV and JSON formats. The datasets provide information about lending library members; the books that circulated in the lending library; and lending library events, including borrows, purchases, memberships, and renewals. The datasets may be used individually or in combination; site URLs are consistent identifiers across all three. The DOIs for each dataset are as follows: Members (https://doi.org/10.34770/ht30-g395); Books (https://doi.org/10.34770/g467-3w07); Events (https://doi.org/10.34770/2r93-0t85).
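Because site URLs serve as consistent identifiers across the three datasets, records can be joined directly on those URL columns. The sketch below illustrates the idea with miniature in-memory stand-ins for the Members and Events CSVs; the column names ("uri", "member_uris", "event_type") and the example row are hypothetical and may not match the actual file schemas.

```python
import pandas as pd

# Hypothetical miniature versions of two of the three CSVs;
# real column names and values may differ.
members = pd.DataFrame({
    "uri": ["https://shakespeareandco.princeton.edu/members/hemingway/"],
    "name": ["Ernest Hemingway"],
})
events = pd.DataFrame({
    "member_uris": ["https://shakespeareandco.princeton.edu/members/hemingway/"] * 2,
    "event_type": ["Subscription", "Borrow"],
})

# Site URLs are the shared identifiers, so events join to members directly.
merged = events.merge(members, left_on="member_uris", right_on="uri", how="left")
print(len(merged))  # 2
```

The same pattern extends to the Books dataset: any event that references a book carries that book's site URL, so a left join recovers the full bibliographic record for each borrow or purchase.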
Particle distribution functions evolving under the Lorentz operator can be simulated with the Langevin equation for pitch angle scattering. This approach is frequently used, among other applications, in particle-based Monte Carlo simulations of plasma collisions. However, most numerical treatments do not guarantee energy conservation, which may lead to unphysical artifacts such as numerical heating and spectral distortions. We present a novel structure-preserving numerical algorithm for the Langevin equation for pitch angle scattering. Similar to the well-known Boris algorithm, the proposed numerical scheme takes advantage of the structure-preserving properties of the Cayley transform when calculating the velocity-space rotations. The resulting algorithm is explicitly solvable, while preserving the norm of velocities down to machine precision. We demonstrate that the method has the same order of numerical convergence as the traditional stochastic Euler-Maruyama method.
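The key property the abstract describes can be sketched compactly: for an antisymmetric matrix A, the Cayley transform R = (I - A/2)^{-1}(I + A/2) is exactly orthogonal, so applying it to a velocity vector preserves the speed (and hence kinetic energy) to machine precision. The snippet below is only an illustration of that norm-preserving rotation, not the paper's full scheme; the collision frequency `nu` and time step `dt` are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def cayley_rotation(omega):
    """Rotation matrix from a rotation-increment vector via the Cayley transform.

    A is antisymmetric, so R = (I - A/2)^{-1} (I + A/2) is exactly orthogonal;
    this is what preserves |v| down to machine precision.
    """
    A = np.array([[0.0,      -omega[2],  omega[1]],
                  [omega[2],  0.0,      -omega[0]],
                  [-omega[1], omega[0],  0.0]])
    I = np.eye(3)
    return np.linalg.solve(I - 0.5 * A, I + 0.5 * A)

# One scattering step: a random rotation increment whose variance is set by a
# (hypothetical) collision frequency nu and time step dt, as in an
# Euler-Maruyama-style discretization of the Langevin equation.
nu, dt = 1.0, 1e-2
v = np.array([1.0, 0.5, -0.2])
omega = np.sqrt(nu * dt) * rng.standard_normal(3)
v_new = cayley_rotation(omega) @ v

print(abs(np.linalg.norm(v_new) - np.linalg.norm(v)))  # ~0 (machine precision)
```

A naive Euler-Maruyama update of the velocity components, by contrast, changes |v| at each step by O(dt), which is the source of the numerical heating the abstract mentions.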
Bergstedt, K.; Ji, H.; Jara-Almonte, J.; Yoo, J.; Ergun, R. E.; Chen, L.-J.
Abstract:
We present the first statistical study of magnetic structures and associated energy dissipation observed during a single period of turbulent magnetic reconnection, using in situ measurements from the Magnetospheric Multiscale mission in the Earth's magnetotail on 26 July 2017. The structures are selected by identifying a bipolar signature in the magnetic field and categorized as plasmoids or current sheets via an automated algorithm which examines current density and plasma flow. The size of the plasmoids forms a decaying exponential distribution ranging from subelectron up to ion scales. The presence of a substantial number of current sheets is consistent with a physical picture of dynamic production and merging of plasmoids during turbulent reconnection. The magnetic structures are locations of significant energy dissipation via the electric field parallel to the local magnetic field, while dissipation via the perpendicular electric field dominates outside of the structures. Significant energy also returns from particles to fields.
The Molino suite contains 75,000 galaxy mock catalogs designed to quantify the information content of any cosmological observable for a redshift-space galaxy sample. They are constructed from the Quijote N-body simulations (Villaescusa-Navarro et al. 2020) using the standard Zheng et al. (2007) Halo Occupation Distribution (HOD) model. The fiducial HOD parameters are based on the SDSS high luminosity samples. The suite contains 15,000 mocks at the fiducial cosmology and HOD parameters for covariance matrix estimation. It also includes (500 N-body realizations) x (5 HOD realizations) = 2,500 mocks at 24 other parameter values to estimate the derivative of the observable with respect to six cosmological parameters (Omega_m, Omega_b, h, n_s, sigma_8, and M_nu) and five HOD parameters (logMmin, sigma_logM, log M_0, alpha, and log M_1). Using the covariance matrix and derivatives calculated from Molino, one can derive Fisher matrix forecasts on the cosmological parameters marginalized over HOD parameters.
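The Fisher forecast the abstract describes combines the two ingredients the suite provides: a covariance matrix C estimated from the fiducial mocks, and derivatives of the observable with respect to each parameter from the off-fiducial mocks. A minimal sketch with toy stand-in arrays (the bin count, parameter count, and values below are illustrative, not taken from Molino):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: in practice C would be estimated from the 15,000 fiducial
# mocks and dO from finite differences across the off-fiducial parameter points.
n_bins, n_params = 10, 3
C = 0.1 * np.eye(n_bins)                      # observable covariance
dO = rng.standard_normal((n_params, n_bins))  # dO[i] = d(observable)/d(theta_i)

Cinv = np.linalg.inv(C)
F = dO @ Cinv @ dO.T                          # Fisher matrix: F_ij = dO_i^T C^{-1} dO_j
errors = np.sqrt(np.diag(np.linalg.inv(F)))   # marginalized 1-sigma forecasts

print(F.shape)  # (3, 3)
```

Marginalizing over the HOD parameters corresponds to inverting the full (cosmology + HOD) Fisher matrix before reading off the cosmological diagonal entries, exactly as in the last line above. Note that a covariance estimated from a finite number of mocks generally needs a de-biasing correction before inversion; that step is omitted here.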
Extrapolation -- the ability to make inferences that go beyond the scope of one's experiences -- is a hallmark of human intelligence. By contrast, the generalization exhibited by contemporary neural network algorithms is largely limited to interpolation between data points in their training corpora. In this paper, we consider the challenge of learning representations that support extrapolation. We introduce a novel visual analogy benchmark that allows the graded evaluation of extrapolation as a function of distance from the convex domain defined by the training data. We also introduce a simple technique, context normalization, that encourages representations that emphasize the relations between objects. We find that this technique enables a significant improvement in the ability to extrapolate, considerably outperforming a number of competitive techniques.
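The intuition behind context normalization is that standardizing features across the items of a single analogy context removes absolute magnitudes while keeping the relations between objects. The sketch below illustrates that idea in its simplest form; it is a schematic z-scoring over the context axis, not the paper's exact layer, and the arrays are invented for illustration.

```python
import numpy as np

def context_normalize(z, eps=1e-8):
    """Normalize each feature across the items of one context (axis 0).

    Absolute values are removed; only the relations between the items
    survive, which is what supports extrapolation beyond the training range.
    """
    mu = z.mean(axis=0, keepdims=True)
    sigma = z.std(axis=0, keepdims=True)
    return (z - mu) / (sigma + eps)

# Two contexts whose items differ only by a constant offset map to the same
# normalized representation, even when the offset pushes the second context
# far outside the range spanned by the first.
a = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = a + 100.0
print(np.allclose(context_normalize(a), context_normalize(b)))  # True
```

In this toy case a downstream model that only ever saw values near `a` would receive identical inputs for `b`, which is the mechanism by which the technique aids extrapolation.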
F. M. Laggner, A. Diallo, B. P. LeBlanc, R. Rozenblat, G. Tchilinguirian, E. Kolemen, the NSTX-U team
Abstract:
A detailed description of a prototype setup for real-time (rt) Thomson scattering (TS) analysis is presented and implemented in the multi-point Thomson scattering (MPTS) diagnostic system at the National Spherical Torus Experiment Upgrade (NSTX-U). The data acquisition hardware was upgraded with rt-capable electronics (rt analog-to-digital converters (ADCs) and a rt server) that allow for fast digitization of the laser pulse signal of eight radial MPTS channels. In addition, a new TS spectrum analysis software for a rapid calculation of electron temperature (Te) and electron density (ne) was developed. Testing of the rt hardware and data analysis software was successfully completed and benchmarked against the standard, post-shot evaluation. Timing tests were performed showing that the end-to-end processing time was reproducibly below 17 ms for a duration of at least 5 s, meeting the 60 Hz deadline set by the laser pulse repetition rate over the length of a NSTX-U discharge. The presented rt framework is designed to be scalable in system size, i.e. additional radial channels can be incorporated solely by adding more rt-capable hardware. Furthermore, it is scalable in its operation duration and was continuously run for up to 30 min, making it an attractive solution for machines with long discharge duration such as advanced, non-inductive tokamaks or stellarators.
The Magnetospheric Multiscale (MMS) mission has given us unprecedented access to high cadence particle and field data of magnetic reconnection at Earth's magnetopause. MMS first passed very near an X-line on 16 October 2015, the Burch event, and has since observed multiple X-line crossings. Subsequent 3D particle-in-cell (PIC) modeling of, and comparison with, the Burch event have revealed a host of novel physical insights concerning magnetic reconnection, turbulence-induced particle mixing, and secondary instabilities. In this study, we employ the Gkeyll simulation framework to study the Burch event with different classes of extended, multi-fluid magnetohydrodynamics (MHD), including models that incorporate important kinetic effects, such as the electron pressure tensor, with physics-based closure relations designed to capture linear Landau damping. Such fluid modeling approaches are able to capture different levels of kinetic physics in global simulations and are generally less costly than fully kinetic PIC. We focus on the additional physics one can capture with increasing levels of fluid closure refinement via comparison with MMS data and existing PIC simulations. In particular, we find that the ten-moment model well captures the agyrotropic structure of the pressure tensor in the vicinity of the X-line and the magnitude of anisotropic electron heating observed in MMS and PIC simulations. However, the ten-moment model has difficulty resolving the lower hybrid drift instability, which has been observed to play a fundamental role in heating and mixing electrons in the current layer.
Employment of non-inductive plasma start-up techniques would considerably simplify the design of a spherical tokamak fusion reactor. Transient coaxial helicity injection (CHI) is a promising method, expected to scale favorably to next-step reactors. However, the implications of reactor-relevant parameters for the initial breakdown phase of CHI have not yet been considered. Here, we evaluate CHI breakdown in reactor-like configurations using an extension of the Townsend avalanche theory. We find that a CHI electrode concept in which the outer vessel wall is biased to achieve breakdown, while previously successful on NSTX and HIT-II, may exhibit a severe weakness when scaled up to a reactor. On the other hand, concepts which employ localized biasing electrodes, such as those used in QUEST, would avoid this issue. Assuming that breakdown can be successfully attained, we then apply scaling relationships to predict plasma parameters attainable in the transient CHI discharge. Assuming the use of 1 Wb of injector flux, we find that plasma currents of 1 MA should be achievable. Furthermore, these plasmas are expected to Ohmically self-heat with more than 1 MW of power as they decay, facilitating efficient hand-off to steady-state heating sources. These optimistic scalings are supported by TSC simulations.
Yang, Yuan; Pan, Ming; Beck, Hylke; Fisher, Colby; Beighley, R. Edward; Kao, Shih-Chieh; Hong, Yang; Wood, Eric
Abstract:
Conventional basin-by-basin approaches to calibrate hydrologic models are limited to gauged basins and typically result in spatially discontinuous parameter fields. Moreover, the consequent low calibration density in space falls far short of the needs of present-day applications like high resolution river hydrodynamic modeling. In this study we calibrated three key parameters of the Variable Infiltration Capacity (VIC) model at every 1/8° grid-cell using machine learning-based maps of four streamflow characteristics for the conterminous United States (CONUS), with a total of 52,663 grid-cells. This new calibration approach, as an alternative to parameter regionalization, applies to ungauged regions as well. A key difference here is that we regionalize physical variables (streamflow characteristics) instead of model parameters, whose behavior may often be less well understood. The resulting parameter fields no longer presented any spatial discontinuities, and the patterns corresponded well with climate characteristics, such as aridity and runoff ratio. The calibrated parameters were evaluated against observed streamflow from 704/648 (calibration/validation period) small-to-medium-sized catchments used to derive the streamflow characteristics, 3941/3809 (calibration/validation period) small-to-medium-sized catchments not used to derive the streamflow characteristics, as well as five large basins. Comparisons indicated marked improvements in bias and Nash-Sutcliffe efficiency. Model performance was still poor in arid and semiarid regions, mostly due to both model structural and forcing deficiencies. Although the performance gain was limited by the relatively small number of parameters to calibrate, the study and results here serve as a proof-of-concept for a promising new approach to fine-scale hydrologic model calibration.
Explosive volcanic eruptions have large climate impacts, and can serve as observable tests of the climatic response to radiative forcing. Using a high resolution climate model, we contrast the climate responses to Pinatubo, with symmetric forcing, and those to Santa Maria and Agung, which had meridionally asymmetric forcing. Although Pinatubo had larger global-mean forcing, asymmetric forcing strongly shifts the latitude of tropical rainfall features, leading to larger local changes in precipitation and tropical cyclone (TC) activity. For example, North Atlantic TC activity is enhanced by the Southern Hemisphere forcing of Agung and reduced by the Northern Hemisphere forcing of Santa Maria, but changes little in response to the Pinatubo forcing. Moreover, the transient climate sensitivity estimated from the response to Santa Maria is 20% larger than that from Pinatubo or Agung. This spread in climatic impacts of volcanoes needs to be considered when evaluating the role of volcanoes in global and regional climate, and serves to contextualize the well-observed response to Pinatubo.
Dust and starlight have been modeled for the KINGFISH project galaxies. For each pixel in each galaxy, we estimate: (1) dust surface density; (2) q_PAH, the dust mass fraction in PAHs; (3) distribution of starlight intensities heating the dust; (4) luminosity emitted by the dust; and (5) dust luminosity from regions with high starlight intensity. The modeling is as described in the paper "Modeling Dust and Starlight in Galaxies Observed by Spitzer and Herschel: The KINGFISH Sample", by G. Aniano, B.T. Draine, L.K. Hunt, K. Sandstrom, D. Calzetti, R.C. Kennicutt, D.A. Dale, and 26 other authors, accepted for publication in The Astrophysical Journal.
Force-driven parallel shear flow in a spatially periodic domain is shown to be linearly unstable with respect to both the Reynolds number and the domain aspect ratio. This finding is confirmed by computer simulations, and a simple expression is derived to determine stable flow conditions. Periodic extensions of Couette and Poiseuille flows are unstable at Reynolds numbers two orders of magnitude smaller than their aperiodic equivalents because the periodic boundaries impose fundamentally different constraints. This instability has important implications for designing computational models of nonlinear dynamic processes with periodicity.
The data are 4554 light curves derived from images taken of the globular cluster M4 by the Kepler space telescope during the K2 portion of its mission, specifically during Campaign 2 of that mission, which occurred in 2014. A total of 3856 images were taken over approximately three months at a cadence of approximately half an hour. The purpose of these observations was to find stars and other objects that vary in brightness over time, that is, variable stars. Also included is a table with associated information for each of the 4554 objects and their light curves.