Whyjay Zheng and 9 more

Supplemental material (SM; also known as supplementary information) accompanies its associated research article and provides study details such as metadata, additional figures and text, multimedia, and code. Well-designed SM helps readers fully understand the underlying scientific analysis, reproduce the work, and even reuse the workflows for exploratory ideas. Thus, the FAIR principles (Findable, Accessible, Interoperable, and Reusable), originally designed as data-sharing guidelines, also capture these core qualities of SM.

We evaluate different SM-preparation practices commonly found in Earth Science journal articles. These practices are classified into five tiers based on the FAIR principles and the narrative structure. We show that Jupyter Book-based SM belongs to the top tier and outperforms the other practices, despite being less popular than the other SM-preparation practices as of 2022.

We identify the advantages of Jupyter Book-based SM as follows. Jupyter Book uses a narrative structure to combine the different elements of SM into a single scholarly object, increasing readability. Jupyter Book's direct support for HTML publishing allows users to host the SM on the web using services such as GitHub Pages, improving web indexing ranks and giving both the research article and the SM higher exposure. The entire SM can also be archived in a data repository and receive a Digital Object Identifier (DOI) that can be used for citations. In addition, Jupyter Book-based SM lowers the barrier to reproducing and reusing the work: if the content is available on a code-hosting platform (e.g., GitHub), readers can open it in an interactive cloud computing service (e.g., MyBinder.org) with all data and code already imported.

These features embody the core values of SM from the perspective of open science. We encourage researchers to adopt these good practices and urge journal publishers to be open to receiving such supplements for maximum effectiveness.
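As a rough sketch of the workflow this abstract describes, the following Python snippet assembles a minimal Jupyter Book from placeholder pages and builds it to static HTML ready for web hosting and archiving. The `sm_book/` folder, the page names, and the repository URL are hypothetical; the `jupyter-book build` command and the `_config.yml`/`_toc.yml` keys are standard Jupyter Book usage, but the snippet is illustrative rather than the authors' actual setup.

```python
import pathlib
import subprocess
import textwrap

# Hypothetical folder holding the SM pages; all names below are placeholders.
book = pathlib.Path("sm_book")
book.mkdir(exist_ok=True)

# Minimal placeholder pages (in a real SM these would be notebooks, figures, text, etc.).
(book / "intro.md").write_text("# Supplemental material\n\nOverview of the supplement.\n")
(book / "supplementary_figures.md").write_text("# Supplementary figures\n\nPlaceholder page.\n")
(book / "analysis.md").write_text("# Analysis\n\nPlaceholder page (normally a Jupyter notebook).\n")

# _config.yml: link the source repository and add a Binder launch button so readers
# can re-run the content interactively on MyBinder.org.
(book / "_config.yml").write_text(textwrap.dedent("""\
    title: Supplemental material (illustrative example)
    author: The study authors
    repository:
      url: https://github.com/your-org/your-sm-repo  # placeholder URL
    launch_buttons:
      binderhub_url: https://mybinder.org
    """))

# _toc.yml: the narrative structure that stitches the pieces into one scholarly object.
(book / "_toc.yml").write_text(textwrap.dedent("""\
    format: jb-book
    root: intro
    chapters:
    - file: supplementary_figures
    - file: analysis
    """))

# Build static HTML; the output lands in sm_book/_build/html and can be web hosted.
subprocess.run(["jupyter-book", "build", str(book)], check=True)
```

The resulting `sm_book/_build/html` folder can be published with GitHub Pages (for example via the `ghp-import` tool), the source folder can be deposited in an archive such as Zenodo to obtain a DOI, and adding an environment file (e.g., `requirements.txt`) to the repository lets MyBinder.org recreate the computing environment for interactive reuse.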

Facundo Sapienza and 5 more

Sampling strategies used in paleomagnetic studies play a crucial role in dictating the accuracy of our estimates of properties of the ancient geomagnetic field. However, there has been little quantitative analysis of optimal paleomagnetic sampling strategies, and the community has instead defaulted to traditional practices that vary between laboratories. In this paper, we quantitatively evaluate the accuracy of alternative paleomagnetic sampling strategies through numerical experiments and an associated analytical framework. Our findings demonstrate a strong correspondence between the accuracy of an estimated paleopole position and the number of sites, or independent readings of the time-varying paleomagnetic field, whereas larger numbers of in-site samples have a dwindling effect. This remains true even when a large proportion of the sample directions are spurious. This approach can be readily achieved in sedimentary sequences by distributing samples stratigraphically and considering each sample as an individual reading. However, where the number of potential independent sites is inherently limited, the collection of additional in-site samples can improve the accuracy of the paleopole estimate (although with diminishing returns as the number of samples per site increases). Where an estimate of the magnitude of paleosecular variation is sought, multiple in-site samples should be taken, but the optimal number depends on the expected fraction of outliers. We provide both analytical formulas and a series of interactive Jupyter notebooks that allow optimal sampling strategies to be derived from user-informed expectations.
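As a rough illustration of the kind of numerical experiment this abstract describes (a sketch under assumed parameters, not the authors' actual framework or their analytical formulas), the Python snippet below splits a fixed budget of 60 specimens between sites and in-site samples, simulates two-level Fisher-distributed directions, and reports the median angular error of the recovered mean direction. The concentration parameters `kappa_between` and `kappa_within`, the specimen budget, and the trial count are arbitrary assumptions.

```python
import numpy as np

def sample_fisher(kappa, n, rng):
    """Draw n unit vectors from a Fisher distribution centred on the +z axis."""
    u = rng.random(n)
    # Inverse-CDF sampling of w = cos(theta) for the density proportional to exp(kappa * w)
    w = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
    phi = rng.random(n) * 2.0 * np.pi
    s = np.sqrt(np.clip(1.0 - w**2, 0.0, None))
    return np.column_stack([s * np.cos(phi), s * np.sin(phi), w])

def rotate_z_to(vecs, target):
    """Rotate vectors so that the +z axis maps onto the unit vector `target`."""
    z = np.array([0.0, 0.0, 1.0])
    if np.allclose(target, z):
        return vecs
    e1 = np.cross(z, target)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(target, e1)
    basis = np.column_stack([e1, e2, target])  # columns are the images of x, y, z
    return vecs @ basis.T

def mean_direction_error(n_sites, n_per_site, kappa_between, kappa_within, rng):
    """Angular error (degrees) of the estimated mean direction for one simulated study."""
    true_dir = np.array([0.0, 0.0, 1.0])
    site_dirs = sample_fisher(kappa_between, n_sites, rng)       # secular-variation scatter
    site_means = []
    for site_dir in site_dirs:
        samples = sample_fisher(kappa_within, n_per_site, rng)   # within-site scatter
        samples = rotate_z_to(samples, site_dir)
        m = samples.mean(axis=0)
        site_means.append(m / np.linalg.norm(m))
    overall = np.mean(site_means, axis=0)
    overall /= np.linalg.norm(overall)
    return np.degrees(np.arccos(np.clip(overall @ true_dir, -1.0, 1.0)))

rng = np.random.default_rng(42)
# Fixed budget of 60 specimens split differently between sites and in-site samples.
for n_sites, n_per_site in [(60, 1), (20, 3), (10, 6), (5, 12)]:
    errs = [mean_direction_error(n_sites, n_per_site, 20.0, 50.0, rng) for _ in range(500)]
    print(f"{n_sites:3d} sites x {n_per_site:2d} samples: median error = {np.median(errs):.2f} deg")
```

Under these assumed concentration parameters, the error is typically smallest when the budget is spread across many sites, consistent with the abstract's point that accuracy tracks the number of independent readings while additional in-site samples give diminishing returns.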