AUTHOREA

Preprints

Explore 39,046 preprints on the Authorea Preprint Repository

A preprint on Authorea can be a complete scientific manuscript submitted to a journal, an essay, a whitepaper, or a blog post. Preprints on Authorea can contain datasets, code, figures, interactive visualizations and computational notebooks.
Read more about preprints.

Public-Friendly Open Science
Matteo Cantiello

February 21, 2017
In the 21st century science is growing more technical and complex, as we gaze further and further while standing on the shoulders of many generations of giants. The public often has a hard time understanding research and its relevance to society. One of the reasons for this is that scientists do not spend enough time communicating their findings outside their own scientific community. There are exceptions, but THE RULE IS THAT SCIENTISTS WRITE CONTENT FOR SCIENTISTS. Academia is often perceived as an ivory tower, and when new findings are shared with the outside world, this is done not by scientists but by the media or even the political class. The problem is that these external agents do not have the necessary background to digest this knowledge and communicate it properly to the rest of society. They often misunderstand, over-hype and in some cases even distort the results and views of the scientific community. IT’S IRONIC AND SOMEWHAT FRIGHTENING THAT THE DISCOVERIES AND RECOMMENDATIONS FOR WHICH SOCIETY INVESTS SUBSTANTIAL ECONOMIC AND HUMAN CAPITAL ARE NOT DIRECTLY DISSEMINATED BY THE PEOPLE WHO REALLY UNDERSTAND THEM.

At the same time, transparency and reproducibility are at stake in the increasingly complex world of research, which still uses old-fashioned tools for packaging and sharing content. This is not only a big problem for research itself; it can also give science a bad name in the eyes of public opinion, which increasingly neither understands nor trusts the work of scientists. To the average taxpayer science is often cryptic, with most recently published papers behind a paywall and the majority of research virtually inscrutable. In this scenario it is hard for the public to access and appreciate the relevance of scientists’ work. I strongly believe that a society that does not trust its scientists is set on a dangerous course.

ACTION ITEMS. To improve the situation, 21st century scientists need to:

1. Learn to efficiently share and communicate their research with the public at large.
2. Make their research more transparent and reproducible, so that it can be trusted and better understood by their peers and the public at large.

21st century scientists need to produce “PUBLIC-FRIENDLY OPEN SCIENCE” (PFOS).
From Walras’ auctioneer to continuous time double auctions: A general dynamic theory...
Jonathan Donier
and 1 more

July 08, 2015
In standard Walrasian auctions, the price of a good is defined as the point where the supply and demand curves intersect. Since both curves are generically regular, the response to small perturbations is linearly small. However, a crucial ingredient is absent from the theory, namely transactions themselves. What happens after they occur? To answer this question, we develop a dynamic theory of supply and demand based on agents with heterogeneous beliefs. When the inter-auction time is infinitely long, the Walrasian mechanism is recovered. When transactions are allowed to happen in continuous time, a peculiar property emerges: close to the price, supply and demand vanish quadratically, which we confirm empirically on Bitcoin. This explains why price impact in financial markets is universally observed to behave as the square root of the excess volume. The consequences are important, as they imply that the very fact of clearing the market makes prices hypersensitive to small fluctuations.
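The square-root impact law quoted in this abstract is easy to check numerically. Below is a minimal sketch (synthetic data standing in for actual market data, not the paper's Bitcoin analysis) that recovers an exponent near 0.5 from a log-log fit:

```python
# Illustrative sketch: if average price impact scales as
# dP ~ sqrt(|Q|) in the excess volume Q, a log-log regression on
# (volume, impact) pairs should return a slope near 0.5.
import numpy as np

rng = np.random.default_rng(1)
volume = rng.uniform(1.0, 1e4, 5000)                       # excess volumes Q
impact = np.sqrt(volume) * np.exp(rng.normal(0, 0.1, volume.size))

slope, intercept = np.polyfit(np.log(volume), np.log(impact), 1)
print(f"fitted exponent: {slope:.3f}")                     # ~ 0.5
```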
Quality of systematic review and meta-analysis abstracts in oncology journals
Chelsea Koller
Sarah Khan
and 6 more

July 04, 2015
Abstract Purpose: The purpose of this study was to evaluate the quality of reporting in the abstracts of oncology systematic reviews using the PRISMA guidelines for abstract writing. Methods: Oncology systematic reviews and meta-analyses from four journals (The Lancet Oncology, Clinical Cancer Research, Cancer Research, and Journal of Clinical Oncology) were selected using a PubMed search. The resulting 337 abstracts were screened for eligibility and 182 were coded based on a standardized abstraction manual constructed from the PRISMA criteria. Eligible systematic reviews were coded independently and later verified by a second coder, with disagreements handled by consensus. These 182 abstracts comprised the final sample. Results: The number of included studies, information regarding main outcomes, and general interpretation of results were described in the majority of abstracts. In contrast, risk of bias or methodological quality appraisals, the strengths and limitations of evidence, funding sources, and registration information were rarely reported. By journal, the most notable difference was a higher percentage of funding sources reported in The Lancet Oncology. No detectable upward trend in mean abstract scores was observed after publication of the PRISMA extension for abstracts. Conclusion: Overall, the reporting of essential information in oncology systematic review and meta-analysis abstracts is suboptimal and could be greatly improved. Keywords: Review, Systematic; Meta-Analysis; Cancer; Medical Oncology; Abstracting as Topic; Funding
Utilization of clinical trials registries in obstetrics and gynecology systematic rev...
Michael Bibens
A. Benjamin Chong
and 2 more

June 29, 2015
ABSTRACT Objectives: We evaluated the use of clinical trials registries in published obstetrics and gynecological systematic reviews and meta-analyses. Methods: A review of publications between January 1, 2007, and December 31, 2015, from six obstetrical and gynecological journals (_Obstetrics & Gynecology, Obstetrical & Gynecological Survey, Human Reproduction Update, Gynecologic Oncology, British Journal of Obstetrics and Gynaecology, and American Journal of Obstetrics & Gynecology_) was completed to identify eligible systematic reviews. All systematic reviews included after exclusions were independently reviewed to determine if clinical trials registries had been included as part of the search process. Studies that reported using a trials registry were further examined to determine whether trial data was included in the analysis. Results: Our initial search resulted in 292 articles, which was narrowed to 256 after exclusions. Of the 256 systematic reviews meeting our selection criteria, 47 utilized a clinical trials registry. Eleven of the 47 systematic reviews found unpublished data, and added the unpublished trial data into their results. Conclusion: A majority of systematic reviews in clinical obstetrics and gynecology journals do not conduct searches of clinical trials registries or do not make use of data obtained from these searches.
Factual errors in a recent paper by Westerhof, Segers and Westerhof in Hypertension
Kim H. Parker
Alun Hughes
and 1 more

June 16, 2015
But facts are chiels that winna ding An downa be disputed – from _A Dream_ by Robert Burns (1786) (But facts are fellows that will not be overturned, And cannot be disputed) _Wave separation, wave intensity, the reservoir-wave concept, and the instantaneous wave-free ratio (2015) N Westerhof, P Segers and BE Westerhof, Hypertension, DOI: 10.1161/HYPERTENSIONAHA.115.05567_ Hereinafter referred to as [WSW]. This paper by three distinguished workers in the field of cardiovascular mechanics concludes that the reservoir pressure and the instantaneous wave-free ratio are ’... both physically incorrect, and should be abandoned’. These are very strong conclusions which, if they were opinions, could only be debated. Reading the paper in detail, however, reveals that it contains numerous factual errors in its discussion of these two entities. Since facts are different from opinions, we believe that it is essential that these errors be corrected before they gain credence by repetition. False facts are highly injurious to the progress of science, for they often endure long; but false views, if supported by some evidence, do little harm, for every one takes a salutary pleasure in proving their falseness. – Charles Darwin (1871) Because we are naturally prejudiced about the validity of both the reservoir pressure (Pres) and instantaneous wave-free ratio (iFR), having been involved in the conception and development of both ideas, we will try to present our arguments as transparently and fairly as possible. As far as possible we will demonstrate the errors by direct quotations from the paper. The whole paper is available from the Hypertension web site and should be consulted directly if there are any questions about our treatment of the text. Approximately two thirds of the paper is taken up with a discussion of wave separation and wave intensity from the point of view of the more usual Fourier-based methods of analysing cardiovascular mechanics, frequently called the impedance method. This part of the paper is, as far as we can see, both insightful and free of major errors. We found some of the discussion about wave intensity analysis thought-provoking and agree with most of their conclusions. We recommend the first two-thirds of this paper to anyone interested in arterial mechanics. In contrast, the last third of the paper, starting with the final sentence of the section ’Summary of Wave Separation and WIA’, is riddled with errors of interpretation and, more importantly, contains a number of mistakes (or, in Darwin’s terms, ’false statements of fact’) that need to be corrected. Instead of dealing with these errors chronologically, we will point out the fundamental errors first and then deal with their sequelae.
Participatory action research about Figshare user experiences at the University of Me...
Cobi Calyx
and 1 more

November 01, 2017
Participation & feedback are welcome! Please email me on cobi.smith@unimelb.edu.au (which is treated as private unless you explicitly consent to sharing) or tweet [@cobismith](https://twitter.com/cobismith) (public) if you'd prefer not to comment on this working paper using Authorea's features. Please note this is an open notebook and is intended to be part of an open science research project, which means if you choose to share information here your contributions are in the public domain. See the University of Melbourne research protocols for more information: http://www.orei.unimelb.edu.au/content/when-approval-needed
Transcranial Direct Current Stimulation: Theory, Treatment of Major Depressive Disord...
Shan H. Siddiqi

June 04, 2015
BACKGROUND AND THEORY The use of non-invasive brain stimulation for the treatment of various neuropsychiatric disorders, including major depressive disorder (MDD), has expanded rapidly in recent years. Transcranial direct current stimulation (tDCS), variants of which have been used experimentally for psychiatric, neurologic, and physical rehabilitation applications, has garnered a great deal of attention. While it is not yet FDA-approved for any indication, its promise is related to its low cost and wide range of applications; although the breadth of its applicability has been questioned due to heterogeneous data, this heterogeneity has been attributed to methodological variability. The safety and tolerability of tDCS were outlined by an early study including 567 sessions in 102 patients. The most common adverse effects were mild tingling/itching at the stimulation site and moderate fatigue. Less frequent effects included headaches (11.8%), nausea (2.9%), and insomnia (0.98%), all of which were mild and transient. The underlying theory is that tDCS modulates the excitability of certain cortical regions by passing a small electrical current through conducting pads applied to the scalp in a minimally painful manner. While the precise mechanism is not fully understood, it likely enhances cortical excitability at the anode and depresses it at the cathode. Proposed mechanisms have been based on data demonstrating relationships between tDCS stimulation and neuropharmacologic effects, cortical electrophysiology, and functional neuroimaging changes. Effects of tDCS on neuroplasticity and cortical excitability have been shown to be differentially modulated by agents affecting neurotransmission via serotonin (citalopram), dopamine (L-dopa), NMDA (dextromethorphan and d-cycloserine), and GABA (lorazepam). Electrophysiologic changes include differential modulation in the presence of agents that modulate sodium channels (carbamazepine) and calcium channels (flunarizine). Active tDCS shows significant increases in prefrontal cortex activity during and after stimulation, as measured by functional near-infrared spectroscopy (fNIRS), a technique used to measure cortical oxygenation. Notably, fNIRS measurements may be limited by interference due to extracranial blood flow and an inability to assess deeper structures, so they merely approximate the functional magnetic resonance imaging (fMRI) signal in superficial structures. Stimulation also increases fMRI activation and connectivity of the underlying cortical regions and hippocampi, though the clinical significance of this is uncertain given that this same study found no behavioral changes.
A Framework for Mitigating the Biases in Barometric Dust Devil Surveys...
Brian Jackson

May 27, 2015
BACKGROUND Dust devils are small-scale (a few to many tens of meters) low-pressure vortices rendered visible by lofted dust. They usually occur in arid climates on the Earth and ubiquitously on Mars. Martian dust devils have been studied with orbiting and landed spacecraft and were first identified on Mars using images from the Viking Orbiter. On Mars, dust devils may dominate the supply of atmospheric dust and influence climate, pose a hazard for human exploration, and may have lengthened the operational lifetime of Martian rovers. On the Earth, dust devils significantly degrade air quality in arid climates and may pose an aviation hazard. The dust-lifting capacity of dust devils seems to depend sensitively on their structures, in particular on the pressure wells at their centers, so the dust supply from dust devils on both planets may be dominated by the seldom-observed larger devils. Using a martian global climate model, one study showed that observed seasonal variations in Mars’ near-surface temperatures could not be reproduced without including the radiative effects of dust, and estimated that dust contributes more than 10 K to the heating budget. Thus, elucidating the origin, evolution, and population statistics of dust devils is critical for understanding important terrestrial and Martian atmospheric properties and for in-situ exploration of Mars. Studies of Martian dust devils have been conducted through direct imaging of the devils and identification of their tracks on Mars’ dusty surface \citep[cf.][]{Balme_2006}. Studies with in-situ meteorological instrumentation have also identified dust devils, either via obscuration of the Sun by the dust column or via their pressure signals. Studies of terrestrial dust devils have also been conducted and frequently involve in-person monitoring of field sites. Terrestrial dust devils are visually surveyed, directly sampled, or recorded using in-situ meteorological equipment. As noted in previous work, in-person visual surveys are likely to be biased toward detection of larger, more easily seen devils. Such surveys would also fail to recover dustless vortices. Recently, terrestrial surveys similar to Martian dust devil surveys have been conducted using in-situ single barometers and photovoltaic sensors. These sensor-based terrestrial surveys have the advantage of being directly analogous to Martian surveys and are highly cost-effective compared to in-person surveys (in a dollars-per-data-point sense). In single-barometer surveys, a sensor is deployed in-situ and records a pressure time series at a sampling period ≲1 s. Since a dust devil is a low-pressure convective vortex, one passing nearby will register as a pressure dip discernible against a background ambient (but not necessarily constant) pressure. Figure [fig:conditioning_detection_b_inset] shows a time series with a typical dust devil signal.
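As an illustration of the single-barometer detection idea, here is a generic sketch (not the authors' pipeline; the window length, dip-depth threshold, and Lorentzian dip profile are assumptions for demonstration):

```python
# Flag candidate dust-devil pressure dips in a 1-Hz barometric series by
# subtracting a rolling-median background and thresholding the residual.
import numpy as np

def find_pressure_dips(p, window=301, min_depth=0.3):
    """Return indices where pressure drops at least `min_depth` (hPa)
    below a rolling-median estimate of the ambient background."""
    half = window // 2
    background = np.array([
        np.median(p[max(0, i - half):i + half + 1]) for i in range(len(p))
    ])
    return np.flatnonzero(background - p >= min_depth)

rng = np.random.default_rng(0)
t = np.arange(3600.0)                              # one hour sampled at 1 Hz
ambient = 850.0 + 5e-4 * t                         # slowly drifting background (hPa)
dip = -0.8 / (1.0 + ((t - 1800.0) / 10.0) ** 2)    # assumed Lorentzian vortex profile
p = ambient + dip + rng.normal(0.0, 0.05, t.size)
print(find_pressure_dips(p))                       # indices clustered near t = 1800 s
```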
How structure-directing agents control nanocrystal shape: PVP-mediated growth of Ag n...
Tonnam Balankura

May 25, 2015
KINETIC WULFF PLOT Away from equilibrium, the NC shape is governed by the kinetics of inter- and intrafacet atom diffusion, as well as by the kinetics of deposition to various facets. At nonequilibrium growth conditions, the resulting shapes are expected to differ from the thermodynamic shapes. Examples of well-known kinetic shapes include nanowires and highly branched (bi- and tripod) structures. When NCs grow beyond a critical size, the relative atom deposition rate to various facets becomes a major influence on the NC shape. In this kinetically controlled growth regime, the kinetic Wulff construction can predict the shape evolution of faceted crystal growth based on the surface kinetics. Using a 3-dimensional shape evolution calculation method, we correlate the relative flux of Ag atom deposition to {111} and {100} facets, $F_{111}/F_{100}$, with the resulting kinetic Wulff shape in the reversible octahedron-to-cube transformation. This transformation is observed in the seed-mediated growth of Ag NCs, in which the shape-controlling parameter is the concentration of poly(vinylpyrrolidone) (PVP) in the solution. The constructed kinetic Wulff plot is shown in Fig. [fig:kinetic-wulff]. The construction of the kinetic Wulff plot is described in the supporting information. When the relative flux to {111} facets is less than half of the flux to {100} facets, the octahedron is predicted as the kinetic Wulff shape. As $F_{111}/F_{100}$ increases, we observe a shape progression from octahedra to cubo-octahedra, then to truncated cubes, and eventually to cubes at sufficiently large $F_{111}/F_{100}$. To study the mechanism by which SDAs impart shape selectivity, we use the seed-mediated Ag polyol synthesis in the presence of PVP as our model. We utilize large-scale MD simulations to quantify $F_{100}$ and $F_{111}$ using _in-silico_ deposition and potential of mean force calculations.
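For context, the kinetic Wulff construction referenced above takes the standard form below, in which the distance of each facet from the crystal center scales with its growth velocity; relating velocity to deposition flux assumes deposition-limited growth (an assumption of this sketch, not a statement from the paper):

```latex
% Kinetic Wulff construction (standard form): facet distances h_i scale
% with facet growth velocities v_i, in analogy with the equilibrium
% construction, where h_i scales with surface energy. If growth is
% deposition-limited, v_i tracks the deposition flux F_i:
\frac{h_{111}}{h_{100}} \;=\; \frac{v_{111}}{v_{100}} \;\propto\; \frac{F_{111}}{F_{100}}
```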
Tools and pipelines for BioNano data: molecule assembly pipeline and FASTA super scaf...
Jennifer Shelton
Cassondra Coleman
and 7 more

May 20, 2015
BACKGROUND: Genome assembly remains an unsolved problem. Assembly projects face a range of hurdles that confound assembly. Thus a variety of tools and approaches are needed to improve draft genomes. RESULTS: We used a custom assembly workflow to optimize consensus genome map assembly, resulting in an assembly equal to the estimated length of the _Tribolium castaneum_ genome and with an N50 of more than 1 Mb. We used this map for super scaffolding the _T. castaneum_ sequence assembly, more than tripling its N50 with the program Stitch. CONCLUSIONS: In this article we present software that leverages consensus genome maps assembled from extremely long single molecule maps to increase the contiguity of sequence assemblies. We report the results of applying these tools to validate and improve a 7x Sanger draft of the _T. castaneum_ genome. KEYWORDS: Genome map; BioNano; Genome scaffolding; Genome validation; Genome finishing
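For readers unfamiliar with the N50 metric cited above, here is a minimal sketch of its standard definition (not code from the paper): N50 is the length L such that contigs of length at least L cover at least half of the total assembly.

```python
# Standard N50: sort lengths descending and walk down until the running
# sum reaches half the total; the length at that point is the N50.
def n50(lengths):
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length

print(n50([5, 4, 3, 2, 1]))  # -> 4, since 5 + 4 = 9 >= 15 / 2
```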
A prevalence of dynamo-generated magnetic fields in the cores of intermediate-mass st...
Dennis
Matteo Cantiello
and 4 more

May 17, 2015
_This is the author’s version of the work. It is posted here for personal use, not for redistribution. The definitive version was published in Nature on 04 January 2016, DOI:10.1038/nature16171_ Magnetic fields play a role in almost all stages of stellar evolution. Most low-mass stars, including the Sun, show surface fields that are generated by dynamo processes in their convective envelopes. Intermediate-mass stars do not have deep convective envelopes, although about 10% exhibit strong surface fields that are presumed to be residuals from the stellar formation process. These stars do have convective cores that might produce internal magnetic fields, and these fields might even survive into later stages of stellar evolution, but information has been limited by our inability to measure the fields below the stellar surface. Here we use asteroseismology to study the occurrence of strong magnetic fields in the cores of low- and intermediate-mass stars. We have measured the strength of dipolar oscillation modes, which can be suppressed by a strong magnetic field in the core, in over 3,600 red giant stars observed by the _Kepler_ mission. About 20% of our sample show mode suppression, but this fraction is a strong function of mass. Strong core fields only occur in red giants above 1.1 solar masses ($1.1\,M_\odot$), and the occurrence rate is at least 60% for intermediate-mass stars ($1.6$–$2.0\,M_\odot$), indicating that powerful dynamos were very common in the convective cores of these stars.
Reanalyzing Head et al. (2015): No widespread p-hacking after all?
C.H.J. Hartgerink

May 06, 2015
Statistical significance seeking (i.e., p-hacking) is a serious problem for the validity of research, especially if it occurs frequently. Head et al. provided evidence for widespread p-hacking throughout the sciences, which would indicate that the validity of science is in doubt. Previous substantive concerns about their selection of p-values indicated that it was too liberal: selecting all reported p-values includes results that no one would have had an interest in p-hacking. Despite this liberal selection of p-values, Head et al. found evidence for p-hacking, which raises the question of why p-hacking was detected despite being unlikely a priori. In this paper I reanalyze the original data and show that Head et al.'s results are an artefact of rounding in the reporting of p-values.
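To see how a rounding artefact can mimic p-hacking, consider the following sketch (my construction for illustration, not the paper's analysis code): even with p-values drawn uniformly, so that there is no p-hacking by construction, reporting a fraction of them rounded to two decimals piles probability mass on exactly 0.04 and inflates any bin that contains it.

```python
# Simulate honest (uniform) p-values, round half of them to two
# decimals as journals often report them, and compare two adjacent
# bins: the bin containing 0.04 shows a spurious excess.
import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(0.0, 0.05, 100_000)       # no p-hacking by construction
rounded = rng.random(p.size) < 0.5        # assume half are reported as "p = 0.0X"
p_reported = np.where(rounded, np.round(p, 2), p)

lower = np.sum((p_reported >= 0.035) & (p_reported < 0.040))
upper = np.sum((p_reported >= 0.040) & (p_reported < 0.045))
print(lower, upper)   # upper bin inflated: values in [0.035, 0.045) round to 0.04
```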
The accretion histories of brightest cluster galaxies from their stellar population g...
Paola Oliva-Altamirano

May 04, 2015
_Sarah Brough, Jimmy, Kim-Vy Tran, Warrick J. Couch, Richard M. McDermid, Chris Lidman, Anja von der Linden, Rob Sharp_
Ontology-based Learning Content Management System in Programming Languages Domain
Anton Anikin
Alexander Dvoryankin
and 3 more

May 01, 2015
INTRODUCTION A learning content management system (LCMS) is a computer application for creating, editing, modifying, organizing, and deleting learning content, as well as maintaining it, from a central interface. An LCMS provides a complex platform for developing the learning content used in e-learning educational systems. Many LCMS packages available on the market also contain tools that resemble those used in learning management systems (LMS), and most assume that an LMS is already in place. The emphasis in an LCMS is the ability for developers to create new learning content in accordance with learning objectives as well as the cognitive characteristics and experience of the learner. Most content-management systems have several aspects in common: a focus on creating, developing, and managing content for on-line courses, with far less emphasis placed on managing the experience of learners; a multi-user environment that allows several developers to interact and exchange tools; and a learning object repository containing learning materials, commonly used components that are archived so as to be searchable and adaptable to any on-line course. A new trend in LCMS development is the Smart Learning Content (SLC) approach. Apart from adaptive personalization and sophisticated forms of feedback, smart learning content often also authenticates the user, models the learner, aggregates data, and supports learning analytics. This is especially important in computer science education because of the usefulness of program and algorithm visualization tools, automatic assessment, coding tools, algorithm and program simulation tools, problem-solving tools and other learning resources that process input data provided by the learner and generate customized output. The same approach can be used to generate adaptive learning content based on some content elements, so the creation of SLC implies personalized search for learning resources and adaptive visualization of information retrieval. In this paper we describe an ontology-based learning content management system that allows creating new smart learning content in the programming-languages domain in the form of a personal learning collection.
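To make the idea concrete, here is a minimal sketch (entirely hypothetical; not the authors' system, ontology, or resource repository) of how prerequisite relations in a concept ontology can drive the assembly of a personal learning collection:

```python
# Order learning resources so that every concept appears after its
# prerequisites, given an ontology of concept -> prerequisite concepts.
from graphlib import TopologicalSorter

# Hypothetical concept ontology and resource repository.
ontology = {
    "variables": set(),
    "loops": {"variables"},
    "functions": {"variables"},
    "recursion": {"functions"},
}
repository = {
    "variables": "intro-to-variables.html",
    "loops": "loops-tutorial.html",
    "functions": "functions-lab.ipynb",
    "recursion": "recursion-visualizer.html",
}

def personal_collection(goal: str) -> list[str]:
    """Collect the goal concept plus its transitive prerequisites,
    then order them so prerequisites come first."""
    needed, stack = set(), [goal]
    while stack:
        concept = stack.pop()
        if concept not in needed:
            needed.add(concept)
            stack.extend(ontology[concept])
    order = TopologicalSorter({c: ontology[c] & needed for c in needed})
    return [repository[c] for c in order.static_order()]

print(personal_collection("recursion"))
# ['intro-to-variables.html', 'functions-lab.ipynb', 'recursion-visualizer.html']
```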
Use of the Temperament and Character Inventory to predict response to repetitive tran...
Shan H. Siddiqi

April 29, 2015
ABSTRACT OBJECTIVE: We investigated the utility of the Temperament and Character Inventory (TCI) in predicting antidepressant response to rTMS. BACKGROUND: Although rTMS of the dorsolateral prefrontal cortex (DLPFC) is an established antidepressant treatment, little is known about predictors of response. The TCI measures multiple personality dimensions (harm avoidance, novelty seeking, reward dependence, persistence, self-directedness, self-transcendence, and cooperativeness), some of which have predicted response to antidepressants and cognitive-behavioral therapy. A previous study suggested a possible association between higher self-directedness and rTMS response specifically in melancholic depression, although this was limited by the fact that melancholic depression is associated with a limited range of TCI profiles. METHODS: Sixteen patients in a major depressive episode completed the TCI prior to a clinical course of rTMS over the DLPFC. Treatment response was defined as a ≥50% decrease in Hamilton Depression Rating Scale (HDRS) score. Baseline scores on each TCI dimension were compared between responders and non-responders via t-test with Bonferroni correction. Temperament/character scores were also subjected to regression analysis against percentage improvement in HDRS. RESULTS: Ten of the sixteen patients responded to rTMS. T-scores for Persistence were significantly higher in responders (48.3, 95% CI 40.9-55.7) than in non-responders (35.3, 95% CI 29.2-39.9) (p=0.006). Linear regression revealed a correlation between persistence score and percentage improvement in HDRS (R=0.65±0.29). CONCLUSIONS: Higher persistence predicted antidepressant response to rTMS. This may be explained by rTMS-induced enhancement of cortical excitability, which has been found to be decreased in patients with high persistence. Personality assessment that includes measurement of TCI persistence may be a useful component of precision medicine initiatives in rTMS for depression.
The human experience with intravenous levodopa
Shan H. Siddiqi
Natalia Abrahan
and 5 more

April 24, 2015
ABSTRACT OBJECTIVE: To compile a comprehensive summary of published human experience with levodopa given intravenously, with a focus on information required by regulatory agencies. BACKGROUND: While safe intravenous use of levodopa has been documented for over 50 years, regulatory supervision for pharmaceuticals given by a route other than that approved by the U.S. Food and Drug Administration (FDA) has become increasingly cautious. If delivering a drug by an alternate route raises the risk of adverse events, an investigational new drug (IND) application is required, including a comprehensive review of toxicity data. METHODS: Over 200 articles referring to intravenous levodopa (IVLD) were examined for details of administration, pharmacokinetics, benefit and side effects. RESULTS: We identified 144 original reports describing IVLD use in humans, beginning with psychiatric research in 1959-1960 before the development of peripheral decarboxylase inhibitors. At least 2781 subjects have received IVLD, and reported outcomes include parkinsonian signs, sleep variables, hormones, hemodynamics, CSF amino acid composition, regional cerebral blood flow, cognition, perception and complex behavior. Mean pharmacokinetic variables were summarized for 49 healthy subjects and 190 with Parkinson disease. Side effects were those expected from clinical experience with oral levodopa and dopamine agonists. No articles reported deaths or induction of psychosis. CONCLUSION: At least 2781 patients have received i.v. levodopa with a safety profile comparable to that seen with oral administration.
Orthostatic stability with intravenous levodopa
Shan H. Siddiqi
Mary L. Creech RN, MSW, LCSW
and 2 more

April 20, 2015
Intravenous levodopa has been used in a multitude of research studies due to its more predictable pharmacokinetics compared to the oral form, which is used frequently as a treatment for Parkinson’s disease (PD). Levodopa is the precursor for dopamine, and intravenous dopamine would strongly affect vascular tone, but peripheral decarboxylase inhibitors are intended to block such effects. Pulse and blood pressure, with orthostatic changes, were recorded before and after intravenous levodopa or placebo—after oral carbidopa—in 13 adults with a chronic tic disorder and 16 tic-free adult control subjects. Levodopa caused no statistically or clinically significant changes in blood pressure or pulse. These data add to previous data that support the safety of i.v. levodopa when given with adequate peripheral inhibition of DOPA decarboxylase.
An Atlas of Human Kinase Regulation
David Ochoa
Pedro Beltrao
and 1 more

April 15, 2015
The coordinated regulation of protein kinases is a rapid mechanism that integrates diverse cues and swiftly determines appropriate cellular responses. However, our understanding of cellular decision-making has been limited by the small number of simultaneously monitored phospho-regulatory events. Here, we have estimated changes in the activity of 215 human kinases in 399 conditions derived from a large compilation of phosphopeptide quantifications. This atlas identifies commonly regulated kinases as those that are central in the signaling network and defines the logic relationships between kinase pairs. Co-regulation across conditions predicts kinase-complex and kinase-substrate associations. Additionally, the kinase regulation profile acts as a molecular fingerprint to identify related and opposing signaling states. Using this atlas, we identified essential mediators of stem cell differentiation, modulators of Salmonella infection and new targets of AKT1. This provides a global view of human phosphorylation-based signaling and the necessary context to better understand kinase-driven decision-making.
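One common way to turn phosphopeptide quantifications into kinase-activity estimates is a KSEA-style average of substrate z-scores; the sketch below illustrates that generic approach (the paper's exact scoring may differ, and the data and kinase-substrate annotations shown are toy values):

```python
# Estimate per-kinase activity change in one condition as the mean
# standardized log fold-change of the kinase's known substrate sites.
import numpy as np

def kinase_activity(site_log_fc: dict[str, float],
                    substrates: dict[str, set[str]]) -> dict[str, float]:
    values = np.array(list(site_log_fc.values()))
    mu, sigma = values.mean(), values.std()
    z = {site: (v - mu) / sigma for site, v in site_log_fc.items()}
    return {
        kinase: float(np.mean([z[s] for s in sites if s in z]))
        for kinase, sites in substrates.items()
        if any(s in z for s in sites)
    }

# Hypothetical phosphosite log fold-changes and kinase -> substrate map.
fc = {"AKT1S1_T246": 1.2, "GSK3B_S9": 0.9, "MAPK1_T185": -0.4, "RPS6_S235": 1.0}
subs = {"AKT1": {"AKT1S1_T246", "GSK3B_S9"}, "MAP2K1": {"MAPK1_T185"}}
print(kinase_activity(fc, subs))
```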
Stochastic inversion workflow using the gradual deformation in order to predict and m...
Lorenzo Perozzi
Gloaguen
and 2 more

April 07, 2015
ABSTRACT Due to budget constraints, CCS in deep saline aquifers is often carried out using only one injector well and one control well, which seriously limits inferring the dynamics of the CO₂ plume. In such cases, monitoring of the CO₂ plume relies only on geological assumptions or indirect data. In this paper, we present a new two-step stochastic P- and S-wave, density and porosity inversion approach that allows reliable monitoring of the CO₂ plume using time-lapse VSP. In the first step, we compute several sets of stochastic models of the elastic properties using conventional sequential Gaussian cosimulation. The realizations within a set of static models are then iteratively combined using a modified gradual deformation optimization technique, with the difference between computed and observed raw traces as the objective function. In the second step, these static models serve as input for a CO₂ injection history matching using the same modified gradual deformation scheme. At each gradual deformation step the CO₂ injection is simulated, and the corresponding full-wave traces are computed and compared to the observed data. The method has been tested on a synthetic heterogeneous saline aquifer model mimicking the environment of the CO₂ CCS pilot in the Bécancour area, Quebec. The results show that the optimized models of P- and S-wave velocity, density and porosity exhibit improved structural similarity with the reference models compared to conventional simulations.
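For reference, the two-realization gradual deformation rule (Hu, 2000) on which such schemes build combines independent Gaussian realizations without leaving the prior model space; the modified multi-realization variant used in the paper is not reproduced here:

```latex
% Gradual deformation of two independent Gaussian realizations y_1, y_2
% (standard form). Since cos^2 + sin^2 = 1, y(t) preserves the mean and
% covariance for every t, so t can be optimized against the seismic
% misfit while each candidate model remains a valid realization:
y(t) = y_1 \cos(\pi t) + y_2 \sin(\pi t), \qquad t \in [-1, 1]
```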
The Resource Identification Initiative: A cultural shift in publishing
Anita Bandrowski
Matthew H. Brush
and 14 more

March 25, 2015
ABSTRACT A central tenet in support of research reproducibility is the ability to uniquely identify research resources, i.e., reagents, tools, and materials that are used to perform experiments. However, current reporting practices for research resources are insufficient to identify the exact resources that are reported or answer basic questions such as “How did other studies use resource X?”. To address this issue, the Resource Identification Initiative was launched as a pilot project to improve the reporting standards for research resources in the methods sections of papers and thereby improve identifiability and reproducibility. The pilot engaged over 25 biomedical journal editors from most major publishers, as well as scientists and funding officials. Authors were asked to include Research Resource Identifiers (RRIDs) in their manuscripts prior to publication for three resource types: antibodies, model organisms, and tools (i.e. software and databases). RRIDs are assigned by an authoritative database, for example a model organism database, for each type of resource. To make it easier for authors to obtain RRIDs, resources were aggregated from the appropriate databases and their RRIDs made available in a central web portal (scicrunch.org/resources). RRIDs meet three key criteria: they are machine readable, free to generate and access, and are consistent across publishers and journals. The pilot was launched in February of 2014 and over 300 papers have appeared that report RRIDs. The number of journals participating has expanded from the original 25 to more than 40. Here, we present an overview of the pilot project and its outcomes to date. We show that authors are able to identify resources and are supportive of the goals of the project. Identifiability of the resources post-pilot showed a dramatic improvement for all three resource types, suggesting that the project has had a significant impact on reproducibility relating to research resources.
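The "machine readable" claim is concrete: an RRID is the literal string "RRID:" followed by a registry prefix and an accession, so a simple pattern can pull every resource citation out of a methods section. A sketch, using commonly cited example identifiers (the exact accession grammar varies by registry, so treat this pattern as illustrative):

```python
# Extract RRID citations of the form RRID:<REGISTRY>_<accession>
# from free text, e.g. antibodies (AB), software (SCR), mice (IMSR).
import re

RRID = re.compile(r"RRID:[A-Z]+_[A-Za-z0-9:_.-]+")

methods_text = """Cells were stained with anti-NeuN (RRID:AB_90755)
and analyzed in ImageJ (RRID:SCR_003070); mice were C57BL/6J
(RRID:IMSR_JAX:000664)."""
print(RRID.findall(methods_text))
```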
Rapid Environmental Quenching of Satellite Dwarf Galaxies in the Local Group
Andrew Wetzel
Erik Tollerud
and 2 more

March 06, 2015
In the Local Group, nearly all of the dwarf galaxies ($M_\star \lesssim 10^9\,M_\odot$) that are satellites within $300\,\mathrm{kpc}$ (the virial radius) of the Milky Way (MW) and Andromeda (M31) have quiescent star formation and little-to-no cold gas. This contrasts strongly with comparatively isolated dwarf galaxies, which are almost all actively star-forming and gas-rich. This near dichotomy implies a _rapid_ transformation after falling into the halos of the MW or M31. We combine the observed quiescent fractions for satellites of the MW and M31 with the infall times of satellites from the ELVIS suite of cosmological simulations to determine the typical timescales over which environmental processes within the MW/M31 halos remove gas and quench star formation in low-mass satellite galaxies. The quenching timescales for satellites with $M_\star < 10^8\,M_\odot$ are short, $\lesssim 2\,\mathrm{Gyr}$, and decrease at lower $M_\star$. These quenching timescales can be $1$–$2\,\mathrm{Gyr}$ longer if environmental preprocessing in lower-mass groups prior to MW/M31 infall is important. We compare with timescales for more massive satellites from previous works, exploring satellite quenching across the observable range of $M_\star = 10^{3}$–$10^{11}\,M_\odot$. The environmental quenching timescale increases rapidly with satellite $M_\star$, peaking at $\approx 9.5\,\mathrm{Gyr}$ for $M_\star \sim 10^9\,M_\odot$, and rapidly decreases at higher $M_\star$ to less than $5\,\mathrm{Gyr}$ at $M_\star > 5\times10^9\,M_\odot$. Thus, satellites with $M_\star \sim 10^9\,M_\odot$, similar to the Magellanic Clouds, exhibit the longest environmental quenching timescales.
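The timescale inference described above can be sketched schematically: if every satellite quenches a fixed time $\tau$ after infall, the observed quiescent fraction pins $\tau$ to a quantile of the simulated infall-time distribution. A toy version (uniform infall times stand in for the actual ELVIS distributions):

```python
# Solve P(t_infall >= tau) = quiescent_fraction for tau: the quenching
# timescale is the matching quantile of the infall-lookback-time
# distribution of the simulated satellite population.
import numpy as np

def quenching_timescale(infall_lookback_gyr, quiescent_fraction):
    return float(np.quantile(infall_lookback_gyr, 1.0 - quiescent_fraction))

infall = np.random.default_rng(2).uniform(0.0, 10.0, 1000)  # toy infall times (Gyr)
print(quenching_timescale(infall, quiescent_fraction=0.8))   # ~ 2 Gyr
```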
Ebola virus epidemiology, transmission, and evolution during seven months in Sierra L...
Daniel Park
Gytis Dudas
and 22 more

March 02, 2015
SUMMARY The 2013-2015 Ebola virus disease (EVD) epidemic is caused by the Makona variant of Ebola virus (EBOV). Early in the epidemic, genome sequencing provided insights into virus evolution and transmission, and offered important information for outbreak response. Here we analyze sequences from 232 patients sampled over 7 months in Sierra Leone, along with 86 previously released genomes from earlier in the epidemic. We confirm sustained human-to-human transmission within Sierra Leone and find no evidence for import or export of EBOV across national borders after its initial introduction. Using high-depth replicate sequencing, we observe both host-to-host transmission and recurrent emergence of intrahost genetic variants. We trace the increasing impact of purifying selection in suppressing the accumulation of nonsynonymous mutations over time. Finally, we note changes in the mucin-like domain of EBOV glycoprotein that merit further investigation. These findings clarify the movement of EBOV within the region and describe viral evolution during prolonged human-to-human transmission.
Top-quark electroweak couplings at the FCC-ee
Patrick Janot
Patrizia Azzi
and 3 more

February 26, 2015
INTRODUCTION The design study of the Future Circular Colliders (FCC) in a 100-km ring in the Geneva area started at CERN at the beginning of 2014, as an option for post-LHC particle accelerators. The study has an emphasis on proton-proton and electron-positron high-energy frontier machines. In the current plans, the first step of the FCC physics programme would exploit a high-luminosity ${\rm e^+e^-}$ collider called FCC-ee, with centre-of-mass energies ranging from below the Z pole to the ${\rm t\bar t}$ threshold and beyond. A first look at the physics case of the FCC-ee can be found in Ref. . In this first look, the focus regarding top-quark physics was on precision measurements of the top-quark mass, width, and Yukawa coupling through a scan of the ${\rm t\bar t}$ production threshold, with $\sqrt{s}$ comprised between 340 and 350 GeV. The expected precision on the top-quark mass was in turn used, together with the outstanding precisions on the Z peak observables and on the W mass, in a global electroweak fit to set constraints on weakly-coupled new physics up to a scale of 100 TeV. Although not studied in the first look, measurements of the top-quark electroweak couplings are of interest, as new physics might also show up via significant deviations of these couplings with respect to their standard-model predictions. Theories in which the top quark and the Higgs boson are composite lead to such deviations. The inclusion of a direct measurement of the ttZ coupling in the global electroweak fit is therefore likely to further constrain these theories. It has been claimed that both a centre-of-mass energy well beyond the top-quark pair production threshold and a large longitudinal polarization of the incoming electron and positron beams are crucially needed to independently access the ttγ and the ttZ couplings for both chirality states of the top quark. In Ref. , it is shown that the measurements of the total event rate and the forward-backward asymmetry of the top quark, with 500 ${\rm fb}^{-1}$ at $\sqrt{s} = 500$ GeV and with beam polarizations of ${\cal P} = \pm 0.8$, ${\cal P}^\prime = \mp 0.3$, allow for this distinction. The aforementioned claim is revisited in the present study. The sensitivity to the top-quark electroweak couplings is estimated here with an optimal-observable analysis of the lepton angular and energy distributions of over a million events from ${\rm t\bar t}$ production at the FCC-ee, in the $\ell \nu {\rm q \bar q b \bar b}$ final states (with $\ell = {\rm e}$ or $\mu$), without incoming beam polarization and with a centre-of-mass energy not significantly above the ${\rm t\bar t}$ production threshold. Such a sensitivity can be understood from the fact that the top-quark polarization arising from its coupling to the Z is maximally transferred to the final-state particles via the weak top-quark decay ${\rm t \to W b}$, which has a 100% branching fraction: the lack of initial polarization is compensated by the presence of substantial final-state polarization and by a larger integrated luminosity. A similar situation was encountered at LEP, where the measurement of the total rate of ${\rm Z} \to \tau^+\tau^-$ events and of the tau polarization was sufficient to determine the tau couplings to the Z, regardless of initial-state polarization. This letter is organized as follows. First, the reader is briefly reminded of the theoretical framework. Next, the statistical analysis of the optimal observables is described, and realistic estimates for the top-quark electroweak coupling sensitivities are obtained as a function of the centre-of-mass energy at the FCC-ee. Finally, the results are discussed, and prospects for further improvements are given.
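For context, the generic optimal-observable construction works as follows (standard form from the literature, shown here for orientation rather than reproduced from this letter): when the differential distribution is linear in small anomalous couplings, the ratio functions below carry the maximal statistical information on those couplings.

```latex
% Differential distribution linear in small anomalous couplings g_i:
S(\Omega;\,g) \;=\; S_0(\Omega) + \sum_i g_i\, S_i(\Omega)
% Optimal observables: their event-by-event averages determine the g_i
% with the smallest achievable statistical covariance:
O_i(\Omega) \;=\; \frac{S_i(\Omega)}{S_0(\Omega)}
```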
A new method for identifying the Pacific-South American pattern and its influence on...
Damien Irving
Ian Simmonds
and 1 more

February 24, 2015
The Pacific-South American (PSA) pattern is an important mode of climate variability in the mid-to-high southern latitudes. It is widely recognized as the primary mechanism by which the El Niño-Southern Oscillation (ENSO) influences the south-east Pacific and south-west Atlantic, and in recent years it has also been suggested as a mechanism by which longer-term tropical sea surface temperature trends can influence the Antarctic climate. This study presents a novel methodology for objectively identifying the PSA pattern. By rotating the global coordinate system such that the equator (a great circle) traces the approximate path of the pattern, the identification algorithm can utilize Fourier analysis as opposed to a traditional Empirical Orthogonal Function approach. The climatology arising from the application of this method to ERA-Interim reanalysis data reveals that the PSA pattern has a strong influence on temperature and precipitation variability over West Antarctica and the Antarctic Peninsula, and on sea ice variability in the adjacent Amundsen, Bellingshausen and Weddell Seas. Identified seasonal trends towards the negative phase of the PSA pattern are consistent with warming observed over the Antarctic Peninsula during autumn, but are inconsistent with observed winter warming over West Antarctica. Only a weak relationship is identified between the PSA pattern and ENSO, which suggests that the pattern might be better conceptualized as a preferred regional atmospheric response to various external (and internal) forcings.
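A sketch of the core idea described above (my illustration, not the authors' code): sample a field along the great circle traced by the PSA pattern, i.e. the equator of the rotated grid, then use a Fourier transform to measure the amplitude and phase of the dominant zonal wavenumber instead of an EOF decomposition. The wavenumber-3 toy signal and 2.5-degree sampling below are assumptions for demonstration.

```python
# Amplitude and phase of planetary wavenumber k along a closed great
# circle sampled at evenly spaced longitudes.
import numpy as np

def wave_amplitude_phase(values_along_circle: np.ndarray, k: int):
    n = values_along_circle.size
    coeff = np.fft.rfft(values_along_circle)[k]
    amplitude = 2.0 * np.abs(coeff) / n
    phase = np.angle(coeff)
    return amplitude, phase

# Toy field: a wavenumber-3 anomaly plus noise, sampled at 144 points
# (2.5-degree spacing) around the rotated equator.
lon = np.linspace(0.0, 2.0 * np.pi, 144, endpoint=False)
field = 5.0 * np.cos(3.0 * lon - 0.7) + np.random.normal(0, 1, lon.size)
print(wave_amplitude_phase(field, k=3))   # ~ (5.0, -0.7)
```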