This time the focus is on journal article publication, centring on what to include in your methods section so that readers have enough information to replicate your results. This, of course, appeals more in some disciplines than others: phenomenology or social constructionism, for instance, is not embraced by materials scientists. The theme this time goes to the core of detailed empirical documentation for experimental research, especially outside the social sciences, although some points will be relevant there as well.
Since 2017, C. Glenn Begley has led BioCurate, a joint initiative of the University of Melbourne and Monash University; before that, he worked in the research arms of US-based biomanufacturers. He has posed six questions that help gauge the replicability of experimental research. There is a balance, of course, between open science and confidential commercial information: if the research is aimed at producing a tradeable product, some information will be closely guarded. Even so, these suggestions remain valid.
Were experiments performed blinded?
Experiments can and should all be done, or at the very least reviewed, by an investigator blinded to the experimental versus control groups.
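Blinding can even be built into the workflow itself. The following Python sketch (with hypothetical sample names and file names, not anything prescribed by Begley) assigns random codes to samples so that the analyst never sees the group labels, while the unblinding key is stored separately:

```python
# Minimal sketch: blinding sample labels before analysis.
# Group assignments are known only to whoever generates the key;
# the analyst sees only random codes. All names here are hypothetical.
import csv
import random

samples = {
    "mouse_01": "treatment", "mouse_02": "control",
    "mouse_03": "treatment", "mouse_04": "control",
}

codes = [f"S{i:03d}" for i in range(1, len(samples) + 1)]
random.shuffle(codes)

key = dict(zip(samples, codes))  # original ID -> blinded code

# The analyst receives only the blinded codes...
with open("blinded_samples.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["blinded_code"])
    for sample_id in samples:
        writer.writerow([key[sample_id]])

# ...while the unblinding key is kept separately and opened only
# after the measurements are locked.
with open("unblinding_key.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["blinded_code", "original_id", "group"])
    for sample_id, group in samples.items():
        writer.writerow([key[sample_id], sample_id, group])
```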
Were basic experiments repeated?
If reports fail to state that experiments were repeated, be sceptical.
Were all the results presented?
Do not select only the best data, and discuss any removal of outliers frankly.
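One way to honour both points is to report every replicate and apply an outlier rule that was declared in advance. The Python sketch below is purely illustrative (the rule and the numbers are hypothetical, not drawn from Begley): it uses a median-absolute-deviation criterion and prints the full dataset alongside whatever was excluded.

```python
# Minimal sketch: report every replicate and apply a pre-declared,
# documented outlier rule instead of silently trimming the data.
# The rule and the numbers here are hypothetical, for illustration only.
from statistics import mean, median, stdev

replicates = [4.8, 4.9, 5.0, 5.1, 5.2, 12.7]  # all raw measurements

# Pre-declared rule: exclude points more than 3 scaled median absolute
# deviations (MAD) from the median, and say so in the methods section.
med = median(replicates)
mad = median(abs(x - med) for x in replicates)
threshold = 3 * 1.4826 * mad  # 1.4826 scales MAD to sd for normal data

kept = [x for x in replicates if abs(x - med) <= threshold]
removed = [x for x in replicates if abs(x - med) > threshold]

print(f"All replicates (n={len(replicates)}): {replicates}")
print(f"Excluded by the pre-declared MAD rule: {removed}")
print(f"Reported: {mean(kept):.2f} ± {stdev(kept):.2f} (mean ± sd, n={len(kept)})")
```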
Were there positive and negative controls?
Often in non-reproducible, high-profile papers, the crucial control experiments were excluded or dismissed as ‘data not shown’.
Were reagents validated?
(Or, to put it more broadly, is it fair to attribute the observed change to a particular variable?) As an example, Begley points to experiments with small-molecule inhibitors. Investigators choose to attribute the desired effect to what he calls “their favourite molecule, ignoring the multiple other targets affected by the inhibitor, or consign the key experiments that allegedly demonstrate their lack of relevance to ‘data not shown’”.
Were statistical tests appropriate?
In a way, this point resembles the one above, approached from a different angle. More details are available in Begley’s original article.
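As a small illustration of what “appropriate” can mean in practice, the sketch below (my example, not Begley’s, with invented data) uses Welch’s t-test, which, unlike the classic Student’s t-test, does not assume equal variances between groups:

```python
# A hedged sketch, not from Begley's article: Welch's t-test avoids
# the equal-variance assumption of the classic Student's t-test.
# Requires scipy; all data below are invented for illustration.
from scipy import stats

control = [5.1, 4.8, 5.0, 5.3, 4.9]
treated = [6.2, 6.8, 5.9, 7.1, 6.5]

# equal_var=False selects Welch's variant of the independent t-test
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
print(f"Welch's t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```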
Jon Brock followed up these concerns in a later interview with Begley, who commented on what he saw as a “replication crisis”: numerous published scientific results cannot be independently reproduced by other scientists, calling the credibility of the findings into question.
“If the result is the one we like, we call it a positive study. If it’s one we don’t like, we call it a negative study. But the only truly failed study is one that’s uninterpretable. What we’re trying to do in this business is improve human health. A negative study, if it’s well done, might tell us that that whole area of research should stop. It’s a really valuable contribution.” Blinding, where the scientists do not know which condition or variable each sample represents, means they cannot be steered towards expected results by that knowledge. “If I was king of the world,” Begley says, “the one thing I’d insist is that all experiments are performed blinded.”
David S. Sholl of the School of Chemical and Biomolecular Engineering, Georgia Institute of Technology, made some similar recommendations, the last couple of which relate specifically to materials science:
Show the extent of your data’s variability
If the range of variation in the data is not given when you write up your test results, Sholl argues, readers may presume the experiment was conducted only once. This seems a basic point, so it is surprising that the need to state explicitly the number of repeats and the variability in the data has to be emphasised at all.
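A minimal way to make both the repeat count and the spread visible is an error-bar plot. The Python sketch below is only an illustration of that idea, with hypothetical conditions and values (it is not taken from Sholl’s article):

```python
# Minimal sketch: plot each condition with error bars so readers see
# both the number of repeats and the spread. Requires matplotlib;
# the conditions and values are hypothetical.
import matplotlib.pyplot as plt
from statistics import mean, stdev

results = {  # condition -> yields (%) from independent repeats
    "untreated": [62.1, 60.4, 63.0],
    "annealed": [71.2, 69.8, 72.5],
}

labels = list(results)
means = [mean(v) for v in results.values()]
sds = [stdev(v) for v in results.values()]

plt.bar(labels, means, yerr=sds, capsize=5)
plt.ylabel("Yield (%), mean ± sd, n = 3")
plt.savefig("variability.png", dpi=200)
```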
Using standard materials, or working towards standardising them, gets a special mention.
Show data from calibration/validation tests using standard materials
One intriguing suggestion was to work with researchers in other subdisciplines and other laboratories where a standard does not yet exist; if adopted, such a standard would then attract more citations. “Even if clear standards do not (yet) exist, clearly reporting how you have characterized any purchased materials will help future readers in examples where sample-to-sample variation in materials might exist.”
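Even a simple, scripted check against a certified standard, logged with each measurement session, gives readers something concrete to rely on. The sketch below is a minimal illustration with hypothetical values and tolerances, not a procedure from Sholl’s article:

```python
# A minimal sketch with hypothetical names and tolerances: log a check
# against a certified reference material before each measurement session.
certified_value = 10.00  # certified property of the standard (arbitrary units)
tolerance = 0.05         # acceptable relative deviation (5%)

measured = 9.87          # instrument reading for the same standard

deviation = abs(measured - certified_value) / certified_value
status = "PASS" if deviation <= tolerance else "FAIL: recalibrate"
print(f"Standard check: measured {measured}, certified {certified_value}, "
      f"deviation {deviation:.1%} ({status})")
```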
Report observational details of material synthesis and treatment
Sholl encourages supplementing the methods section with videos or photos because, for instance, even the kind of vessel used, or the method of agitating or stirring a substance, can turn out to be significant for successful reproduction of results. This information can be placed in an appendix of supporting details. Sholl gives further tips on what to report about your data collection to get your research published.
Even though these points may seem obvious, it is surprising what can be left out of an article that still makes it to publication. The suggestions offered here form a kind of checklist to give you the best possible chance that reviewers of scientific journals will NOT reject your article.