Assessing quality in quantitative studies

While it is easy to find guides for assessing the quality of research on the net, or to generate them with AI, we still need to assess the appropriacy of these guides for our specific purposes. That is what I look at briefly this time.

One rather comprehensive guide for quantitative studies comes from Leanne Kmet, Robert Lee and Linda Cook (2004); the checklist appears on page 4 of their report. Since sound quality criteria withstand the test of time, the guide should not be dismissed as “out of date” now. Some questions are particularly apt for observational studies or for ones which evaluate a particular treatment or intervention. We tend to use the word “participant” rather than “subject” when referring to humans, but the checklist is relevant for studies not involving humans as well as for those that do. (A short sketch of how the checklist can be scored follows the list.)

1 Is the question / objective sufficiently described?

2 Is the study design evident and appropriate?

3 Is the method of subject/comparison group selection or source of information/input variables described and appropriate?

4 Are subject (and comparison group, if applicable) characteristics sufficiently described?

5 If intervention and random allocation was possible, was it described?

6 If intervention and blinding of investigators was possible, was it reported?

7 If intervention and blinding of subjects was possible, was it reported?

8 Are the outcome and (if applicable) exposure measure(s) well defined and robust to measurement / misclassification bias? Is the means of assessment reported?

9 Is the sample size appropriate?

10 Are analytic methods described/justified and appropriate?

11 Is some estimate of variance reported for the main results?

12 Is confounding controlled for?

13 Are results reported in sufficient detail?

14 Are conclusions supported by the results?
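
For readers who want to turn the checklist into a number, here is a minimal Python sketch of one way to do it. It assumes a simple yes / partial / no scoring scheme (2 / 1 / 0 points per item, with “n/a” items left out of the maximum possible score), which is broadly how the published tool is scored; the ratings in the example are invented purely for illustration.

```python
# Minimal sketch: turning the 14 checklist items into a summary score.
# Assumes a yes/partial/no scheme (2/1/0 points per item) with "n/a"
# items excluded from the maximum possible score; all ratings below
# are invented for illustration only.

def summary_score(ratings):
    """ratings maps item number -> 'yes', 'partial', 'no', or 'n/a'."""
    points = {"yes": 2, "partial": 1, "no": 0}
    applicable = [r for r in ratings.values() if r != "n/a"]
    if not applicable:
        raise ValueError("no applicable items to score")
    earned = sum(points[r] for r in applicable)
    return earned / (2 * len(applicable))  # fraction of maximum possible

# Example: a non-interventional study, so items 5-7 do not apply.
example = {
    1: "yes", 2: "yes", 3: "partial", 4: "yes",
    5: "n/a", 6: "n/a", 7: "n/a",
    8: "partial", 9: "yes", 10: "yes", 11: "no",
    12: "partial", 13: "yes", 14: "yes",
}
print(f"Summary score: {summary_score(example):.2f}")  # 0.77
```

The resulting summary score can then be compared against whatever inclusion threshold suits your purposes, though where to set that threshold remains a judgment call.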

Just out of interest, I asked ChatGPT to generate a list for assessing the quality of experimental research, and it came up with a similar list which was more generally phrased. It included additional points on ethical concerns for research involving humans, peer review, the contribution of the research to the field, its limitations, and recommendations for further research. This echoes the sections you would expect to find in a published, peer-reviewed article.

I then asked it to compare its list with the list above. Its first attempt was a list of compared headings. Its second attempt paired each heading from the original list with a heading from its own list which it stated “corresponded to” it. The fifth attempt generated this:

“In essence, the first list appears to be more streamlined and concise, while the second list provides a more comprehensive overview of factors to consider when evaluating the quality of experimental research.”

The AI therefore told me that it had given “the more comprehensive overview”. While this is a start, it would be unwise to rely on the convenience of an AI-generated list alone: an overview does not produce precision. In this case, though, it did offer some additional information.