Studies with statistically significant findings are more likely to be published than those with non-significant findings. This publication bias is a threat to consumers of meta-analyses because it can exaggerate the evidence supporting a treatment and hence mislead decision-making. Several statistical tests have been developed in an attempt to detect and adjust for this bias.

In this study, Lin et al. assessed the presence of publication bias in 28,655 trials using 7 publication bias tests.1 Consistent with previous findings, they found that Egger’s regression test detected publication bias more frequently than the other tests.2 The authors also found that, while there was strong agreement among Tang’s, Macaskill’s, Deeks’, and Peters’ regression tests for binary outcomes, agreement among the remaining pairs of publication bias tests was only weak or moderate. This suggests that meta-analysts cannot rely entirely on a single publication bias test, and it reminds researchers and consumers of meta-analyses to interpret results carefully even in the absence of statistical evidence of publication bias.
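To make the idea behind such tests concrete, below is a minimal sketch of Egger's regression test, which regresses the standardized effect estimate on precision and examines whether the intercept differs from zero (an intercept far from zero suggests funnel-plot asymmetry consistent with small-study effects). The function name `egger_test` and the input data are illustrative, not from the study under discussion.

```python
import numpy as np
from scipy import stats

def egger_test(effects, ses):
    """Sketch of Egger's regression test for funnel-plot asymmetry.

    Regresses the standardized effect (effect / SE) on precision (1 / SE);
    returns the intercept and a two-sided p-value for intercept != 0.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    y = effects / ses           # standardized effects
    x = 1.0 / ses               # precision
    X = np.column_stack([np.ones_like(x), x])  # intercept + slope design
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    n, p = X.shape
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)           # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)      # coefficient covariance
    t_stat = beta[0] / np.sqrt(cov[0, 0])      # t-statistic for intercept
    p_value = 2 * stats.t.sf(abs(t_stat), df=n - p)
    return beta[0], p_value

# Hypothetical meta-analysis: effect estimates and their standard errors.
intercept, p_value = egger_test([0.2, 0.3, 0.5, 0.8, 0.4],
                                [0.1, 0.15, 0.25, 0.4, 0.2])
```

A non-significant p-value here does not prove the absence of publication bias, which is exactly the caution the authors raise: with few studies, such regression tests have limited power.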

What can we do, given that no test can rule out publication bias with certainty? There is no single, magical solution. First, systematic reviewers should resort to non-statistical approaches, as the authors suggest. These include an exhaustive search for gray literature and requests for unpublished details through clinical trial registries and drug approval agencies. Second, researchers conducting original studies need to register trials and primary outcomes on clinical trial registries, and to report their pre-specified outcomes whether or not they are significant.3 This will prevent non-significant results from being buried. Third, reviewers and editors need to endorse reporting guidelines (e.g., CONSORT) for original studies.4 This allows readers to confirm, or request, otherwise unpublished results by checking study protocols or trial registries. Finally, consumers of meta-analyses need to remain cautious, because it is impossible to identify all unpublished data. As Albert Einstein once said, “Unthinking respect for authority is the greatest enemy of truth.”