Peer reviewers, editors, experts, and statisticians—do we need them?
My views on statistical relevance and peer review have evolved over the years. At a recent Zoom session of the BCMJ’s Editorial Board we discussed the topic of peer review. Peer review has been defined as a process of subjecting an author’s scholarly work or research to scrutiny by other experts in the same field.
I, like Richard Feynman (“Science is the belief in the ignorance of experts”), have become aware of the dangers of believing in experts, and I have acquired some reservations regarding editorial peer review.
Early in my medical career I was anxious to publish in peer-reviewed journals. In Britain, my promotion as a junior doctor in a university centre required that I “publish or perish.” I was fortunate that, within 2 years of graduation, I had published in the two premier British journals, namely the British Medical Journal and the Lancet. I was pursuing specialty training in both internal medicine and general surgery at the Hammersmith Hospital in London.
Since then I have published over 200 articles, mostly in peer-reviewed journals. Statistical validation in surgery is difficult, and I was proud to co-author what has been cited as the first randomized, prospective, blinded study in the field of general surgery. I believed that validated scientific studies were the only ones worthy of publication, and that peer review was reliable.
I became even more involved in the publication and review of articles by serving on the governing board, or as editor or editorial board member, of international journals, eight of them medical journals. I have always been mindful of Mark Twain’s famous 1906 comment on editors: “How often we recall, with regret, that Napoleon once shot at a magazine editor and missed him and killed a publisher. But we remember with charity that his intentions were good.”
Despite my involvement on both sides of the peer-review process, I recognize its flaws and limitations. The trend in many prestigious journals, such as Nature and Science, is to rapidly evaluate the work using a few experienced reviewers, and then expose it quickly and openly to feedback, and possible rebuttal, by other researchers in the field.
Many of the major scientific discoveries in history were rejected by established journals. Only one of Einstein’s more than 300 publications was peer reviewed. Important landmark papers have been rejected because of bias or personal disagreement with the results or conclusions. All members of the BCMJ Editorial Board are opinionated, and therefore at risk of displaying bias.
Nobel Prize–winning studies, such as Krebs’s work on the citric acid cycle, scanning probe microscopy, and radioimmunoassay, were initially rejected for publication. Another Nobel Prize–winning paper, “The market for lemons: Quality uncertainty and the market mechanism,” was rejected by three journals.
Mistakes occur in the opposite direction as well. A serious example is the early-2000s tragedy in which Vioxx was approved for general distribution because the complications and deaths in pre-release studies were “not statistically significant.” Statistically insignificant findings may be significant, and vice versa.
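To make that lesson concrete, consider a minimal simulation sketch in Python. The event rates, trial size, and random seed are entirely hypothetical, chosen for illustration rather than drawn from the actual Vioxx trials; the point is only that a genuine doubling of a rare harm can routinely fail to reach p < 0.05 in a modestly sized study.

```python
import math
import random

random.seed(1)

N = 1000           # patients per arm (assumed, for illustration)
P_CONTROL = 0.01   # assumed adverse-event rate on placebo
P_DRUG = 0.02      # assumed rate on the drug: a true doubling of risk

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided p-value from the pooled z-test for two proportions."""
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (x2 / n2 - x1 / n1) / se
    return math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))

TRIALS = 2000
missed = 0
for _ in range(TRIALS):
    control_events = sum(random.random() < P_CONTROL for _ in range(N))
    drug_events = sum(random.random() < P_DRUG for _ in range(N))
    if two_proportion_p(control_events, N, drug_events, N) >= 0.05:
        missed += 1  # a real harm shrugged off as "not significant"

print(f"True doubling of harm missed in {missed / TRIALS:.0%} of simulated trials")
```

With these assumed numbers, the very real harm is dismissed as “not statistically significant” in roughly half of the simulated trials.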
According to The Economist (despite lacking trust in many economists, I am a subscriber), of 53 so-called landmark cancer studies, only six had reproducible results, and another group could validate just a quarter of 67 similarly rated research papers. Post-publication evaluation is now the trend in physics and mathematics. As a United States Supreme Court justice once sarcastically remarked: “This statistical significance always works and always doesn’t work.”
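Those reproducibility figures are less mysterious once one does the arithmetic. A back-of-envelope calculation in the spirit of John Ioannidis’s well-known argument, using assumed figures rather than measured ones, shows that when only a minority of tested hypotheses are true, a sizable share of “significant” results will be false positives before any bias or error enters at all.

```python
# All numbers below are assumptions for illustration, not measurements.
PRIOR_TRUE = 0.10  # assumed fraction of tested hypotheses that are actually true
ALPHA = 0.05       # significance threshold: false-positive rate under the null
POWER = 0.80       # assumed chance a real effect is detected as significant

true_positives = PRIOR_TRUE * POWER          # real effects that reach p < 0.05
false_positives = (1 - PRIOR_TRUE) * ALPHA   # null effects that do so by chance

ppv = true_positives / (true_positives + false_positives)
print(f"Share of 'significant' findings that are real: {ppv:.0%}")
# With these assumptions, about 64%: one in three "discoveries" would be
# expected to fail replication even with flawless methods.
```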
Journals prefer positive results. Negative results can be more important but account for relatively few published papers. In the era of Donald Trump, knowing what is not true (“fake news”) is as important as knowing what is true. Yet once a journal has accepted and published a study with positive results, it may have little enthusiasm for a subsequent article that fails to replicate them.
If lightning struck and destroyed a major ancient monument, the event would be front-page news. If it were later discovered that there had been a mistake, and the lightning bolt had missed the monument, the follow-up report would likely be hidden deep inside the newspapers.
When a prominent medical journal editor asked experts to review research papers that she had deliberately riddled with mistakes, she found that almost all of the reviewers failed to spot most of the mistakes.
This is the era of predatory journals, where desperate authors pay to have their papers published. Even authoritative journals have published fake research: German physicist Jan Hendrik Schön was a world leader in semiconductor research until Nature, Science, and Physical Review retracted 21 of his papers.
My faith in the peer-review process has waned over time, but, like democracy as a system of government, peer review is perhaps better than most of the alternatives.
Note: This editorial has no statistical validity, is written by a non-expert, and has not been peer reviewed.
—Brian Day, MB