Saturday, October 14, 2017

Publish or Perish!


How an Excess of Money & Ego Are Ruining Scientific Publishing & Peer Review


 Once upon a time, the peer review process was relatively simple: After years and years of scientific research, experimentation, and hard work, scientists (and Institutions of Scientific Endeavor) presented their hard-won discoveries to their peers for review and validation. If their premise was sound, the data concrete, and the (lab) work reproducible, the results were published for the consumption of all the other facilities and institutions in the land, and the other kingdoms-at-large.

Afterward, following many award banquets where heartfelt acceptance speeches had been made, goblets had been hefted in toast, and golden trophies (along with monetary treasures) had been munificently bestowed upon the humble practitioners of the Scientific Arts, revolutionary new discoveries in medicine, technology, and industry soon followed, whereupon all proceeded to live happily (and prosperously) ever after, or at least until the next breakthrough.


However, in the real world, the process is seldom that simple, having been tainted by other, less noble influences: money, prestige, ego, and competition among scientific peers for the reputation, grants, and subsidies that come to those who ‘rush to publish’ first.


This combination of persuasive factors creates a situation highly conducive to disaster, both for scientific research and for its ongoing credibility. Also at stake are all the subsidiary industries (medicine, research & development, industry, technology, and so forth) that depend on the data (reliable, faulty, or just plain false) that ‘trickles down’ from the research and is eventually used to create and sustain societies and improve quality of life.

The scientific publishing industry has morphed into a multimillion-dollar business. In addition, millions of dollars (both governmental and private) are at stake for institutions that publish leading-edge research, and publish it first. Because of that powerful incentive (cash), and another, almost equally persuasive one (individual and institutional ego), the term ‘rush to publish’ has taken on an insidious undertone in the scientific community. It means publish first, credibility (and reliability) be damned. Those who publish first get first crack at governmental research dollars in the form of grants, and at similar funds from powerful lobbies that depend on that research and, perhaps equally importantly, on the publishing process itself. The institutions (and scientists) themselves gain fame, prestige, and notoriety, and, again, the enhanced status that brings even more money from those same sources.

This triumvirate of incentives (funds, prestige, and ego) has created a machine that more often than not produces questionable or faulty data, and results or conclusions that cannot be reproduced in another lab.

Further, scientists (perhaps the world over) are mostly unwilling to release their work to their peers, especially if it comes from a failed experiment (or from years’ worth of questionable, sloppy, less-than-optimal research that led to that failure), fearing bruised egos, damaged standing in the scientific community, and harm to their institution’s ability to attract future funds.

This results in their ‘muted failures’ being pointlessly reproduced in other research facilities, sometimes at the cost of wasted years by institutions and scientists who, had they known of those failures, could have avoided expending valuable resources on an approach already proven fruitless.

Sharing clinical data has become increasingly crucial to research as scientific methodologies and technologies have advanced, and as the need for more groundbreaking and lifesaving technologies has increased. However, the unwillingness (or inability) to show one’s data and work, especially critically important failed experiments, has also increased. Too many scientists refuse to do it, for fear of exposing flaws in their techniques, their missed mistakes, and, perhaps more frightening, what could be interpreted as a failure of a more intellectual nature. (Read: no one wants to look like a dummy in front of his or her peers.)

With all those powerful influences in play, institutions and their research fellows are more reluctant than ever to follow through on what has become a scientific version of that old high school dare: ‘I’ll show you mine if you show me yours.’ The work that they do ‘rush to publish’ is often faulty, false, irreproducible, and even dangerous.

The more valuable body of data, the failed scientific research, is often safely hidden or, worse, destroyed, dooming other facilities that could have benefited from that data to spend time and money pursuing false leads and failed experiments down blind alleys that could have been avoided altogether in the pursuit of more fruitful research.

This must stop, if only to keep such waste from recurring.


Yet the powerful influences listed above (money, ego, and reputation) are extremely difficult to overcome, especially in an industry (science) where credibility, or the perception of it, is everything. Ironic, then, that this same asset is most often damaged by the premature publishing of faulty data, experiments, or findings that cannot be reproduced in any other lab by any other scientist.

Solutions to the Problem:

(Allowances are made for redundancies and procedures that may already be in place.)

1. Corral the check writer: Funds for scientific research should be withheld until all of the following criteria (and time frames) have been met. (The money could be kept in an interest-bearing account, separate from other funds.)

2. Lock Patents, Grants, & Copyrights: While a body of work is being vetted and reviewed, a lock should be placed on the work in question to protect the rights of the individual and his/her institution. This ‘lock’ should extend past the review process, either until the work is proved faulty or false by a board (or boards) of peers such as those listed here, OR until the work has survived ‘in the wild’ for a predetermined amount of time and shown itself to be valid and viable. Any other organization (or individual) that has done, produced, or invested a considerable amount of work that greatly exceeds or contributes to the work in question should either share in any such patents, OR be granted them in full, but only if the work has been abandoned by the original entity.

3. In-house verification before publication: Organizations should publish among their in-house peers first, before releasing their findings to the next (but not final) stage of verification outside their institution’s walls. This first step of verification could be used to check for proper procedures and protocols, proofing, lab work, initial findings, and reproducibility.

4. Non-partisan Review Board: A non-partisan, multi-institution review board that does not include any members from the submitting/publishing organization should vet and review the work next, with a critical eye toward predefined parameters of quality and procedure on which an overarching governing body of ALL institutions (academic, industrial, and otherwise) can agree. The members of this panel (either institutions or individuals) should change regularly and, again, never include members of the submitting or publishing organization.

5. Valid and Viable: Where (and when) possible, the research should be applied to real-world situations, ideally those for which the work was created in the first place. The final stage of verification should prove that the research/work/data actually addresses, supports, and solves the problem it was designed to tackle, and can be reproduced anywhere, at any time, by any other organization with the proper means to do so.

6. Publish all Data: All data, experiments, and procedures, including and especially failed procedures, should be published on an open (inter-institutional) forum, categorized by field of research, institution, and time frame. This data, once published in this fashion, should be considered community property (by all involved entities), leaving it open to any other organization to pursue, even the organization that initially pursued it and gave up on it. This data could be published anonymously, if the originating organization or individual so desires. (However, anonymity should not be seen as a necessity.)

7. Release the check writer: Release of funds should occur only after a thorough and reasonable amount of time has been allowed for all the previous steps to occur, perhaps 3-4 years. During that time, government offices (and organizations) are free to continue funding research facilities as they see fit; however, no unproven work should be released until it has been fully verified and re-created (‘x’ number of times) under all relevant conditions.