The current model of academic science includes a standard method for sharing and evaluating a scientist’s contribution – a method that is outdated, inefficient, misguided, and even harmful. It can, and should, be improved.
In summary, the method is the following: a scientist finishes a project; she then has to write a “paper” explaining what she did and what conclusions to draw from it; she (the author) submits her paper to a “journal”; the journal asks 2 to 5 other scientists (the reviewers) to decide whether or not to publish the paper; journals often reject papers and/or demand corrections. There is no other place for a “paper” to go, and throughout her career the scientist will continue to submit papers to journals, which will continue to decide whether they get published, and she will be judged as a scientist largely by the number of accepted papers and by which journals accept them. On this judgement depend her employment and her funding for research.
The first thing to note about the process is that the people who decide whether the paper gets accepted (the “reviewers”) are a very small group, and often not the intended audience of the paper. Writing a paper for an audience of thousands of potentially interested scientists and having only 3 people decide its fate is obviously far from an optimal system. It’s hard to believe that 3 people can make a judgement that would reflect the wider opinion of the community, were it allowed to see the publication. In fact, if these 3 people didn’t find the scientist’s work relevant, who’s to say others wouldn’t?
Next consider the amount of information and time lost in the decision-making process. The decision is binary. After months of exchanges between the author and the reviewers, the paper either gets published in the journal or it does not. The discussions, the considerations, the reasons for publication or rejection, the initial objections – all of this is lost once the binary decision is taken. Time is also wasted: the decision-making process can last several months, and it is a drain on everyone involved (who could otherwise focus on useful work). It makes for a generally inefficient way of publishing content in today’s world.
This framework for evaluating scientific achievement also has more general, systemic problems. These relate to the economic nature of the publication process, and to those whom it benefits.
Although scientific research is publicly funded, the “journals” that publish the papers, and to which scientists are expected to submit them, are (for the most part) for-profit enterprises. This creates the current absurd situation in which, to access publicly funded science, universities, libraries, and the general public have to pay the journals that monopolize the content of science. They monopolize it by claiming to be the only relevant means of publication – they are not! In economic terms, private journals are “rent seekers”. In plainer terms: parasites. Expensive, inefficient, unnecessary middlemen between those who produce science and those who want to learn.
Furthermore, the current journal system disproportionately benefits a particular set of scientists and reviewers. Journals provide a platform for the views, biases, opinions, and interests of the few who manage them: the most powerful players in the scientific publication game. Conversely, by being run by those with more power, journals pose a barrier to those with less weight in the game. A decision maker at a journal is in a very powerful position to decide what science gets to be widely seen. By putting so much power in the hands of journal owners and such a small group of reviewers, one is likely to end up with a biased process (even if the reviewers and owners mean well…). This not only biases the whole scientific field, it further empowers the established figures who are the final decision makers in the process. Naturally, once such a position of power and privilege is obtained, those who hold it will defend it (and its importance) at all costs. That shouldn’t make the rest of us willing to accept it.
It follows that another problem of this system is that it resists change and improvement. This happens because, not surprisingly, those who decide what to fund and whom to hire are also those at the top of the major journals, deciding what to publish. The more one succeeds within the system, the more control one has over it, and the less incentive one has to change and open it. Once the system is set up to work this way, it is hard to break from it. Hiring and funding criteria rely heavily on outsourcing judgement to this method (counting the number of publications and the journals that carry them). It therefore becomes rational (in the short term) for any individual scientist to focus more on getting published than on following his or her creative capacities. This creates a short-term focus, and it biases both individual scientists and the scientific process. Because the number of papers accepted into journals is limited, it also creates a zero-sum dynamic between scientists.
Science done in universities, by scientists fighting for employment and funding, is deteriorating into a system of paper production rather than a method of inquisitive exploration and applied research. My own experience confirms this: everyone feels pressured to publish papers, and they sacrifice creativity and efficiency in their work. Friends (and I) have cited the publication system as a reason to consider abandoning academic science.
A perfect system is hard to design, but simple improvements are not. Namely, we should move in the direction of open (all can publish) and free (all can read) systems of communication. In such a system, power is less likely to concentrate and more likely to spread over the community of scientists. The usual objection is that the current journal system serves as a filter keeping bad-quality science out. This is a weak argument, simply because the function of selection and filtering can be done better and in different ways. In fact, an open and free model does not by any means preclude filtering systems: anyone who has used a search engine knows that information can be pooled to highlight the most relevant results, and nothing prevents selection, filtering, commenting, and ranking from emerging after publication. Post-publication review is in fact much more informative than a binary decision of publication or rejection. In this way, with minimal quality controls, scientific content can more easily be shared, communicated, commented on, and improved. Peer review does not need to mean closed, private, and inefficient.
There is also a necessary change of attitudes, in both funding and judgement. The pinnacle of scientific achievement is not the number (or even the perceived quality) of the publications one can push through the current system. Science is a process with many dimensions; it is not a stack of articles published in journals. Publications are relevant, but they should not be the goal of science – and anyone who talks to academics (students or professors) will see how biased the process has become. We need to take a step back and put the focus of science on producing innovation, insight, and progress. Science, learning, and teaching are the priorities, not publications.