I think it was in 2017 or 2018 that I had lunch with a visiting speaker and mentioned that I was finding it hard to see any value in publishing in journals. With bioRxiv and similar projects, we could just get rid of journals altogether. He said that journals still served an important filtering function in letting him know where to find the good research and that, without journals, we would be overwhelmed by poor science. I now want to respond to that idea, because I think there is a real benefit to getting rid of journals and the problems he feared can be overcome.
A journal-free world can break homogeneous thought
When someone talks about finding the best science by looking at specific journals, this usually means the high-impact-factor journals. Impact factor is a terrible metric for evaluation, and there is even some evidence that journals with high impact factors publish less reliable science, a finding which I believe makes sense but will not discuss here. I want to bring up a different problem.
If everybody (or at least the majority) is using the same criteria to evaluate journals, and everyone wants to read the "best" science, then everybody is reading the same science. Even if it is the best science, I think there is a huge danger that we will become homogenised. Everyone reads the same science, so everyone knows the same things, follows the same trends, uses the same techniques and approaches problems in the same way. If we don't have journals, then perhaps we won't all read the same articles; we will think about problems in more diverse ways and investigate unusual pathways. I think that could hold a lot of potential.
There are better ways to sort through information than in journals
In some cases, journals can serve to bring together research on a specific topic. If you are interested in a particular sub-discipline and there is a journal devoted to it, then you know that the content will be relevant to you. I make use of this benefit of journals myself. But it is not an ideal way to sort through information. If you are doing a literature review, you will not browse specific journals but use a search engine to look for specific keywords. Some platforms even allow you to set up alerts for when articles contain your desired keywords.
That is a huge improvement but still not ideal. A simple search like that might miss articles from journals that do not make the full text available or that are in formats which cannot easily be indexed. Without journals, we could more easily set up a single platform, such as bioRxiv, with much more advanced features. I have seen lots of articles that have keywords, but it's never quite clear what purpose they serve, as they are so limited. If, instead, we tagged and searched for articles using a system of detailed, categorical keywords, we could accomplish a lot. It would be possible to search for all papers about organism x, using experimental approach y and with sample size >z. This is not possible with the way keywords are used currently, and certainly not with them split across multiple journals. Potentially, AI could make it possible to do such searches in the full text but, until then, such an approach could make scientific knowledge far more usable.
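To make the idea concrete, here is a minimal sketch of what searching over structured, categorical keywords could look like. The field names (organism, method, sample_size) and the records are entirely hypothetical; a real platform would define its own controlled vocabulary.

```python
# Hypothetical article metadata tagged with categorical keywords.
articles = [
    {"title": "A", "organism": "D. melanogaster", "method": "RNA-seq", "sample_size": 120},
    {"title": "B", "organism": "D. melanogaster", "method": "RNA-seq", "sample_size": 12},
    {"title": "C", "organism": "M. musculus",     "method": "RNA-seq", "sample_size": 200},
]

def search(records, organism, method, min_sample_size):
    """Return all articles about a given organism, using a given
    experimental approach, with sample size above a threshold."""
    return [
        a for a in records
        if a["organism"] == organism
        and a["method"] == method
        and a["sample_size"] > min_sample_size
    ]

matches = search(articles, "D. melanogaster", "RNA-seq", min_sample_size=100)
print([a["title"] for a in matches])  # only article "A" passes all three filters
```

The point is that the query "organism x, approach y, sample size >z" becomes a trivial filter once the metadata is structured, something free-text keywords scattered across journals cannot support.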
I only use bioRxiv above as an example of how such a thing could be done. I know one of the big criticisms of bioRxiv, and preprints in general, is that they are not peer reviewed. This doesn't have to be the case. Post-publication peer review offers certain benefits, and there is no reason that bioRxiv could not implement peer review; EMBO and ASAP Bio are preparing to launch a journal-independent peer review platform with participation from bioRxiv. That said, I think there are more important changes that need to be made to peer review, and a friend and I have almost finished a short article on that topic.
We do not need journals to identify good science
Maybe you agree with me that reading a broader set of literature would have benefits and that journals are not the best way to search for information, but you still think they are necessary to find the quality papers. I would ask you to consider books or music. Do we judge the quality of music or books by the publisher? No. I've never met anyone who says "I only listen to music published by Sony" or "I only read books published by Penguin" as a way to get the best ones. If anyone said something like that, we would all be completely stunned, because we realise that there are better ways to identify quality.
We can go by bestseller lists, which are the equivalent of article-level citations. These tell us what is being read, what is popular, what is relevant. But the most popular is not necessarily the best, so we also pay attention to curated lists of the best artists or the most important books, assembled by critics. These are especially important for less popular genres which do not make bestseller lists. This is a role that journals could fill: not as publishers but as entities which collect and update lists of the important papers that we should read. If they are able to do that in a way that truly adds value, then people will be prepared to pay for it, but we should not be letting journals hold the world's scientific knowledge for ransom.
It would also be possible to make a coordinated and systematic attempt to identify the best papers. For example, imagine that, depending on the volume of research, one person in every province or city makes a yearly assessment of every article in their field published in that region and chooses the top three papers. Those papers are then sent to someone responsible for choosing the top three papers in that country from those submitted, and the process repeats for the best in the continent and, finally, the world. I could imagine such a system being set up by national scientific societies for important topics, and it would provide an assessment of the relative strengths of all research.
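The hierarchical reduction described above can be sketched in a few lines. This is a toy model only: it assumes each assessor's judgement can be summarised as a single score, and all paper IDs and scores are invented for illustration.

```python
def top_three(papers):
    """Pick the three highest-scoring papers from a pool,
    standing in for one assessor's yearly judgement."""
    return sorted(papers, key=lambda p: p["score"], reverse=True)[:3]

# Stage 1: each regional assessor evaluates the papers from their region.
regions = {
    "Region 1": [{"id": f"r1-{i}", "score": s} for i, s in enumerate([7, 9, 4, 8, 6])],
    "Region 2": [{"id": f"r2-{i}", "score": s} for i, s in enumerate([5, 10, 3, 2])],
}
regional_winners = [p for pool in regions.values() for p in top_three(pool)]

# Stage 2: the national assessor chooses from the regional winners; the same
# reduction would then repeat at the continental and world levels.
national_winners = top_three(regional_winners)
print([p["id"] for p in national_winners])  # ['r2-1', 'r1-1', 'r1-3']
```

The appeal of the scheme is that each assessor only ever reads a small pool, yet every paper in the system gets considered at least once.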
Perhaps a large-scale sorting process would be laborious but there are still other approaches. The Zurich-based publisher ScienceMatters has reviewers assess each article on a scale of 1-10 for both “Technical quality” and “Conceptual advance and Impact.” If this were adopted more broadly and opened for post-publication review, perhaps linked to an ORCID account, it would allow articles to be sorted according to their quality. The quality would not derive from the company an article keeps, as with journal-based assessments, but from the article itself.
A bright future awaits
A journal-free world would not mean that we would be overrun by poor science and unable to find relevant information. It would require us to rethink the infrastructure we use and how we engage with the scientific literature, but there are possibilities not just to keep working at the same level as we do now but to transcend our current limitations.