Brave New World of Scientific Publishing

Jüri Allik, Professor of Experimental Psychology at the University of Tartu, belongs to the top one per cent of the world’s most cited scientists in his field. His recipes for becoming a top researcher were among the top 10 most popular posts on our blog in 2013.

This is the first post in Professor Allik’s revealing three-part series on scientific publishing.

To publish, or not to publish?


Few people can imagine the normal working day of a scientist. The popular image is of someone in a white lab coat, thoughtfully examining a test tube with all kinds of gadgets in the background. In reality, even an experimenter spends most of his or her time at the computer, answering emails or filling out often pointless reports and forms.

The older one gets, the less time goes into research itself, and the more time each day is spent making the results of that research public — in other words, publishing. One way to publish is to go to a conference and present the results there, whether as a talk or a poster.

But one can seldom be a keynote speaker at a big conference with an audience of thousands. Usually, just a couple of good acquaintances come to listen to your talk, along with a few diligent Japanese or Chinese colleagues, to whom you could have said everything over a glass of wine in a single evening. Another drawback is that a presentation sometimes makes it possible to pretend to have a message when, in fact, there isn’t one.

The most efficient way to report results is still an article in a good, widely read journal. Those in the humanities might be an exception, as they may not consider anything shorter than a book to be a proper text; however, I have my well-grounded doubts about this as well.

Every scientist can name the most influential journals of his or her field on the fly. Their defining feature is that everyone glances at the table of contents of each new issue. If an article has a striking title and its abstract conveys a message important enough, there is hope that colleagues will notice the article and maybe even read it. It’s common to send a friendly note about the publication of one’s article to a few key people, while asking for their opinion.

In short, making the results public, publishing them, can be seen as the whole reason for the existence of science. In this respect, the life of a scientist is not much different from the life of a writer or a composer. No one writes a book just to have a nice cover and a beautiful layout. Even the most selfless writer longs for his or her books to have readers. Only a truly mad composer would enjoy a situation in which nobody wanted to perform his or her work.

The goal of creative endeavour, and especially of science, is to find one’s way into other people’s heads and to programme their thinking and feelings in a certain direction. Feelings? Of course: in science many things are done on the basis of affinity or some vague inner gut feeling, not emotionless logic. Every discovery, even a simple finding, becomes part of common knowledge only when it is shared with others — repeated, confirmed, recalculated, cited, disproved, and so on.

I hope that I’ve reached this point without great losses and haven’t yet turned away those who think that reducing scientific publications and the references to them to numbers is evil. Out of fear of exactly this, some local amateur scientists have begun to claim that the real, profound intellectual giant is the one who publishes very little — ideally writing nothing at all until he or she has something truly important to say.

I can even appreciate this approach, but it has a small shortcoming. How are funders or institutional leaders to discover whether the message, prepared over a long time and kept secret all the while, is really worth anything? Should they believe someone’s verbal assurance? This little detail aside, it certainly is a nice alternative to the asylum known as ‘Publish or Perish’ in which researchers are forced to live.

The world of closed publications

When two academicians — I mean, two persons of the academic variety — meet up, they start immediately by cursing arrogant journal editors and belligerent reviewers who have once again rejected some manuscript of theirs.

Editors and reviewers aren’t a different breed from scientists. Publishing a few articles in renowned journals brings with it the obligation to start reviewing others’ articles. I remember toiling over my first review for a week, setting everything else aside. I don’t remember exactly what came of it; at best, it may have earned a brief mention at the end of the reviewed paper that the author was grateful to the anonymous reviewers for their work.

Once you have enough publications, it’s hard to say no to invitations to join the editorial board of some journal (which, in reality, doesn’t mean much) and, after some time has passed, to become an editor. It’s wrong to think that famous and reputable scientists have an easy time getting their texts published. For the most part, the system is egalitarian enough that past baggage doesn’t count for much. Sometimes it’s even the other way around: everybody wants to spot an error in the maestro’s work and make a big deal of it.

A bit of background for those unfamiliar with the process: a manuscript submitted to a journal (there are over 15,000 journals in the world that can bear the title of scientific journal without any doubt) is handed to the editor-in-chief, who, as a rule, doesn’t receive any compensation for his or her work.

Editing is voluntary work pro bono publico, and the public is served at the expense of personal leisure time. More affluent journals offer editors a symbolic fee, hardly commensurate with the time and effort invested. The editor checks the manuscript against a few simple criteria. The journal may have a policy against certain types of contributions; for example, some journals (as a rule) don’t publish original research, so spotting the description of an experiment and tables of results is already sufficient grounds for rejection.

An example would be a journal devoted to personality research that has a policy against studies conducted on clinical samples. If it turns out that the guinea pigs were patients of a psychiatric clinic with a particular diagnosis, that alone is a good reason to reject the manuscript; there are other journals for publishing clinical work.

But the most frequent reason for rejection is the academic quality of the study: the sample may be too small, the analysis methods old and outdated, the topic may have lost its relevance some 20 years ago, and so on. Any one of these is reason enough to reject the study immediately.

Additionally, it’s not out of the question that the editor checks, among other things, the author’s address. If the editor’s geographical and political horizons aren’t especially wide, he or she may conclude that the manuscript has come from some land where formal education is given in an exotic language and people don’t wash very often. And since it’s hard to believe that people who don’t brush their teeth regularly could think in line with current academic standards, this too can be motive enough to reject the manuscript.

The higher a journal’s opinion of itself, the more likely an immediate rejection. The two most influential weekly scientific journals, Science and Nature, reject at least 80 per cent of the manuscripts they receive.

Quality-related criteria are not always decisive, however. A very frequent reason for rejecting a study is that it doesn’t seem ‘sexy’ enough, ‘sexiness’ meaning whether the manuscript could make it into the ‘swimsuit round’: a nice news story in some leading newspaper, such as The New York Times or The Economist.

This means that the work must definitely be politically correct. For example, if you have stumbled onto sensational results implying that women are worse at math than men, there’s no hope that your work will pass. Such a manuscript would fit better in a niche journal with a print run of 200 and no media interest. An article claiming that women could be just as good at math as men if they only wanted to would stand a good chance.

It’s probably this kind of editorial policy that made Svante Pääbo, the first scientist to work out a draft sequence of the Neanderthal genome, decline to send one of his breakthrough articles to either Science or Nature. Instead, he chose the thoroughly specialist journal Cell, which had not published many works in this particular field before. Svante Pääbo writes about it in his excellent book Neanderthal Man, which will surely become a classic comparable to Watson’s The Double Helix.

If the editor has decided that the article would suit his or her journal, what happens next? Depending on its size, which is tied to the number of manuscripts published annually, the journal has a number of associate editors doing all the dirty work. A small journal may have two or three associate editors; a larger one can have ten times as many. A journal can also be divided into three or four smaller editorial boards, each operating virtually independently. However the labour is structured, the manuscript goes to an associate editor, assigned by the editor-in-chief, who arranges the review. Usually, a manuscript is sent to two or three reviewers.

How does an associate editor choose the reviewers? There are many ways. Every editor has trusted contacts who are experts in certain fields or good at specific jobs. The electronic editorial system, which all journals use nowadays, keeps track of previous authors and reviewers: it knows who has written which articles and who has reviewed them, and it often suggests a suitable person to read and review a submitted manuscript.

The editor can also work out from the manuscript who the most important people on the topic are: those who have done significant work in the field before. A logical decision is to send the article to those whose work it disputes or builds upon.

Often, search keywords are chosen on the basis of the article and entered into a scientific database. As an editor, I most often use the database Web of Science (WoS), which currently belongs to the giant Thomson Reuters corporation. At the moment, the WoS contains over 45 million articles and books published since 1980.

Usually, it’s not hard to find relevant articles once the right combination of keywords has been entered. If there are too many results, one can check which of them have left the biggest academic footprint, measured by the number of later works that have responded to them. Their authors are the ones worth turning to.
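To make this concrete, here is a minimal sketch of the heuristic just described. It does not use any real editorial system or the WoS interface; the records, names, and citation counts are all invented for illustration.

```python
# A toy illustration of the reviewer-selection heuristic described above.
# The records and citation counts are invented; real editorial systems
# and the Web of Science have their own interfaces.

# Hypothetical search results for some combination of keywords:
# (author, title, number of later works citing the article)
hits = [
    ("M. Tamm",   "Five-factor structure revisited", 412),
    ("A. Kask",   "Personality in clinical samples",  38),
    ("L. Saar",   "Traits across 50 cultures",       290),
    ("P. Ivanov", "A note on trait stability",         4),
]

# Keep the articles with the biggest academic footprint, i.e. the most
# citing works, and propose their authors as candidate reviewers.
top_hits = sorted(hits, key=lambda rec: rec[2], reverse=True)[:2]
candidates = [author for author, _, _ in top_hits]

print("Suggested reviewers:", candidates)
# Suggested reviewers: ['M. Tamm', 'L. Saar']
```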

An outsider might find it strange that such an obstructive system has been built around the publishing of scientific articles. Why is a manuscript given to read, often to jealous colleagues, who may recommend changes or even rejection? The publishing houses that own the journals like to stress that it all takes place in the name of good scientific practice: the point of peer review is to weed out low-quality works that do not advance science, since weak or misguided articles can hinder progress or lead it down the wrong path.

The weaknesses of the peer-review system can be demonstrated by cases where good, even great, works have been rejected. One indicator of a journal’s quality is the percentage of its publications that have never been referenced. Only in rare cases does an article turn out to be a ‘sleeping beauty’, noticed and widely referenced after a long period of lethargy. Such sleeping beauties do exist, but there are very few of them.

As a rule, most references are made within two or three years of the work being published. If the references haven’t appeared by then, it’s highly likely they never will. A published article may have some hidden impact, but the lack of references is still a sign that its influence on the field differs only marginally from zero. Thus, the barrier does not always work in the intended way: good articles sometimes won’t make it, while works of no interest to anyone (including the author) end up being published.

Eugene Garfield, the creator of the WoS, analysed the database when it had only 38 million entries and found that nearly half of them had never been referenced. Certainly, there are good, even great, journals where the percentage of never-referenced articles remains marginal, but there are also many journals with very little influence. It’s not unusual for a journal’s impact factor to be barely above 0.1 (on average, one citation per ten recent articles), which in practice means that some 90 per cent of the articles published there have not been referenced even once.

There’s no point denying that the barrier of peer review influences quality, but the main reasons for its existence are monetary. Scientific journals, printed or digital, have far less space than is demanded by those who wish or need to publish their work (especially doctoral students, who otherwise could not defend their theses). The WoS alone indexed 2,209,985 works dated 2012.

From the viewpoint of a scientist, it’s not rare for a manuscript to be published only in the fourth or fifth journal to which it is sent after rejections. This implies that in addition to the two million articles published during the year, at least as many must be awaiting publication. The owners of journals explain that their journals have a limited capacity and that it simply isn’t possible to publish more than a certain quota per year. ‘Possible’ mostly means profitable: more articles mean higher expenses on workforce, paper, and so on.

The first scientific journal was published in 1665. As usual, there was a competition between the French and the English. Journal des Scavans was first published in January 1665, and the English Philosophical Transactions of the Royal Society launched in March of the same year.

One might think that the costs of paper, printing, binding, and distribution are not that large. During the Soviet regime, paper was genuinely scarce and there was an ongoing fight over quotas, so universities published articles that no one would read, on really low-quality paper. Those who were politically ‘loyal’ could acquire paper for their works, among other privileges. But even today, paper and printing ink have their price, and editing and typesetting a manuscript are jobs that require payment. The question is: who pays for all these expenses? For a long time, scientific publishing worked on the principle that the reader pays, just as with newspapers: if someone wants to read a newspaper, he or she has to buy it.

It’s clear, of course, that a person off the street would not read the Philosophical Transactions of the Royal Society, nor would there be any need. As is the case with medicines, the subscription price of a scientific journal includes much more than the production costs. For example, the Wiley-Blackwell journal European Journal of Personality, which I had the chance to edit for about four years, charges approximately 2,000 dollars for a yearly subscription. As the journal publishes about 60 articles per year, reading a single article costs a little over 30 US dollars.

That is too much to pay even for someone who frequently publishes in the same journal or whose livelihood depends on the works published there. That’s why journals offer individual subscribers large discounts while keeping institutional subscription prices sky high. But even such a relatively slim and cheap journal can only be acquired by a sufficiently large university library. So, in the end, the cost of publishing journals is covered by the taxpayer, as universities are mostly financed by the state.

There is another important issue. When a scientist has finished an article, he or she starts thinking about which journal to submit it to. One of the properties taken into account is the speed of the journal’s publishing cycle: the time that passes between the submission of the manuscript and the publication of the issue in which it appears.

Only journals of the ‘softer’ kind can afford for this cycle to be slow. In the fields of physics, molecular biology, and chemistry, no one would send his or her work to a journal where publication is known to take several years. It’s no joke.

I have first-hand experience of two years passing between an article’s acceptance for publication and its actual appearance. Most journals print with each published article the dates on which the first version and the revision were submitted and on which the decision to publish was made. Journal owners know this well and keep pressure on their editors so that the publishing cycle does not grow too long, as that could start to affect the journal’s reputation and, in turn, the number of subscriptions. The editor-in-chief very quickly makes the associate editors understand some really simple arithmetic.

It is usually public knowledge how many manuscripts a journal receives in a year. Let’s say the number is roughly 300. The owner of the publishing house has told the editor that the journal has space for 60 articles at most, no more, with their length often strictly limited as well. This makes things really simple: if the editor doesn’t want the manuscripts to pile up into a queue of a year or even more, then 60 articles must be accepted and the other 240 rejected. Editors have even learnt to cite this argument in their rejection letters.

The author is comforted with praise of his or her work: it had no major shortcomings. But then the lack of space is mentioned, along with the sad fact that since the journal can accept only 20 per cent of all submitted articles, a lucky manuscript has to be more than merely good and relatively flawless; it has to be exceptional and offer a great contribution to the knowledge we already have.
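For readers who want the editor’s ‘simple arithmetic’ spelled out, here is a small sketch using the hypothetical numbers above (300 submissions, a quota of 60). The backlog scenario at the end is an illustrative assumption of mine, not a figure from the post.

```python
# The editor's "simple arithmetic", with the hypothetical numbers above.

submissions_per_year = 300   # manuscripts received in a year
quota_per_year = 60          # articles the publisher allows per year

acceptance_rate = quota_per_year / submissions_per_year
rejections = submissions_per_year - quota_per_year

print(f"Acceptance rate: {acceptance_rate:.0%}")   # 20%
print(f"Manuscripts to reject: {rejections}")      # 240

# Assumed scenario: accepting just 10 extra articles a year soon
# creates the queue the editor is afraid of.
extra_accepted_per_year = 10
backlog_after_5_years = 5 * extra_accepted_per_year
delay_in_years = backlog_after_5_years / quota_per_year
print(f"Backlog after 5 years: {delay_in_years:.1f} years of issues")  # 0.8
```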

A keen reader can put together the facts I’ve just described and easily come to the following conclusion: whether a journal accepts or rejects an article is not only (or even mainly) an indicator of the manuscript’s quality, but of many other things not directly related to quality at all. The number of articles published is determined mainly by the publishers’ economic considerations, and when publishing a greater number of articles is financially harmful, some good or even great works will not cross the threshold.

I have met and spoken to many representatives of major publishing houses specialised in science. Each of them was a very pleasant person. Elsevier, one of the biggest science publishers, owns and publishes 2,800 journals, with over 250,000 articles published each year. Springer, with its 2,400 journals, comes close. To mention slightly smaller publishers, Sage publishes about 700 science journals, while Wiley-Blackwell has approximately 1,500.

All of these publishers love to stress that the quality of their journals depends on the editors and on harsh selection, carried out with the help of the peer-review system. What publishing houses don’t like to mention too often is that they achieve these goals essentially with slave labour.

As a rule, the editors, and even more so the tens of thousands of reviewers who read and comment on manuscripts, don’t receive any compensation for their work, so that work never appears among the publishers’ expenses. But quality is most definitely not the first or sole priority of publishers. Lately, there has been plenty of evidence that scientific publishers are no philanthropists: this is a big industry, and its main objective is to make a profit for its owners.

For example, in 2011 the turnover of the scientific publishing industry was 9.4 billion US dollars (for comparison, the Estonian state budget for 2013 was 7.7 billion euros), which went into producing some two million scientific articles in English. Simple division shows that the average cost of a single article was about 5,000 dollars (more precisely, 4,700).

The profit margins of scientific publishers are surprisingly large. Elsevier, for one, took 37 per cent of its roughly two-billion-dollar turnover as profit in 2011: nearly 740 million dollars. The other major scientific publishing houses have margins that are not much smaller: for Springer the figure was 34 per cent, approximately 300 million dollars, while Wiley-Blackwell managed 42 per cent, or 106 million. By estimate, every published scientific article brings the publisher 1,000 to 1,500 US dollars of pure profit.

For example, I published 12 articles in 2011, thereby becoming mentally richer myself and, at best, helping the owners of publishing houses make 18,000 American dollars in pure profit. (The scale changes if we take into account that in 2012 Estonian scientists contributed to approximately 2,000 articles published in journals indexed by the WoS. At 1,000 to 1,500 dollars of profit per publication, the publisher-capitalists thus made up to 3 million dollars of pure profit from Estonian scientists.)
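To make these back-of-the-envelope figures easy to check, here is the arithmetic of the last few paragraphs in one place. The inputs are the numbers quoted above; taking the upper end of the 1,000-1,500-dollar profit estimate is my choice, made so that the 18,000-dollar example works out.

```python
# Back-of-the-envelope arithmetic from the figures quoted above.

industry_turnover_2011 = 9.4e9   # US dollars
articles_per_year = 2.0e6        # English-language scientific articles

avg_cost = industry_turnover_2011 / articles_per_year
print(f"Average cost per article: ${avg_cost:,.0f}")        # $4,700

# Elsevier's 2011 profit: 37 per cent of a ~$2 billion turnover.
print(f"Elsevier profit: ${0.37 * 2.0e9:,.0f}")             # $740,000,000

# The author's estimated pure profit per published article
# (upper end of the $1,000-1,500 range):
profit_per_article = 1_500

print(f"12 articles:    ${12 * profit_per_article:,}")      # $18,000
print(f"2,000 articles: ${2_000 * profit_per_article:,}")   # $3,000,000
```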

George Monbiot published an article in the Guardian with the witty title ‘Academic Publishers Make Murdoch Look Like a Socialist’. Indeed, with such an astronomical profit margin, there’s no point in pretending that publishers are really concerned about the well-being of science and scientists, any more than the owners of a chicken farm worry about the hens laying eggs for them.

This is the first post in Professor Jüri Allik’s three-part series on scientific publishing. Coming up next: The Crazy World of Peer Review and The Future of the Worst Possible Science World.
