Tuesday, 8 September 2015

Predatory Publishing: A Modest Proposal

What many now refer to as predatory publishing first came to my attention 7 years ago, when I interviewed a publisher who — I had been told — was bombarding researchers with invitations to submit papers to, and sit on the editorial boards of, the hundreds of new OA journals it was launching. 

Since then I have undertaken a number of other such interviews, and with each one the allegations have tended to become more worrying: that the publisher is levying article-processing charges but not actually sending papers out for review, that it is publishing junk science, that it is claiming membership of a publishing organisation to which it does not in fact belong, or that it is deliberately choosing journal titles that are the same as, or very similar to, those of prestigious journals (or even directly cloning titles) in order to fool researchers into submitting papers to it.

As the allegations became more serious I found myself repeatedly telling OA advocates that unless something was done to address the situation the movement would be confronted with a serious problem. But far too little has been done, and so the number of predatory publishers has continued to grow, and the cries of alarm are becoming more widespread.

Initially, the OA movement responded by saying that it was not a real problem because most so-called predatory journals had few if any papers in them, so there could be very few researchers affected.

Nevertheless, the number of publishers listed by Jeffrey Beall as “potential, possible, or probable predatory scholarly open-access publishers” has grown year by year. Since 2011 Beall’s list has increased from just 18 publishers to 693. One has to ask: why would there have been a 3,750% increase in this number if only a handful of people ever use the journals?

When it became harder to sweep the problem aside, OA advocates shifted ground, and began to argue that while there may be an issue it was only a problem for researchers in the developing world.

But is that response not simply another way of trying to suggest that there isn’t really a problem? Either way, why would the problem be any less important if the only victims were researchers based in the developing world?

In any case, I do not believe it to be an accurate characterisation. When a recent ABC Background Briefing programme examined one suspect publisher’s operations in Australia it concluded that there was a real problem down under. And Australia can hardly be described as a developing country.

Call me a sceptic


My own personal experience likewise suggests that the problem is somewhat more widespread and worrying than is generally acknowledged. I am regularly contacted by researchers who have fallen foul of dubious OA publishers. Yes, some of these researchers are based in the developing world, but a good number are based in the developed world, and some are even based in prestigious North American universities.

So call me a sceptic over claims that predatory publishing is not a serious issue, or that it is only impacting on those based in the developing world.

I’d also have to say that when I contact universities where those who have asked me for help are based, or big publishers whose journal titles have been used as bait to gull researchers into submitting to a predatory journal, I don’t get the feeling that there is much willingness to help the victims, to tackle the problem, or even to confront it.

For their part, OA advocates often also resort to arguing that subscription publishers are also predatory, so why does Beall not include them in his list as well? While this may be true, it is not particularly helpful, or relevant, in the context of seeking a solution to the problem of predatory OA journals.

So we are left with a growing problem but little effort being put into resolving it.

What we do have is a white list run by the Directory of Open Access Journals (DOAJ), and a blacklist run by a single individual (Jeffrey Beall).

One problem with the white list approach is that it can too easily become an exclusive club (excluding, say, journals based in the developing world). Moreover, the management of DOAJ has not been trouble free. Last year, for instance, it had to remove over 650 journals from its database after it decided it needed to tighten up its selection criteria and ask publishers to re-apply for inclusion. This was necessary because it had become clear that predatory journals were finding their way into the database. But as predatory journal buster John Bohannon has pointed out, the real problem is that DOAJ doesn’t have sufficient resources to be very effective. DOAJ is, he says, “fighting an uphill battle to identify all of literature’s ‘fake journals’.”

As a lone individual, the challenge for Beall is that much greater. It is no surprise therefore that he and his blacklist are frequently (and often bitterly) criticised for including publishers without sufficient evidence that they are indeed predatory. In any case, add OA advocates, Beall is “anti-OA”, and so his list should be completely ignored. Of course, it is always much easier to criticise someone who is trying to solve a problem than to do something about it yourself.

So what is the solution? Personally, I think the problem needs to be approached from a different direction.

What is surely relevant here is that in order to practise their trade predatory publishers depend on the co-operation of researchers, not least because they have to persuade a sufficient number to sit on their editorial boards in order to have any credibility. Without an editorial board a journal will struggle to attract many submissions.

This suggests that if a journal is predatory then all those researchers sitting on its editorial and advisory boards are to some extent also predatory, or at least they are conspiring in the publisher’s predatory behaviour. After all, if members of the editorial board of a journal that was engaging in predatory activity wanted to end or curtail that activity they could join together and resign, or threaten to resign.

Yes, I know some researchers have their names listed on journal editorial boards without their permission, or perhaps even their knowledge. But the majority do so because it looks good on their CV. And in accepting an invitation to be associated with a journal most ask far too few questions about the publisher, and do far too little research into its activities, before saying yes. ABC found over 200 Australian researchers sitting on the editorial boards of a single predatory publisher’s journals. I am confident that most if not all of these agreed to sit on those boards.

So my question is this: Do these researchers not have some responsibility for any predatory behaviour the publisher engages in? Personally, I think the answer is yes!

What to do?


So what to do? Here I have a modest proposal. I don’t know whether it is practical or feasible, but I make the proposal anyway, if only to try and get people to think more seriously about solutions rather than excuses.

Why does the OA movement not create a database containing all the names of researchers who sit on the editorial and/or advisory boards of the publishers on Beall’s list, along with the names of the journals with which they are associated? Such a database could perhaps serve a number of purposes:

·         It could be used as a way of cross checking the appropriateness of a publisher/journal being listed on Beall’s site. It would at least surely focus minds, and hopefully encourage editorial boards to demonstrate (if they can) that their publisher/journal has been inappropriately placed on Beall’s list, or do something about it, if only by resigning. To help trigger this process researchers listed in the database could be contacted and told that their name was in it.

·         The database could help those thinking of submitting to a journal listed in it to more easily find and contact members of its editorial board, and before submitting ask them to personally vouch for the quality of the review process. If things then went wrong the submitting researcher could take the issue up with those board members s/he had contacted. There is nothing quite like personal recommendation, and the personal responsibility that accompanies it.

·         Researchers could also search on the database before agreeing to sit on an editorial board as part of a due diligence process. If the publisher/journal is listed in the database they could contact board members and ask them to personally vouch for the quality of the journal.

·         Researchers could search the database for their own names in order to establish whether they have been listed on an editorial board without their permission or knowledge.

·         Such a database could also quickly reveal how many journals on Beall’s list a particular researcher was associated with.

·         If editorial board members’ institutions were included in the database regular Top 10 lists could be published showing the institutions that had the greatest number of board members of journals in Beall’s list. Would that not also focus minds?

·         And if countries were included Top 10 lists of those could be published too.

I am sure people would also come up with other uses for such a database.
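To make the idea concrete, here is a minimal sketch in Python of how such a database could support the look-ups described above. All the names, institutions and journals below are invented placeholders; a real implementation would of course need verified data:

```python
# Hypothetical sketch of the proposed editorial-board database.
# Every name below is an invented placeholder, not a real person,
# institution, journal or publisher.
from collections import Counter

board_members = [
    # (researcher, institution, country, journal, publisher)
    ("A. Researcher", "Univ X", "Australia", "Journal of Y", "Publisher Z"),
    ("B. Scholar",    "Univ X", "Australia", "Journal of Q", "Publisher Z"),
    ("C. Academic",   "Univ W", "India",     "Journal of Y", "Publisher Z"),
]

# How many listed journals is a given researcher associated with?
journals_per_researcher = Counter(name for (name, *_) in board_members)

# A "Top 10" of institutions by number of board memberships
top_institutions = Counter(inst for (_, inst, *_) in board_members).most_common(10)

print(journals_per_researcher)
print(top_institutions)
```

Even a flat table like this would support every use listed above: searching for one’s own name, contacting board members before submitting, and publishing institutional or national league tables.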

As I say, I don’t know how practical my proposal is, or whether anyone would be willing to take it on — but it is worth noting that ABC has already produced a list of board members of the journals of one publisher (although without the name of the relevant journal attached). This suggests that it is feasible. In fact, creating such a database would be a great candidate for a crowdsourcing project.  

Above all, such an initiative would make an important point: responsibility for predatory behaviour needs to be pushed back to the research community.

As Cameron Neylon points out, we need to move beyond the point of seeing researchers as “hapless victims”.  They are active agents in scholarly communication, and when the publishing practices of journals with which they are associated turn out to be inadequate or deceptive researchers ought to take responsibility, not just point the finger at rogue publishers.

In any case, it is surely past time for the research community to step up and grasp this nettle.

On a more general note, creating public databases of researchers on the editorial and advisory boards of journals (both those considered predatory and those not considered so) would make the point that agreeing to be associated with a journal comes with responsibilities, that it is not just a way of padding a CV. 

Saturday, 15 August 2015

When email marketing campaigns go awry: Q&A with Austin Jelcick of Cyagen Biosciences

Earlier this week I received an unsolicited email message from a company called Cyagen Biosciences inviting me to cite its “animal model services” in my scientific publications. By doing so, I was told, I could earn a financial reward of $100 or more. And since the amount would be based on the Impact Factor (IF) of the journal in question, the figure could be as high as $3,000 — were I, for instance, to cite Cyagen in Science (IF of 30).
Austin Jelcick
The email surprised me for a number of reasons, not least because I am a journalist/blogger not a scientist. As such, I have never published a research paper in my life, and have no plans to do so. Moreover, I have only the vaguest idea of what an “animal model service” is, let alone how I would cite a company selling such a service in a scientific paper.

But mostly I was surprised that — at a time when thousands of researchers are calling for the abandonment of the Impact Factor — any company would want to tie its reputation to what is widely viewed as a sinking ship.

Curious as to why I had received such a message I searched on the Web for the company’s name, only to find that the link from Google to Cyagen’s home page delivered an error message.

Eventually locating an email address, I contacted the company and asked if it could confirm that the message I received had been sent on its behalf (it appeared to have come from a direct marketing company called Vertical Response).

The next day I received a reply from Cyagen product manager Austin Jelcick, who explained that I had received the message “as part of our marketing campaign which is currently seeking to raise awareness within the scientific community for our citation rewards program.”

As I was associated with “several blogs and articles related to open access journals and publishing” he added, it was assumed I would be interested in “our newly launched campaign to actively reward scientists for citing us in their materials and methods section while simultaneously encouraging them to submit into higher impact journals for increased awareness of both their study and our services offered.”

He added: “we felt that it would be beneficial to the researcher to receive a sort of ‘store credit’ for doing something they already must do as part of the publication process.”

Now intrigued, I invited Jelcick to do an email Q&A so that he could explain in more detail who the company was and why it had launched this campaign.

Very surprised by the offer


While I was swapping questions and answers with Jelcick by email the company’s campaign was starting to attract a good deal of commentary on the Web.

Yesterday, for instance, high profile physician and science writer Ben Goldacre published a blog post entitled, “So this company Cyagen is paying authors for citations in academic papers”.

Goldacre concluded, “Perhaps my gut reaction — that this feels dubious — is too puritanical. But I am certainly very surprised by the offer.”

Goldacre’s intervention also sparked a post over on Retraction Watch entitled, “Researchers, need $100? Just mention Cyagen in your paper!”

By now there was also a steady stream of comments from scientists on Twitter, expressing everything from puzzlement to outrage — see this, for instance.

By late yesterday Cyagen clearly felt the need to make a public statement, which it did by means of a Q&A on Facebook, explaining: “Please find below some of the questions which were asked of us and our response which should help clear up the misunderstanding which has occurred about this promotion.”

The post went on to list seven questions and answers. What the company did not explain, however, is that these had been extracted from the interview I was still in the process of doing with Jelcick. That is, Cyagen did not cite me!

What has become clear is that the company believes that its email invitation has been misunderstood. Linking to the Facebook post from a comment on Goldacre’s blog, Jelcick went so far as to complain that Cyagen has become a victim of “some gross miscommunication”.

Richard Van Noorden appears to agree, saying on Twitter that the story has been “gleefully badly reported”. He explained: “you can’t get $100 by citing them. You get a discount voucher for their products”. He nevertheless suggests that Cyagen should withdraw the offer “pronto”.

It would seem that the mistake Cyagen made was to link its promotion to the much-maligned Impact Factor, which has become a red rag to many scientists. (See also the first comment below).

Anyway, below is the full list of 17 questions and answers that make up the interview I did with Jelcick. Some of the answers are a little repetitive, but given the confusion surrounding Cyagen’s email I have chosen not to edit them.

See what you think.

The interview begins …


RP: Can you say something briefly about Cyagen Biosciences and your role in the company?

AJ: Founded in 2005, Cyagen Biosciences Inc. is a 200-employee contract research organization and cell culture product manufacturer with offices in Silicon Valley, California and China, and production facilities in China.

As a leader in custom animal models and molecular biology tools, Cyagen prides itself on the mission of bringing outstanding-quality research reagents, tools, and services to the worldwide biological research community at highly competitive prices. 

My role at the company as Product Manager ranges from marketing materials development, product/service seminars and presentations, technical support and scientific support on our animal model projects as well as our vector construction projects, and also general customer service to ensure our customers receive the best support while simultaneously delivering the highest quality products and services.

RP: In which country/state is Cyagen registered and who owns the company?

AJ: Cyagen is a privately held company with a US office and base of operations in Santa Clara, CA as well as an Asia Pacific office and base of operations in China, with production facilities in China as well.

RP: Are you not able/willing to say who owns the company or where it is registered (as in where it files its accounts)?

AJ: Cyagen is registered in both the US (California) as well as China. I am unable to provide further details of ownership aside from it being privately held.

RP: On 11th August I received an unsolicited email message that appeared to come from a direct marketing company called Vertical Response saying that Cyagen was offering to pay me to cite its “animal model services” in my scientific publications. If I did, I was told, I would get a payment based on the Impact Factor of the journal in question. In the case of a journal like Science, it added, this would be $3,000. Can you point me to a web page containing the full details of the offer?

AJ: The Citation Reward Program does not offer payment for citations, but rather a store credit voucher good for future purchases of products and/or animal model services from Cyagen for researchers citing us in their publications.

We are not asking researchers to do something that they will otherwise not do. In fact, when a researcher publishes a paper, they are ethically required to disclose in their publication the outside reagents and services they received that contributed to their research findings, including those purchased from commercial entities. Our voucher is just a way of thanking them for using our products and services that ultimately led to their published scientific findings.

It appears that our email has caused misunderstanding as to the nature of, and how we wish the researchers to respond to, the voucher. As such, we extend our full apologies to you and other researchers for the misunderstanding caused by insufficient clarity in our email. Full details on the promotion can be found here.

RP: Is there a marked difference between offering to pay an individual for citing Cyagen and offering them a store credit for citing the company?

AJ: We believe so for two reasons. The first being that citing us in their publication is a standard ethical requirement for any researcher should they have chosen to utilize our products or services in their study initially and so they are being rewarded by means of a discount on future purchases rather than a personal incentive.

Expanding on this, the second reason is that we offer an incentive that allows for future research/studies to be performed rather than a personal incentive; this way it is the research institution and their studies which are benefitted.

RP: How many scientists have shown interest in/taken up Cyagen’s offer so far, and what are your expectations here (presumably you have set aside a sum of money to cover the payments)?

AJ: Since the promotion’s launch in July, we have had only a few researchers contact us regarding the program. Again, no money is set aside as researchers are not being given financial compensation but rather store credit for future purchases.

Given that researchers who utilize our services are required to disclose the source(s) of all reagents and services used in their research when publishing a journal article, we would hope that our existing customers would take advantage of the promotion and become return customers, while new customers may be more likely to choose us over a competitor for being rewarded for something they already must do as part of the publication process. 

Why did I receive this invitation?


RP: I am actually a journalist/blogger rather than a scientist and so have never written a peer-reviewed paper. Why did I receive this invitation?

AJ: Our marketing and business development teams do their best to find researchers or associated individuals (i.e. lab managers, purchasing personnel, etc.) who are actively seeking and/or purchasing reagents and laboratory services for their research.

Due to the nature of your blog (which covers peer reviewed publications and open source journals) our staff assumed that you were involved in active publications or research leading to peer-reviewed publications.

RP: What are “animal models” by the way? Is it that Cyagen breeds rats and mice and sells them to labs for experimentation, or am I misunderstanding?

AJ: Animal models are just this: model organisms for the study of diseases and underlying biological mechanisms for the development of novel therapeutics for both humans and animals (domestic, livestock, etc.). The animal models we produce are custom engineered for the researcher based on their study.

For example, a researcher may be studying the effects of a drug on weight loss during a high fat diet and may think that Gene X may cause some sort of increased/decreased response to the drug. The researcher would then look for a mouse/rat model where Gene X is missing or overexpressed, and after contacting us, we would engineer a genetically modified mouse/rat lacking Gene X or over expressing Gene X for their studies.

RP: So my understanding was right: When you say “The animal models we produce are custom engineered for the researcher” you are talking euphemistically. An animal model is a live rat or mouse, and when you say you “engineer” animal models you are referring to a process that a layperson would call “breeding” an animal?

AJ: Yes and no. “Model” is not a euphemism but a standard term in the field, which refers to modelling a disease state or condition.

Breeding the animals is a step of the process however. To initially develop the animal models a variety of genetic engineering techniques are used including constructing synthetic DNA fragments and subsequently injecting these into an early embryonic stage of development so that they integrate with the animals’ DNA, subsequently creating an animal with foreign DNA or with portions of its DNA altered.

Also, the embryos and animals which are originally used are inbred laboratory strains such as ones you would find at a variety of animal providers or university core facilities. If you would like more information on these animal models, there are a variety of good articles on Wikipedia among others on transgenic and CRISPR mice.

RP: What is your understanding of the Impact Factor, how it is calculated, and what it measures?

AJ: Generally speaking, an impact factor (IF) is a metric used to gauge the amount of times articles in a given scientific journal are cited, thereby serving as a proxy for the journal’s importance within its field of study.

This can be calculated in two ways. The first is the number of citations in a given year to articles published in the previous two years (A) divided by the total number of articles published in the prior two years (B). A/B = IF.

The second is similar, but removes the number of self-citations (C) from (A) prior to division by (B). (A-C)/B = IF.

For example, if in 2012 Nature was cited 250 times for articles published in 2010-2011, and it has cited itself 75 times, and a total of 100 articles were published in 2010-2011, then its corrected IF would be (250-75)/100 = 1.75.
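For readers who want to check the arithmetic, the two variants can be sketched in a few lines of Python. The figures are the hypothetical ones from the answer above, not real Nature data:

```python
def impact_factor(citations, articles, self_citations=0):
    """Citations in a given year to articles from the two prior years (A),
    divided by the number of articles published in those years (B).
    Subtracting self-citations (C) gives the 'corrected' variant."""
    return (citations - self_citations) / articles

# Hypothetical figures from the example above
plain = impact_factor(250, 100)          # A/B = 2.5
corrected = impact_factor(250, 100, 75)  # (A-C)/B = 1.75
print(plain, corrected)
```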

Citing products not papers


RP: To clarify: Cyagen is asking researchers to cite its products, not papers written by its employees? If that is right, then presumably this is not about traditional citation boosting, but paying scientists for what amounts to “product placement”?

AJ: The goal is not product placement but rather: (1) to increase the number of publications featuring Cyagen as a product or service provider to add strength to our company’s reputation and visible experience; (2) to reward researchers for performing a task that is already required of them as an extra way of saying thanks for choosing us as a service provider to begin with.

Researchers already must (ethically) disclose in their publications any references cited, any sources of reagents and services, and any collaborations. Should a researcher purchase cell media from Company X, and mouse breeding services from Company Y, while collaborating with Professor Z, they would have to disclose this in their publication so that all entities receive credit for their contribution to the study.

Additionally, most studies are not “endpoint” studies so to speak, with many having follow up studies to determine further details.

As a result, it is beneficial to a researcher and their laboratory budget if they receive a store credit for their next purchase as a reward for their initial purchase and subsequent published paper as this decreases the cost of their next study. This is similar to receiving coupons after a purchase at a retail store good for use on the next visit.

RP: Do you feel that there are any ethical issues involved in offering to pay researchers to cite products in their papers?

AJ: Again, we are not offering to pay researchers to cite our products or services, but rather we are offering them an incentive to become a repeat customer. Rather than offering a discount on a current order (which we have done as part of other promotions) we are offering a discount by means of a store credit on future purchases.

Because of this, we do not feel that any more ethical considerations are raised than if we were to offer discounted services from the beginning to gain customers’ interest in choosing us over a competitor.

RP: When I contacted you about the message I was sent you replied that Cyagen was encouraging researchers “to submit into higher impact journals for increased awareness of both their study and our services offered”. The ethics of paying people to cite Cyagen products in research papers aside, do you not feel it to be a somewhat retrograde step to encourage researchers to chase after Impact Factors at a time when many are calling for the downgrading or extinction of the Impact Factor? For instance, over 12,000 researchers and nearly 600 organisations have signed the San Francisco Declaration on Research Assessment recommending that the research community should not “use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.”

AJ: The encouragement of researchers to seek higher impact factor journals is based on the existing academic thought that impact factor still is a somewhat relevant metric for assessing the importance of a given journal or article, simply because it is based on the number of times a paper/journal is cited by other publications.

This being said, impact factor is just that: a metric. The number of times a paper has been cited does not necessarily reflect the quality or integrity of a publication or journal, as many researchers are aware of pioneering and ground-breaking studies which were looked over at the time of publication but years later came into popularity and prevalence.

As many researchers still actively look to publish in large, popular journals simply due to their exposure and subsequent citations by other researchers in order for their own studies to gain exposure and relevance, we felt that offering store credits to researchers for something they already were doing or attempting to do was a sort of easy and simple method for researchers to obtain discounts on future orders as opposed to discounts requiring minimum purchases, quantities, etc.

Unscientific and deceptive?


RP: The point critics of the IF make, I think, is that while it may say something about some of the papers that a journal has published over a specific time period, it says very little (or nothing) about an individual paper published in that journal. Is it not the case, for instance, that a paper published in a high IF journal may receive no more — and possibly fewer — citations than if it were published in a small specialist journal with a low or no IF? Is that not the reason why there is now a strong movement against the use of the metric: it tells you little or nothing about the quality of an individual paper? If so, is not Cyagen encouraging researchers to engage in a practice that is now widely held to be unscientific and deceptive? Would you agree with that?

AJ: You are correct in your statement regarding a given article within a given high (or low) impact journal. Ultimately the impact factor of the journal does not directly relate to the quality or number of citations any given article in it may have or receive.

As someone who has previously published in open source journals (PLOS ONE) I would also add that there are a number of high quality articles and studies in smaller open source journals, and that a higher impact factor journal may have the same number of high quality studies as any other journal.

The IF metric, however, is not deceptive or unscientific; it is simply one measurement of a journal’s relevance to the field it is a part of. It may be going too far to say it has no relevance to the quality of the paper, as high impact journals tend to have a higher “level of entry”, but it is not a perfect metric. Similar to any survey or poll which attempts to measure a general trend, opinion, or importance, it must be taken with a grain of salt and in conjunction with other metrics for a true observation to be drawn.

Our citation rewards program in its current state is our first attempt at rewarding researchers for performing an aspect of research which they normally do already as an attempt to gain repeat customers. As we receive more feedback on the program, we may update and alter the program so that the promotion details are clear and that all previous customers (who actively publish studies) receive credit towards future purchases.

RP: Here is what I still don’t quite get: you said that researchers are “ethically required to disclose in their publication the outside reagents and services they received that contributed to their research findings, including those purchased from commercial entities.” You also said “We are not asking researchers to do something that they will otherwise not do”. Why then would you go to the effort of emailing them to say, “We are giving away $100 or more in rewards for citing us in your publication!” And why would you say to me when I contacted Cyagen that the aim is to “increase the number of publications featuring Cyagen as a product or service provider”? I hear what you say about wanting to thank people for being customers and encourage them to buy again, but would it not be better to focus on improving your products and your customer service than paying them for doing something you say they are bound to do in any case? More pertinently perhaps, is it wise to link your offer to something as controversial as the Impact Factor? As I think you acknowledged, researchers have not responded too well to the offer. Here are some of the responses on Twitter: one, two, three and four.

AJ: As I mentioned, we are constantly looking for ways to improve the promotion as well as new promotions. As we receive feedback from the research community, we will update promotions accordingly so that they are both beneficial and clear without cause for any sort of ethical concern. Impact factor was initially decided upon based on its historical usage by the research community. Based on feedback, we may alter the promotion to simply offer a store credit for each publication, independent of impact factor.

With regards to the previous statement about our aim to “increase the number of publications”: we frequently receive inquiries from new customers requesting references or citations from researchers who have utilized our services in the past, and we do our best to list on our website, as a reference, all publications citing us (found through periodic PubMed queries). By offering this promotion, researchers will actively reach out to us to inform us of their latest study and publication, allowing us to update our citation/reference database more quickly and thereby increasing the number of publications citing us that are available for new customers to view and reference.

Additionally, although researchers are ethically required to disclose all sources of reagents, services, and collaborators in their materials and methods, they may not always remember to do this. So the promotion serves a dual purpose, as a reminder to do so.

Error message


RP: I received the marketing email two days ago. When I searched on your company name I found the home page but when I tried to access it I was told it was down. It is still down as I write this. Were you aware that your home page was giving an error message?

AJ: Our staff has been unaware of any downtime suffered by our website, as we have received no reports of downtime from either customers or staff members using the website this week, including today.

It may have been the result of an improper link or URL. The website is available at: http://www.cyagen.com/us/en/

RP: Perhaps it is because I am based in Europe, but if you search “Cyagen Biosciences” on Google and click the first hit that comes up you get the following result:


That is how it has been for the past few days, and I am still getting the message as I write this follow-up question. I am not sure this could be described as an improper link, because the URL that Google has indexed (and which is not working) is www.cyagen.com/. This seems likely to work against Cyagen attracting many new customers to its services. Would it not be better to focus on improving the way that customers find you than on mass mailing them and offering to pay them for something they have to do anyway?

AJ: This is the first report of any downtime of our website in Europe, and we thank you for providing a screenshot of the error which occurred (we will forward this to our IT team to solve the issue as soon as possible). This may be a regional DNS error, as we have not had any reports of errors within the US, Canada, or the Asia Pacific region (see below for a screenshot from a US-based inquiry).

Mass emailing is only one small aspect of our marketing strategy. We are actively engaged in optimizing our website (the newest revision launched just this summer), including SEO and relevant keywords, in addition to running various promotions.

The citation rewards program is just one of several current promotions including free vector designs for nuclease based animal projects, reduced pricing for more popular mouse strains, and more.

Additionally, we have regional sales representatives who visit universities and companies on site for seminar presentations as well as attend vendor shows to enhance our customer exposure, and are constantly looking for new and innovative ways to expand our visibility.





Wednesday, 12 August 2015

Open peer review at Collabra: Q&A with UC Press Director Alison Mudditt

Earlier this year University of California Press (UC Press) launched a new open access mega journal called Collabra. Initially focusing on three broad disciplinary areas (life and biomedical sciences, ecology and environmental science, and social and behavioural sciences), the journal will expand into other disciplines at a later date.

One of the distinctive features of Collabra is that its authors can choose to have the peer review reports signed by the reviewers and published alongside their papers, making them freely available for all to read — a process usually referred to as open peer review.

This contrasts with the traditional approach, in which the reviewers’ names are generally not disclosed to the authors, the authors’ names are not disclosed to the reviewers, and the reviewers’ reports are not made public (commonly referred to as “double-blind” peer review).

Since Collabra is offering open peer review on a voluntary basis it remains unclear how many papers will be published in this way, but the signs are encouraging: the authors of the first paper published by Collabra opted for open peer review, as have the majority of authors whose papers are currently being processed by the publisher. Moreover, no one has yet refused to be involved because open peer review is an option, and no one has expressed a concern about it.

Collabra’s first paper — Implicit Preferences for Straight People over Lesbian Women and Gay Men Weakened from 2006 to 2013 — was published on 23rd July, and the reviewers’ reports can be found here.

So how does open peer review work in practice and what issues does it raise? To find out I emailed some questions to UC Press Director Alison Mudditt, whose answers are published below.
Alison Mudditt


RP: Presumably both the author and all the reviewers have to agree to open peer review before Collabra can publish the reviews? What percentage of the papers it publishes does Collabra expect will have the reviews published alongside?

AM: Authors choose open peer review as an option upon submission, so it is always their decision and as such they have already agreed in advance. Reviewers are made aware that authors have chosen this option and could opt to decline the review if they are unwilling to have their review comments made publicly available.

As a secondary option, whether or not open review has been chosen by the author, reviewers can sign their reviews. So it is possible to have reviewer comments be open, but the identity of the reviewer remain anonymous. Or, for that matter, have closed review, but reviewers sign their reviews. This is all described here.

With only one published article it is hard to project what the percentage will be, but at this point the majority of authors—for the papers currently being processed in our system—have opted for open review.

We are not targeting certain percentages, but rather want to put new options in front of people, especially given the numerous critiques of traditional closed peer review systems. This will not be for everyone, but we believe there’s much to be learned from experimentation with new models.

RP: Will Collabra make any effort to seek out reviewers who are comfortable with open peer review?

AM: The academic editors are selecting reviewers, and their top consideration will of course be the reviewer’s expertise for any given paper.

We make all the options and elements of Collabra clear when inviting external editors to be involved. Some editors are particularly interested in the open review option, and other editors have not commented on it.

No one has refused to be involved because it is an option or expressed a concern about this option.

RP: I assume that not all the correspondence is shared when Collabra publishes the reviews, and perhaps they might be edited in some way first (at least sometimes)? If so, what considerations/editorial rules are applied before making reviews public?

AM: Currently, the “open review file” is constituted by the reviewers’ comments on the reviewer form, the editor’s comments to the author based on the reviewers’ comments, and the author’s response—all as captured in our editorial system.

It is clear on the review form that there is an area for confidential comments to the editor that would not be shown to the author nor included in the openly available comments. But, for the remainder of the form, it is made clear that comments may be seen by the author and used without editing.

What is not currently shown is any earlier version of the paper and any comments or tracked changes on that. We will continue to monitor this policy and will consider other options, if it seems that useful or important elements are being omitted by not including earlier versions/changes.

And, obviously, if any discussion occurs outside of the editorial system between a reviewer and an editor, that will not be captured.

As regards editorial rules and considerations for any edits or omissions, we would discuss that with the editors as they came up. It is hard to say in advance what that might be (other than any information which is confidential and not even being revealed in the paper), so we’ll deal with that on a case by case basis.

Naturally, we would opt to be transparent about this happening should it occur beyond normal confidentiality considerations. For now we will see how it goes, with it being clear on the form that comments may be used as written.

RP: Having started down this road (and so given concentrated thought to the matter), what would Collabra say were the pros and cons of open peer review?

AM: Speaking on behalf of UC Press (I’m not sure it’s appropriate to speak as “Collabra” in this context), we think that the inner workings of the peer review process are, purely and simply, interesting for any reader, but in particular for people who would like to see more transparency in this process.

There is clearly an argument to be made that making things open (rather than, for example, the double blind process) will help to reduce biases, problematic opinions, or hierarchical sensitivities that can affect the review process.

Equally importantly, open review starts to demonstrate the value added by the review process and to recognize the contributions of reviewers to scholarship.

Finally, we all know that traditional peer review has not put a stop to whole disciplines being rocked by scandals of fabricated data and unquestioned results, and it’s possible that open peer review will actually help to improve the scholarly record.

On a related note, one of our other aims with Collabra is to get rid of the phrase “peer review lite” which has plagued the type of review that Collabra (and other OA titles) employs.

We characterize our review criterion as being “selective for credibility only”—checking for the scientific, methodological, and ethical rigor of a paper, and removing, as much as humanly possible, more subjective reviewing criteria for novelty or anticipated impact. Open reviews will support this mission—to show that there is nothing “lite” about this kind of review (and in fact, sometimes quite the opposite).

It’s too early for us to be able to identify specific problems with open peer review for Collabra, although we are aware of studies suggesting that it may be harder to get reviewers and it may lengthen the review time. Our limited experience so far does not support either of these concerns.

The other cons of open peer review (as opposed to double-blind review) clearly have to do with concerns about bias, the highly variable nature of peer review, and the additional costs it could impose on an already overtaxed system.

For example, a reviewer might be worried about openly and critically reviewing a more senior author and believe there could be a negative effect on her own career.

Our hope is that a more open system will improve the integrity of the peer review process, but the reality is that any system will be subject to the biases of human nature — we just think such biases are more likely to be surfaced through greater transparency.

RP: Does Collabra think that there are occasions when open peer review is inappropriate? If so, when and why?

AM: Anything of a confidential nature raised in peer review which does not make it into the published article should be carefully removed from any review comments that are published during open review.

That said, we (UC Press) are not really the drivers of how open peer review will evolve in Collabra or elsewhere. Since Collabra works only with external editors, editorial policies should emerge that are firmly based on the standards of each research community that publishes in Collabra.

If a community-driven majority standard emerged which stated that, in certain situations, open peer review was inappropriate, then we would respect such a decision.

RP: Are there any other learning points that have emerged as Collabra has sought to implement open peer review?

AM: It’s too early in the launch of Collabra to really be able to comment, although we have been pleasantly surprised at authors’ and reviewers’ willingness to consider the option of open peer review. That seems to be a great start for this concept.


An earlier Q&A with Alison Mudditt can be read here.



Wednesday, 22 July 2015

Emerald Group Publishing tests ZEN, increases prices: what does it mean?

When in July 2012 Research Councils UK (RCUK) announced its new open access (OA) policy it attracted considerable criticism.
Photo courtesy of swiftjetsum626
Initially this criticism was directed at RCUK’s stated preference for gold OA, which universities feared would have significant cost implications for them. In response, RCUK offered to provide additional funding to pay for gold OA, and agreed that green OA can be used instead of gold (although RCUK continues to stress that it “prefers” gold).

At the same time, however, the funder doubled the permissible embargo period for green OA to 12 months for STM journals and 24 months for HSS journals. This sparked a second round of criticism, with OA advocates complaining that RCUK had succumbed to publisher lobbying. The lengthened embargoes, they argued, would encourage those publishers without an embargo to introduce one, and those who already had an embargo to lengthen it.

There was logic in the criticism. One rational response to the adjusted RCUK policy that profit-hungry publishers would be likely to make would be to dissuade authors from embracing green OA (by imposing a long embargo before papers could be made freely available), while encouraging them to pick up the money RCUK had put on the table and pay to publish their papers gold OA instead (thereby providing publishers with additional revenues).

It was therefore no great surprise when, in April 2013, Emerald Group Publishing — which until then had not had a green embargo — introduced one. Nor was it a surprise that it settled on the maximum permitted period allowed by RCUK of 24 months.

It was likewise no surprise that Emerald’s move also attracted criticism, not just from OA advocates but (in May of that year) from members of the House of Commons Business, Innovation and Skills (BIS) Committee, which was at the time conducting an inquiry into open access.

When taking evidence from the then Minister of State for Universities and Science David Willetts, for instance, the MP for Northampton South Brian Binley said “We have received recent reports of a major British publisher revising its open access policy to require embargoes of 24 months, where previously it had required immediate unembargoed deposit in a repository.” Binley went on to ask if Willetts could therefore please have someone contact the publisher and investigate the matter.

At the time I also contacted Emerald. I wanted to know the precise details of its new policy and to establish who would be impacted by it. This proved a little difficult, but it turned out that Emerald had introduced a “deposit without embargo if you wish, but not if you must” policy — an approach pioneered by Elsevier in 2011, but which it recently abandoned.

While the wording of the Emerald policy may have changed a little since it was introduced, at the time of writing it appeared to be the same in substance: authors are told that they can post the pre-print or post-print version of any article they have submitted to an Emerald journal onto their personal website or institutional repository “with no payment or embargo period” — unless the author is subject to an OA mandate, in which case a 24-month embargo applies.

ZEN = “Zero Embargo Now”


Embargoes have been contentious for as long as researchers have been self-archiving their papers on the Web. Publishers have always maintained that green OA threatens their revenues. Their claim is that libraries will inevitably cancel the subscription of any journal whose contents are freely available elsewhere. As Elsevier’s Alicia Wise put it recently, “an appropriate amount of time is needed for journals to deliver value to subscribing customers before the manuscript becomes available for free. Libraries understandably will not subscribe if the content is immediately available for free.”

Open access advocates reject this claim, arguing that there is no evidence that embargoes have a negative impact on journal subscriptions. Consequently, they say, there is no need to embargo self-archiving. Speaking at a conference celebrating the tenth anniversary of the Berlin Declaration on Open Access in 2013, Glyn Moody therefore called for “the ZEN approach” to open access — as in “Zero Embargo Now”.

Given this background, I was intrigued by a recent news item on Library Journal’s infoDOCKET reporting that Emerald has decided to undertake what it calls a Zero Embargo trial.

The trial, which will involve 21 Library and Information Science and Information and Knowledge Management journals, will allow researchers submitting to these journals (even if the author is subject to an OA mandate) to deposit the post-print versions of their articles “into their respective institutional repository immediately upon official publication, rather than after Emerald’s 24 month embargo period for mandated articles”.

It is an interesting development. But what impact is it likely to have? That we do not know, not least because — somewhat ironically, given that they have historically been some of the most vociferous advocates for OA — librarians have not been good at walking the talk on open access. We also do not know how many librarians are subject to an OA policy, and those who are not are already free to self-archive immediately.

Explaining in its press release why it has introduced the trial the publisher said: “Emerald made the decision to trial the zero month embargo period following consultation with its newly formed Librarian Advisory Group (LAG). The group is made up of leading editors and authors from Emerald Library Studies and Information Management journals, alongside other key academics in Library Studies and adjacent disciplines. The group discusses and advises Emerald on issues of common interest in the LIS field including Open Access policy and editorial best practice.”

Price hike


What Emerald has not been trumpeting, however, is that it is simultaneously increasing the article-processing charge for 32 Engineering and Technology journals, from £995 ($1,595) to £1,650 ($2,695) per paper — a rise of nearly 70%.
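The size of the rise is easy to verify from the figures above. A minimal sketch (using the sterling prices quoted in this post):

```python
# Sanity check on Emerald's reported APC increase for its 32
# Engineering and Technology journals (figures as quoted above).
old_apc = 995    # GBP, previous article-processing charge
new_apc = 1650   # GBP, new article-processing charge

rise_pct = (new_apc - old_apc) / old_apc * 100
print(f"APC rise: {rise_pct:.1f}%")  # prints "APC rise: 65.8%"
```

At 65.8%, "nearly 70%" is a fair characterisation of the increase.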

This is a hefty increase, and will doubtless spark a sense of déjà vu for some. In the 1990s Emerald became the target for heavy criticism for increasing the price of its journal subscriptions precipitously. Indeed, some believe that the opprobrium Emerald attracted at that time informed its later decision to change its name (the publisher was previously called MCB UP). It was assumed that the name change was intended to distance the company from the negative image it had acquired — although Emerald has denied that this was the reason.

Either way, the name change did not put an end to controversy. Emerald attracted further criticism in 2005, when Phil Davis, then at Cornell University, reported that the publisher had been covertly republishing hundreds of articles in its journals without citing the original source.

So why has Emerald chosen to trial ZEN with some of its library journals, what role did the LAG play in the decision, and what do members of the LAG feel about the associated 70% increase in the APCs of 32 engineering and technology journals?

In the hope of finding out I emailed Emerald and asked where I could find a list of advisory group members. It turns out that these are not publicly available. “The Librarian Advisory Group (LAG) are a newly formed international group who have not given Emerald permission to share their details so the list is not publically available,” an Emerald spokesperson told me. “The LAG advised us on issues relating to the zero embargo period for Library and Information Science and selected Information and Knowledge Management journals trial. The trial aims to find a sustainable path and we will be monitoring the impact as it progresses.”

I asked if Emerald could nevertheless put me in touch with the librarians privately. “I’m afraid I can’t share the details without their permission”, the Emerald spokesperson replied. “Hopefully you can appreciate we have to follow our data protection and confidentially procedures. However if you’re happy to leave it with me, we can ask the group if they consent to their names being shared. I can then let you know what details they consent to releasing but I can’t guarantee that I can get a response to you immediately.”

Two days later, out of the blue, I received a follow-up message from Emerald: “For the avoidance of doubt, our APC charges are not subject to discussion with the LAG,” this read, and added, “To maintain our agreed confidentiality we will not be able to provide you with contact details at this time.”

So what do other librarians think of the ZEN trial? When I pointed one (who is not on the LAG) to Emerald’s announcement he commented, “Well, I think this is a step in the right direction from Emerald, but I’m also not surprised that they did this with library journals, which are inexpensive and not cited much. They probably wouldn’t allow this with Chemistry journals for fear that self-archiving could harm downloads. They may be trying to outsmart librarians here!”

When I asked him what he meant he pointed me to a table in the Library Journal (#3 here) listing the average cost of journals by subject area. This reveals that the average library journal subscription is $493 per annum compared to $2,281 for engineering journals, and $1,876 for technology journals (The average cost of chemistry journals is $4,333). Clearly it would be less damaging to lose a few library journal subscriptions than to lose subscriptions to engineering and/or technology journals.

The follow-up message from Emerald provided me with the following additional quote on the ZEN trial: “[W]e think it represents an excellent opportunity to learn by working collaboratively with the community. Emerald will continue to work with its Librarian Advisory Group (LAG) to assess the impact of the trial, by monitoring the quality and volume of submissions, feedback from authors, and readership figures from both the Emerald platform and institutional repositories. Evaluation of this trial will help to inform Emerald's future Open Access policies and initiatives.”

It is perhaps important to note here that even if Emerald were to offer ZEN for all its journals, it would be doing no more than reverting to its previous position, a position that, when speaking to me in 2001, Emerald’s then business development director Kathryn Toledano had implied gave Emerald a competitive advantage. Self-archiving, she said, “is a realistic need for many authors, and we would rather allow this than miss out on the potential of high-quality articles that may be published elsewhere.”

The implication is that by offering ZEN a publisher can hope to attract authors who might otherwise publish elsewhere. Might it be, therefore, that Emerald has decided to test ZEN because it has experienced a fall in submissions from librarians in the wake of imposing its embargo?

What seems odd, however, is that Emerald insists on keeping the names of the LAG secret. After all, in its press release it made a point of saying that it had consulted with the group. Given that, why would it want to withhold their names? And if it is the librarians themselves who want to remain anonymous, we must wonder why they are so shy.

Based on market and competitor analysis


When I asked the Emerald spokesperson why the publisher had decided to increase the APCs for 32 of its journals, and why the rise was quite so precipitous, she replied: “The decision, based on market and competitor analysis, will bring Emerald’s APC pricing in line with the wider market, taking a mid-point position amongst its competitors. The increased price point will also enable the company to better support the author community as OA developments continue to evolve.”

This would seem to imply that Emerald’s pricing policy is based not on what it costs to publish an article or journal (plus an element of profit), but on what other publishers charge. When I put this to the Emerald spokesperson she replied, “Naturally Emerald took lots of different elements into consideration such as our own business costs and market analysis, however the decision also seriously considered how we can better position the portfolio to support the author community for the future.”

In her follow-up message two days later she added: “Emerald is fully committed to maintaining a fair price for its Gold OA option for authors and funders. With this in mind, we have been continually reviewing the level of APCs since introducing our Open Access option. We feel the APCs currently in place will support on-going OA initiatives that help the scholarly communities we work with to make a greater impact with the research they publish.”

But what does this all mean? It is worth remembering that OA advocates have always insisted that open access would act as a disruptive force in the scholarly communication market. For instance, they said, by lowering the cost of entry it would allow new publishers and new products to emerge. And by leveraging web technology these new entrants would completely reinvent scholarly publishing for the networked age. Amongst other things, this would increase the speed and efficiency with which research was shared, enabling better and faster innovation, to the benefit of the whole of society. Importantly, they added, it would lower the costs of scholarly publishing, and so resolve the affordability problem that has held the research community in its grip for several decades now.

To date none of these objectives has been realised. While we have seen a few experiments in alternative peer review practices, and new ways of trying to measure the quality and impact of research, the outdated journal model continues to dominate, the quality of papers has fallen, retractions have increased, and sharing research remains a slow and inefficient process.

As former CEO of scholarly publisher De Gruyter Sven Fund noted recently, OA has not changed the game in any meaningful way. “While it has achieved remarkable change within the system, this has not led to a paradigmatic change”, he said, adding that this is partly because “its disruptive potential has been rather fenced during the past years.”

Meanwhile, traditional publishers are in the process of capturing open access in order to exploit it for their own ends, with the result that costs are rising rather than falling — as evidenced by the fact that large subscription publishers appear to be hoovering up most of the money that research funders like the Wellcome Trust and RCUK are making available in order to fund gold OA (and this is in addition to the subscription revenues they continue to earn).

It is no surprise, therefore, that large publishers continue to enjoy operating profits of around 34% to 40%, a level widely felt to be far too high. (See here, here, here, and here for example).

Emerald’s recent price increase would seem to confirm that the trajectory for prices is up rather than down. And for so long as publishers set the price of their OA services at a level intended to preserve their historical revenues (or at a level that matches what their competitors charge), the much-anticipated cost savings OA was expected to deliver are unlikely to be realised.

Consolidation rather than disruption


But there is more to explore here. In the year before I spoke to Toledano (2001), Emerald’s profits had risen by 47%, to £7.5 million. Importantly, its operating profit as a percentage of turnover, at 38%, was comparable to that of the large scholarly publishers.

In order to compare this with Emerald’s current performance I took a look at the publisher’s recent financial figures. As these do not appear to be on its website (perhaps because it is a private company), I downloaded several years’ worth of the financial reports that Emerald has filed at UK Companies House.

These show that last year (2014) Emerald’s operating profits were just £219,401 higher than in 2001. The year before that (2013) they had been £334,543 lower than in 2001, while in 2012 they had been £89,868 higher.

Significantly, while Emerald’s turnover has increased from around £20 million in 2001 to £36.7 million today, its operating margin has declined to 21%. This may still be a margin many industries would envy, but what do we make of the fact that Emerald’s profitability over the past 13 years has declined? Has Emerald not been as canny as its larger competitors, or does it tell us something about the scholarly communication market? Does it, for instance, support the OA movement’s assertion that open access will inevitably exert downward pressure on publisher profitability, and so eventually resolve the affordability problem?
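These figures hang together. A rough consistency check, using the approximate turnover and margin numbers cited above (2001 turnover is described only as "around £20 million", so the result is necessarily approximate):

```python
# Rough consistency check on Emerald's reported turnover and margins.
# All figures are as cited in this post; 2001 turnover is approximate.
turnover_2001 = 20_000_000   # GBP, "around £20 million"
margin_2001 = 0.38           # 38% operating margin
turnover_2014 = 36_700_000   # GBP
margin_2014 = 0.21           # 21% operating margin

profit_2001 = turnover_2001 * margin_2001   # ~£7.6m, close to the reported £7.5m
profit_2014 = turnover_2014 * margin_2014   # ~£7.7m

print(f"2001 operating profit ~ £{profit_2001 / 1e6:.1f}m")
print(f"2014 operating profit ~ £{profit_2014 / 1e6:.1f}m")
```

The implied 2014 profit is only around £0.2 million above the 2001 figure, consistent with the £219,401 difference shown in the Companies House filings: turnover has nearly doubled while profit has barely moved.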

Clearly, we cannot generalise from just one company. Nevertheless, as noted earlier, Emerald’s recent hike in prices does not appear to suggest that the publishing costs incurred by the research community are on a downward path.

As also noted, the assumption made by the OA movement was that the web would allow a host of small, innovative new companies to enter the scholarly publishing market, and impose intense competitive pressure on incumbents. Amongst other things, they said, this would drive down prices. But while we have seen companies like PLOS and PeerJ emerge, it appears that the web has accelerated consolidation in the industry rather than disrupted it. This has allowed incumbents to control pricing. And as the big beasts get bigger so the affordability problem gets worse.

The extent to which just a few large companies now dominate the scholarly publishing market is clear to see if one reads a recent paper entitled The Oligopoly of Academic Publishers in the Digital Era, published in June in PLOS ONE.

As the paper’s abstract puts it, “The consolidation of the scientific publishing industry has been the topic of much debate within and outside the scientific community, especially in relation to major publishers’ high profit margins. However, the share of scientific output published in the journals of these major publishers, as well as its evolution over time and across various disciplines, has not yet been analyzed.”

Consequently, it says, “This paper provides such analysis [and] shows that in both natural and medical sciences (NMS) and social sciences and humanities (SSH), Reed-Elsevier, Wiley-Blackwell, Springer, and Taylor & Francis increased their share of the published output, especially since the advent of the digital era (mid-1990s). Combined, the top five most prolific publishers account for more than 50% of all papers published in 2013. Disciplines of the social sciences have the highest level of concentration (70% of papers from the top five publishers), while the humanities have remained relatively independent (20% from top five publishers).”

In other words, rather than curbing the power of large publishers, the digital environment and open access have conspired to increase their domination. This in turn is allowing them to set their own prices. Indeed, by introducing hybrid OA, they are now able to double charge as well, gouging the public purse as never before (as the Wellcome Trust and RCUK figures cited above show). And as Emerald’s price increase demonstrates, smaller publishers look to their competitors when setting their prices. Essentially, scholarly publishing has become a pricing arms race.

However one looks at it, any expectation that open access will lower costs currently appears a forlorn one, and the affordability problem can only be expected to worsen going forward. Surprisingly, however, this looks to be bad news not just for the research community, but for smaller publishers too. Let’s see why.

A scale game


When I asked Claudio Aspesi, a senior research analyst at Bernstein Research specialising in scholarly publishing, about Emerald’s financial reports he confirmed that the company’s operating margin has fallen to 21%. But he added: “I am not shocked that a smaller publisher should have lower margins than Elsevier — in the end this is a scale game.”

He explained, “One way to think about this is that Elsevier achieves about £1.1 billion in revenues with about 2,500 titles (i.e. £440,000 per title), while Emerald has revenues of £36.7 million with about 290 titles, i.e. £126,000 per title (and this is a generous assumption, since their revenues also include books, while the Elsevier data includes only journals). Unfortunately, we do not get an annual article count (the web site mentions 80,000 articles — but it does not clarify whether this is an annual number).”

In other words, Aspesi added, “Elsevier gets £1.1 billion in revenues off 350,000 articles, which equates to about £3,100 per article, while — if we assume Emerald publishes 80,000 articles — it makes £460 per article. This is not unreasonable for a small publisher, but it also explains the gap in profitability.”
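For readers who want to check Aspesi’s arithmetic, the per-title and per-article figures he quotes can be reproduced in a few lines of Python. Note that the 80,000-article count for Emerald is, as he says, an assumption rather than a published annual figure:

```python
# Back-of-the-envelope check of Aspesi's scale comparison.
# Revenue and title counts are those quoted in the text;
# Emerald's 80,000-article figure is an assumption.

elsevier_revenue = 1_100_000_000  # £1.1 billion (journals only)
elsevier_titles = 2_500
elsevier_articles = 350_000

emerald_revenue = 36_700_000      # £36.7 million (includes books)
emerald_titles = 290
emerald_articles = 80_000         # assumed annual count

print(f"Elsevier: £{elsevier_revenue / elsevier_titles:,.0f} per title, "
      f"£{elsevier_revenue / elsevier_articles:,.0f} per article")
print(f"Emerald:  £{emerald_revenue / emerald_titles:,.0f} per title, "
      f"£{emerald_revenue / emerald_articles:,.0f} per article")
```

The division gives roughly £440,000 per title and £3,100 per article for Elsevier against roughly £126,000 per title and £460 per article for Emerald, matching the figures quoted above and making the scale gap plain.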

Emerald’s latest price increase, therefore, needs to be seen in this context. Faced with a falling operating margin, the publisher presumably feels compelled to keep up with its larger competitors on pricing. In doing so, however, it may be running just to stand still.

Given this, Emerald’s (currently limited) return to ZEN would seem to make sense. As Toledano suggested in 2001, allowing immediate self-archiving could provide smaller publishers with a competitive advantage. This seems all the more likely in light of Elsevier’s recent tightening up of its self-archiving rules (introducing embargoes where they did not previously exist, amongst other things).

But here is the interesting question: is it more likely that Emerald will return to ZEN with all its journals, or that it will extend its recent price increase to all its journals?

Given RCUK’s preference for gold OA, and its current willingness to pay publishers’ asking prices, the latter might seem more likely. RCUK’s policy is fuelling price inflation. And since the oligopolists of scholarly publishing appear free to charge what they want, they will naturally seek to extract more and more money from the public purse each year, enabling them to get bigger and bigger.

As a result, the pressure on smaller publishers to increase prices in the slipstream of their larger competitors can be expected to grow. Not only do they need to keep up with the market price, but as the Big 5 get bigger we can anticipate that it will become more and more difficult for smaller players to maintain their operating margin, not least because they can only dream of the economies of scale enjoyed by the behemoths.

——

In the interests of fairness, on Monday I forwarded a draft copy of the above text to Emerald Group Publishing, indicating that I would be happy to post a response beneath my text. At the time of publication I had received no reply from the company.