Wednesday 22 June 2011

Peer review: Still no practical alternative?

The UK House of Commons Science & Technology Select Committee is currently conducting an inquiry into peer review. It held its fourth oral evidence session on June 8th, taking evidence from both funders of scientific research and from Government. 

UPDATE: THE COMMITTEE’S REPORT HAS NOW BEEN PUBLISHED. THE DETAILS ARE AVAILABLE HERE.

While the Committee gave no specific reason for launching the current inquiry, it seems evident from the questions MPs have been asking that two particular incidents have been exercising their minds: the long-running saga over Andrew Wakefield and the MMR vaccine scare, and the so-called Climategate incident.

In the June 8th session the Committee asked questions not just about the efficacy of peer review, but about scientific fraud, bias, the willingness of universities to investigate allegations of misconduct, and the need to make research data freely available so that others can access, examine and test them.

MPs seemed particularly concerned that universities may be unwilling to investigate claims of misconduct. As MP Graham Stringer put it in one of the questions he asked the witnesses, “There is a certain amount of evidence that very little fraud is detected in universities and major research institutions in this country. Do you think we should be doing more to try and detect that, because in one sense there is an interest within those bodies not to discover or expose the problems they have, to sweep it under the carpet, isn’t there? If you are running a university and you find you have a researcher who just writes down his figures without doing the work, which has happened in one or two cases, the university doesn’t want to say that it has been employing a fraudster for 10 years, does it?”

Politicians also probed the witnesses about the use of journal impact factors as a “proxy measure for research quality” when assessing the performance of academics, and whether “the growth of online repository journals” like PLoS ONE is a “technically sound” development.

Robust defence

For their part the witnesses put up a robust defence of current practices. They denied that universities would cover up fraud; they dismissed suggestions that the impact factor is used as a proxy measure of quality; and they insisted that, while it might not be perfect, there is no practical alternative to traditional peer review.

In support of the latter claim they repeated the oft-made analogy with Winston Churchill’s description of democracy. Churchill famously described democracy as “the worst form of government, except for all those other forms that have been tried from time to time.” Thus it is with peer review, averred the witnesses: no one has come up with anything better.

Those with any experience or knowledge of how peer review works in practice might have been tempted to conclude that analogising peer review with democracy obfuscates the issue. At the very least, the comparison seems incongruous.

Such a conclusion was all the more likely in light of the opening question and answer. The Chair suggested that it might be helpful to conduct some research into the efficacy of the current system — on the grounds that “evaluation of peer review is poor”. To this Wellcome Trust director Sir Mark Walport replied: “Peer review is no more and no less than review by experts. I am not sure that we would want to do a comparison of a review by experts with a review by ignoramuses.”

Sir Mark’s statement can only have served to remind the audience that peer review is more oligarchic than democratic in effect. Rather than encouraging egalitarianism, it promotes elitism, and all the privileges one might associate with an old boys’ club (appositely perhaps, there was not a single female witness called to give evidence on June 8th).

Of course the Churchillian analogy is not really meant to suggest that peer review is a democratic process. Nevertheless the witnesses’ repeated claims that the current peer review system is “good enough” would surely be challenged by many junior researchers, who frequently complain that scholarly journals tend to be controlled by small elite groups of insiders, invariably senior researchers.

As one researcher pointed out to me recently, this is particularly problematic for those working outside North America and Europe. As he put it, “Peer-reviewed journals with a high impact factor are either dominated by certain gangs, or groups, or the editors rely on the opinion of reviewers too much.”

The upshot, he added, is that “a small guy from Russia, Brazil or Thailand will never get published, even with excellent results, unless he or she has a prominent Western colleague as a co-author.”

In fact, it is not just researchers in less privileged parts of the world who can struggle to get published in scientific journals today. Nor is it only junior researchers who complain about the peer review system. In 2009, for instance, 14 leading stem cell researchers wrote an open letter to journal editors highlighting their disquiet at the way in which the system operates.

Speaking to the BBC about the letter, Professor Lovell-Badge commented: “It’s turning things into a clique where only papers that satisfy this select group of a few reviewers who think of themselves as very important people in the field is published.”

Responding in a (separate) BBC interview, Sir Mark downplayed the criticism. Scientists, he said, “are always a bit paranoid” about peer review. And to make his point Sir Mark again used the analogy with democracy — peer review is not perfect, but it is the best system that the research community has been able to come up with.


Risks

As indicated, the June 8th witnesses also denied that the journal impact factor is used to evaluate researchers’ performance. “[W]e are very clear that we do not use our journal impact factors as a proxy measure for assessing quality,” David Sweeney, the director for research, innovation and skills at the Higher Education Funding Council for England (HEFCE) told MPs. “Our assessment panels are banned from so doing.”

Once again, however, we could expect researchers to dispute claims that journal impact factors are not used for promotion and tenure purposes. Whatever funders do or do not do, they might say, universities (that is, their employers) make it quite clear that being published in high impact journals is one of the best ways to advance an academic career.

Indeed, some universities (perhaps not in the UK, but certainly in other countries) operate schemes in which researchers are given cash bonuses if they succeed in being published in a high-impact journal. Universities in China, The Netherlands and Egypt are said to provide such incentives. In the case of Cairo University the details are publicly available on the institution’s web site. The accompanying table shows that faculty members who publish in a prestigious journal can earn a lump sum payment of between 2,000 and 100,000 Egyptian pounds (roughly £200 to £10,000), with the amount paid directly related to the impact factor of the journal.

But it would be wrong to imply that the June 8th witnesses were averse to new developments. They talked positively about the ways in which the Internet could improve the system, and they spoke enthusiastically about new online journals like PLoS ONE. “PLoS ONE has very good peer review”, said Sir Mark. They also cited with approval the development of web-based post-publication services like Faculty of 1000.

They were far less enthusiastic, however, about social networking tools like blogs, Twitter and Facebook. In contrast to PLoS ONE and Faculty of 1000 — which have been developed by, and are managed by, traditional publishers — social networking tools were viewed as dangerously anarchic, uncontrolled, and uncontrollable.

Describing the risks inherent in such services Sir Mark commented, “You have only got to look at the world of blogs, Twitter or anything else. Openness brings its own risks. If anyone can comment, then they can all say what they want, so of course there are risks like that.”

Professor Rick Rylance, chair-elect of Research Councils UK, appeared to agree: “You could end up in the rather ludicrous receding world of having to peer-review the post-review and the rest of it to find out whether it has worth.”

It is essential, added Rylance, for journals to continue to act as a quality filter: “Clearly, if those filters are removed, there is a danger that people will be relatively unbuttoned about things.”

Why this was problematic was not entirely clear, particularly as Sir Mark had earlier pointed out that in the humanities, “there is a long tradition of writing book reviews where one academic is scathingly rude about another academic.” Was the suggestion that, while it might be fine for humanities scholars, scientists should be shielded from scathing criticism?

“I hope they are a threat”

Not every witness evinced such a conservative view. Sweeney appeared to represent a more radical school. A little anarchy, he implied, might not be such a bad thing. “I think those risks exist but there are benefits,” he suggested. “We will have to adjust to the use of social networking in this area.”

And when the witnesses were asked if article-level metrics posed a threat to high impact journals, Sweeney commented: “I don’t care if they are a threat to the base journals because the journal ecology will develop based on competition and alternative ways of doing things. I am sure they will respond. In some ways, I hope they are a threat.”

Nonetheless, the overall impression created by the witnesses was that any change to the current system should be avoided, and that traditional peer review should be treated as sacrosanct — on the grounds that, whatever its drawbacks, it is the only practical way of ensuring the quality of published research. As indicated, there even appeared to be resistance to testing its efficacy.

Again, many would disagree that published research achieves the levels of quality implied by the witnesses. They would also challenge the proposition that traditional peer review remains the only acceptable way of doing things.

And they might add that, as a result of the intolerable pressure put on researchers to publish as many papers as possible, the quality of peer-reviewed research is actually falling. Finally, they might point out that since practically all papers are eventually published anyway (in one journal or another), quality is a relative concept in the world of scientific publishing today.

It is for these reasons that there are growing calls for change. Moreover, many believe that, far from being a threat to the quality of published research, social networking offers a viable alternative.

Richard Smith, the former editor of the prestigious journal BMJ, for instance, has reached the conclusion that the very notion of pre-publication peer review is flawed. The best way of assessing a paper, he suggests, is after publication, not before.

As Smith puts it, “The problem with filtering before publishing, peer review, is that it is an ineffective, slow, expensive, biased, inefficient, anti-innovatory, and easily abused lottery: the important is just as likely to be filtered out as the unimportant. The sooner we can let the ‘real’ peer review of post-publication peer review get to work the better.”

And today, he suggests, there is an effective way of doing this. “For journal peer review the alternative is to publish everything and then let the world decide what is important. This is possible because of the internet.”

In January Smith developed his views further on the BMJ blog. In a post entitled “Twitter to replace peer review?” he suggested that bloggers and tweeters may be more effective at assessing research than the traditional peer review system.

Smith related how the blogosphere had drawn attention to a major flaw in a paper published (and peer reviewed) by the high-impact journal Science. This claimed that scientists were now able to predict human longevity with 77% accuracy. “One week after the paper was published the authors acknowledged that they had made a technical error,” said Smith, “and shortly afterwards Science issued an ‘expression of concern,’ meaning ignore this paper.”

Revolution underway

It is worth noting that the current inquiry into peer review is by no means the first such inquiry, and that previous inquiries reached similar conclusions to those expressed by the June 8th witnesses: traditional peer review is as good as it gets.

In 1989, for instance, the UK Boden Report concluded that there are “no practical alternatives to peer review for the assessment of basic research.” It added, however, that this was no cause for complacency. “Rather the reverse. Peer review does have problems both in principle and in practice.”

The same sentiment was echoed in a 1995 Royal Society report on peer review.

The current inquiry, however, is taking place in a web-enabled world. Given this, and the increased pressure on researchers to publish, the Committee is confronted with two new questions. First, have peer review practices deteriorated to the point where the status quo can no longer be justified? Second, does the Internet, and the new social networking tools developed for it, offer a viable alternative to traditional peer review? Might these new tools indeed prove a superior way of reviewing, filtering and adjudicating on the quality and value of new research?

The most likely outcome, of course, is that the Committee will once again conclude that traditional peer review remains the only practical solution. And in doing so, it may even cite Churchill’s views on democracy.

But whatever the Committee concludes, we should not doubt that a revolution in peer review is underway. Today the status quo is under attack from young Internet-savvy researchers keen to shake the establishment's tree.

Smith cited one example; a similar incident occurred last year, when another Science paper became the target of criticism in the blogosphere. The authors of this paper, which claimed to have demonstrated that arsenic-based life forms are possible, were accused of publishing flawed and inadequate research.

The firestorm of criticism was sufficiently compelling, and vocal, that Science subsequently published eight papers criticising the original research, and the paper’s authors appear to be back-pedalling. They have also agreed to release the bacteria so that other groups can try to reproduce their results.

Incidents like this are becoming commonplace, and some now believe that we are witnessing the beginning of the end for traditional peer review. Interestingly, the new approaches emerging on the Web appear to hold out the promise of providing a more democratic way of reviewing papers, one moreover that could prove both more efficient, and more cost-effective.

In short, there may now be a practical alternative to traditional peer review.

These developments are inevitably viewed by the establishment as threatening, not least because they mean giving up some of the control it has traditionally enjoyed. But to resist these new forces would be to sit like King Canute ordering the tide to retreat.

Some of the issues explored by the committee on June 8th are highlighted in the edited questions and answers below. The asterisked headings are mine.

The Chair of the Science & Technology Committee is Andrew Miller, Labour MP for Ellesmere Port and Neston. Other politicians to pose the questions below were Graham Stringer, Labour MP for Blackley and Broughton, Roger Williams, Liberal Democrat MP for Brecon and Radnorshire, Gavin Barwell, Conservative MP for Croydon Central, Pamela Nash, Labour MP for Airdrie and Shotts, Stephen McPartland, Conservative MP for Stevenage, David Morris, Conservative MP for Morecambe and Lunesdale, Stephen Mosley, Conservative MP for the City of Chester, and Stephen Metcalfe, Conservative MP for South Basildon and East Thurrock. (A full list of Committee members is available here).

Details of the first three oral sessions can be found here, here, and here. 

The June 8th hearing was split into two sessions.
== FIRST SESSION ==

Those giving evidence in the first session were Professor Rick Rylance, Chair-elect, Research Councils UK, David Sweeney, Director for Research, Innovation and Skills, HEFCE, and Sir Mark Walport, Director, Wellcome Trust.
==

* On whether research funders should fund research on the effectiveness of peer review… 

Q250 Chair: … Evaluation of editorial peer review is poor. Should you, as funders of research, contribute towards a programme of research to, perhaps, justify the use of peer review in publication and find out how it could be optimised?… 

Sir Mark Walport: It all depends what you mean by "research". It is quite important to have a very straightforward understanding of what peer review is. Peer review is no more and no less than review by experts. I am not sure that we would want to do a comparison of a review by experts with a review by ignoramuses. 

Q251 Chair: That’s not very nice, is it? 

Sir Mark Walport: Having said that, we do conduct studies of peer review. The Wellcome Trust published a paper in PLoS ONE a couple of years ago in which we took a cohort of papers that had been published. We post-publication peer-reviewed them and then we watched to see how they behaved against the peer review in bibliometrics. There was a pretty good correlation, although there were differences. Experiments of one sort or another are always going on, but the fundamental question of whether you should compare expert review with just randomly publishing stuff I don’t think is something that anyone would be very keen to do. It lacks equipoise. 

David Sweeney: Through our funding of JISC and through our funding of the Research Information Network, much work has been carried out in this area and we remain interested in further work being carried out where the objectives are clear. 

Professor Rylance: Yes. We, too, would be open to trying to think about how that might be researched. We have to bear in mind that peer review is not a single phenomenon. It is peer review in relation to publication, grant awards, REF and so on. Again, there are differences between the natural sciences, the social sciences and the humanities. You would have to define the task a bit more carefully. We do, from time to time, undertake research on, for example, the influence of bibliometrics and its relationship to peer review, so work is going on in that way. 

* On whether peer review limits the emergence of new ideas… 

Q252 Chair: The Wellcome Trust highlighted a common criticism of peer review by saying: "It can sometimes slow or limit the emergence of new ideas that challenge established norms in a field." Do the others agree and what can be done about this? 

Professor Rylance: Churchill once said that democracy was the worst system in the world apart from all the others. I think the same about peer review. Peer review is absolutely crucial, but, of course, it carries limitations of one kind or another in that it can slow down things. The volume of work load and so on and so forth is increasing but, none the less, we need to remain committed to the principle of doing peer review because, in the end, it is always the first and last resort of quality. 

David Sweeney: We think that there is a risk, but we also look at the many experiments that are going on with social networking and modern technological constructs. We hope that the broad view that is taken of those will mitigate the risks which the Trust identified. 

Sir Mark Walport: To be clear, the Wellcome Trust, in our submission, said: "Other commonly raised criticisms of peer review are…" We didn’t say that we agreed with that criticism. The issue is that peer review or expert review is as good as the people who do it. That is the key challenge. It has to be used wisely. It is about how the judgment of experts is used. It is about balancing one expert opinion against another. The challenge is not whether peer review is an essential aspect of scholarship because there is no alternative to having experts look at things and make judgments. 

* On whether the growth of “online repository journals” like PLoS ONE is “technically sound”… 

Q253 Chair: If that common criticism has validity, is the growth of online repository journals like PLoS ONE technically sound? 

Sir Mark Walport: It is entirely sound. PLoS ONE has very good peer review. Sometimes there is a confusion between open access publishing and peer review. Open access publishing uses peer review in exactly the same way as other journals. PLoS ONE is reviewed. They have a somewhat different set of criteria, so the PLoS ONE criteria are not, "Is this in the top 5% of research discoveries ever made?" but, "Is the work soundly done? Are the conclusions of the paper supported by the experimental evidence? Are the methods robust?" It is a well peer-reviewed journal but it does not limit its publication to those papers that are seen to be stunning advances in new knowledge. It is terribly important to put to bed the misconception that open access somehow does not use peer review. If it is done properly, it uses peer review very well. 

Professor Rylance: It is important to distinguish between peer review that is looking at a threshold standard, i.e. "Is this worthy of publication?" and peer review that is trying to say, "What are the best?" when you are over-subscribed in terms of the things you can publish. 

* On whether the impact factor of the journal a paper is published in should be used as a proxy measure of quality when evaluating researchers… 

Q255 Stephen Mosley: We have heard that the quality of journals, often determined by the impact factor of those journals, is becoming a proxy measure for research quality. Would you tend to agree with that assessment? 

David Sweeney: With regard to our assessment of research previously through the Research Assessment Exercise and the Research Excellence Framework, we are very clear that we do not use our journal impact factors as a proxy measure for assessing quality. Our assessment panels are banned from so doing. That is not a contentious issue at all. 

Sir Mark Walport: I would agree with that. Impact factors are a rather lazy surrogate. We all know that papers are published in the "very best" journals that are never cited by anyone ever again. Equally, papers are published in journals that are viewed as less prestigious, which have a very large impact. We would always argue that there is no substitute for reading the publication and finding out what it says, rather than either reading the title of the paper or the title of the journal. 

Professor Rylance: I would like to endorse both of those comments. I was the chair of an RAE panel in 2008. There is no absolute correlation between quality and place of publication, in either direction. That is, you cannot infer from a high-prestige journal that the work is going to be good but, even worse, you cannot infer from a low-prestige one that it is going to be weak. Capturing that strength in hidden places is absolutely crucial. 

* On whether the 2014 Research Excellence Framework will use impact factor as a measure of quality … 

Q256 Stephen Mosley: We have had some very good feedback about the RAE process in 2008 and the fact that assessors did read the papers, did understand them and were able to make a subjective decision based on that. But we have had concerns. I know that Dr Robert Parker from the Royal Society of Chemistry has expressed a concern that the Research Excellence Framework panels in the next assessment in 2014 might not operate in the same way. Can you reassure us that they will be looking at and reading each individual paper and will not just be relying on the impact? 

David Sweeney: I can assure you that they will not be relying on the impact. The panels are meeting now to develop their detailed criteria, but it is an underpinning element in the exercise that journal impact factors will not be used. I think we were very interested to see that in Australia, where they conceived an exercise that was heavily dependent on journal rankings, after carrying out the first exercise, they decided that alternative ways of assessing quality, other than journal rankings, were desirable in what is a very major change for them, which leaves them far more aligned with the way we do things in this country. 

Q257 Stephen Mosley: That is a fairly conclusive response, is it not? Lastly, you were talking about PLoS ONE in answering the Chair’s questions. From what you were saying, there is a difference in standard between papers in PLoS ONE that might not be in that 5% most excellent bracket, but just so long as the work is technically sound and correct, they are in there without being excellent. With the impact factor of those repository journals gradually increasing, does it mean that the proxy use of peer-reviewed publications is even a less valid approach to assessing the quality of research in institutions in the future? 

David Sweeney: I think we just don’t do that. We are not keen to do that. We want to assess (we do work on this every few years) how much we can use bibliometrics in a robust way, particularly as you aggregate the information over a large number of publications. At present we do not feel that the role that that should play is beyond informing the expert judgments that are made by panels. We are very conscious of the fact that our research assessment exercise has to go across all disciplines. There would be little argument that the use of metric information is really quite difficult in many disciplines. We are trying to have a consistent way of doing things. We are very keen to be abreast of the latest research but confident that peer review should remain the underpinning element. 

Sir Mark Walport: If you are assessing an individual, there is simply no substitute for looking at their best output. If you are assessing a field, that is when you can start using statistical measures. You can start using things like the number of citations. If you look at most funders, they are very focused on asking people to tell them what their best publications are, sometimes limiting the numbers. For our Investigator Awards, we limit the number of publications to people’s best 20. 

Professor Rylance: Following on from David’s point, in my field, in the humanities, the majority of publications are not in journals. They are in other forms like books or chapters in books and so on. There simply is not the bibliometric apparatus to derive sound conclusions for that reason. 

* On whether formal training in conducting peer review should be compulsory… 

Q258 Pamela Nash: Given the importance of peer review in both academic research and publishing, do you think that formal training in conducting peer review should become a compulsory part of gaining a PhD? 

Sir Mark Walport: Part of the training of a scientist is peer review. For example, journal clubs, which are an almost ubiquitous part of the training of scientists, bring people together to criticise a piece of published work. That is a training in peer review. Can more be done to train peer reviewers? Yes, I think it probably can. PhD courses increasingly have a significant generic element to them. It is reasonable that peer review should be part of that. People sometimes talk about the opportunity cost of peer review. Peer review is a form of continuous professional development. It forces people to read the scientific literature and it gives a privileged insight into work that is not yet published. Most laboratories would involve, if not their PhD students, their early post-docs in peer review work. 

Professor Rylance: I would echo and support that. It seems to me that research is a collective enterprise and that anyone who wishes to enter that field either as an academic or in some other capacity needs to understand that. So an engagement with the work of others of a judgmental or other kind is really quite important as part of that process. 

Q259 Pamela Nash: I am aware that the "Roberts funding" provided training for PhD students until recently. Would any of you have any ideas on who could be responsible for continuing that funding for that training? 

Sir Mark Walport: That funding is available. For example, the Wellcome Trust funds four-year PhD programmes, so we are providing funding for a longer period. The research councils can speak for themselves, but the four-year model of the PhD is becoming well established and that gives universities the opportunity to provide that transferable skills training. 

Q260 Pamela Nash: But should specific peer review training be recommended when that funding is given? 

Sir Mark Walport: We are not prescriptive in what universities teach. As I said, that would be a reasonable component of it. 

* On reducing the burden on referees… 

Q261 Pamela Nash: … Both Research Councils UK and the Wellcome Trust mentioned in their contributions to this inquiry that it would be favourable to reduce the burden — the bulk of the work — on referees of the peer review process. What would each of you propose to help streamline that process and reduce the burden on referees? 

Professor Rylance: … One thing you can do is demand manage. The burden is increasing, and we recognise that, both in terms of volume and of frequency. If you start to reduce the number of applications, that work load starts to reduce and, presumably, the quality of peer review goes up. So how do you demand manage in that situation? You could do it in a draconian way. You could, for example, say, "The quota for this university is whatever it is", based on historic performance. You could do it developmentally, working with universities to filter their own application processes, such that ones which are not going to go anywhere in any reasonable scheme are filtered out at an early stage. Or you could go for what, in the jargon, are called "triage" processes when you receive them: you do a relatively light-touch first-stage application and then you reduce others.

My personal view — there are differences of opinion about this — is that measures like quotas have quite significant downsides, of which probably the most significant is that they would discourage adventurous, speculative, blue skies applications because, naturally, if you have a quota, people tend to be conservative about what they are putting in in order to try and gain the best advantage… 

David Sweeney: For us it is a volume problem. Obviously, more research is being done and more findings are being produced. We think that the amount that needs to go through the full weight of the peer review system need not continue to increase. Indeed, we are seeing initiatives in that. As part of our assessment exercise, we require four pieces of work over seven years from academics. In most disciplines, they will publish much more than that, but they do not submit it to the exercise because we are interested in selectively looking at only the best work. We would want to encourage academics to disseminate much of their work in as low burden a way as possible, but submit the very best work for peer review both through journals and then, subsequently, to our system. That is the only way to control the cost of the publication system. We must look for variegated ways of disseminating and quality-assuring the results. 

Sir Mark Walport: The first thing is that the academic community is still highly supportive of the fact that peer review is an intrinsic part of the scholarly endeavour. To put some numbers on it, between 2006 and 2010 the Wellcome Trust made about 90,000 requests for peer review. We got about 50% usable responses. The response rate was a bit higher but not every referee’s report added value. That is a pretty good response rate, and much of that was international. We used the global scientific community to help review and they do that very willingly. People who are in environments where they know they cannot themselves get a Wellcome Trust grant are, nevertheless, willing to referee for us… 

* On the withdrawal of funding for the UK Research Integrity Office (UKRIO)… 

Q264 David Morris: Professor Rylance, have Research Councils UK and Universities UK withdrawn funding from UKRIO, and, if so, why? 

Professor Rylance:… The original RIO was set up primarily with a remit for the biomedical sciences. It was set up on a fixed-term basis through a multi-agency system, which I am sure you are aware of, that included not just the funding councils, research councils and the Department of Health, but Wellcome were involved and other bodies. When that came to the end of its term, we had to make a decision about whether to continue. In other words, funding had stopped. It was not a question of withdrawing it. Do we continue that funding or do we not? There was a sense of two things. One is that it was really important to establish a body that had a remit and that that body should cover a broader range of disciplines than was the case with the original RIO. Secondly, we needed to disentangle various sorts of functions which were caught up within that original body. Could one be, for example, both a funder and an assurer of it, because you are clearly in quite a complicated relationship? Also, could you be both an assurer and an adviser, because, clearly, if you are giving advice which then turns out to be wrong, you would then be policing your own mistake at some level…

…The general conclusion was that, in its current format at that stage, RIO was not going to meet the sorts of needs that I have just described. We continued its funding for a little and we are now thinking about different ways in which we can put together a collective agreement on it, largely through, probably, a concordat style arrangement. The key player in this, just to complete the story, will be Universities UK. The reason why Universities UK are key to this is because they are not funders themselves of research. 

Q265 David Morris: Are you saying that it is moving more towards the subscription funding model? Is this a necessary change? 

Professor Rylance: It will be a subscription model in the sense that it will involve a series of agencies that will participate in the funding of it...

…There is a genuine sense among the bodies that I have just described that we need a cross-disciplinary organisation to provide assurance in tandem to link up the various sorts of assurance mechanisms that each funder has, to look at consistency and so on and so forth. That will be done, as I have described, through a concordat arrangement largely run through UUK, but that is as far as we have got at the moment. 

Sir Mark Walport: … The Wellcome Trust was fully supportive of Research Councils UK on this matter. Research integrity is important. There is no argument and no debate about that. The question is where the responsibilities lie for ensuring that it happens. We believe very strongly that the responsibility for the integrity of researchers lies with the employers, so by and large that is the universities for university academics. It is clearly the research institutes for people employed by research institutes. That is why we support moving to a concordat between research funders and the employers whose researchers we fund that it is their responsibility, in the same way that health and safety is a responsibility that is delegated to employers. Frankly, we did not believe that UKRIO in the form that it was constituted was delivering what we needed. 

* On whether universities are doing enough to address scientific fraud… 

Q273 Graham Stringer: … At our last evidence session we had the Pro-Vice-Chancellor responsible for research at Oxford here — I could give you the exact quote but I will not read it — who, basically, said that in his experience there had not been an occasion when they had had to investigate somebody for fiddling their results for fraudulent practices in research. On the other hand, we had another witness who told us that, if research institutions had not sacked at least one person, then they were not trying. Taking Oxford as an example, if you take your assertion that it should be the employers, that indicates that the employers are not carrying out that job. Certainly, in the case of Wakefield with the MMR scandal, the employers of Wakefield did nothing. I will now come to my question. Doesn’t that mean to say that there has to be a huge change in employers’ practices if your view was to be maintained? 

Sir Mark Walport: Employers are responsible for the integrity of their employees in all sorts of aspects of life. They are responsible in business for making sure that they do not commit fraud and that the accounting is done well. I can’t possibly comment on whether individual universities are immune from the malpractice of their employees. I do not think it alters the fact that, as in health and safety, and all sorts of other aspects, such as the good behaviour of employers in respect of how they deal with students, this is an employer’s responsibility. Increasingly, universities are taking this very seriously. Of course, you can pick examples of where things go wrong. You can pick examples of where peer review hasn’t worked well. The Wakefield sad story is a very good example of that. That paper should never have been published. But that is not an argument against organisations doing it well. In a sense, the importance of the concordat will be that it sets out in extremely clear terms what the relationship is and what the roles and responsibilities of universities as employers are for the integrity of their employees. 

Q274 Chair: It is clear that the universities would have responsibilities, but, taking your two examples of health and safety or fraud in conducting their business, in both of those instances there is an external regulator with statutory powers.

Sir Mark Walport: The question is what those statutory powers should be. Ultimately, it is clear that a scientist who has committed some form of scientific fraud, if I can put it that way, should lose their job. Does that then fall under some other regulator? Is it something that the courts should deal with? Probably not very often. In the case of medical research, Andrew Wakefield eventually met his come-uppance at the General Medical Council. There are ways of doing this. 

Q275 Graham Stringer: But he did not, did he? He was struck off for bad ethical practice. The General Medical Council did not deal with whether his research was fraudulent or not. In a sense that is a bad example. If I can repeat Andrew’s point, yes, it is the employers’ responsibility, but who is going to keep the employers good? 

Sir Mark Walport: That is where the funders will play a very serious role. We take research integrity very seriously as well. It is a grant condition that the work is done properly. From our perspective, in relation to an institution that failed to manage research integrity properly, we would have to question whether that was an institution at which we could fund research. It is not that we don’t take it seriously, but we believe that the mechanism for dealing with this has to be through the employer. Frankly, if the employer is unaware of things going wrong in the research, it is difficult to see how others would be aware while the employer remained completely unaware. They are doing it in whistleblowing procedures… 

Q276 Pamela Nash: If I could take up that point, without an external regulator — you have just said that funders have a responsibility here on who they fund — surely, that is then an incentive for an academic institution to keep things quiet so that they don’t lose funding. 

Sir Mark Walport: Not at all. It is the nature particularly of scientific research that errors are found out, and it can’t be in the interests of any good university not to have the research done to the highest possible standard….There is no incentive to cover up. 

* On whether there is a need to provide greater openness and transparency in scientific data … 

Q277 Graham Stringer: … Can I … quote from last week’s Scientific American, which makes the point really well? … It is by John P.A. Ioannidis: "The best way to ensure that test results are verified would be for scientists to register their detailed experimental protocols before starting their research and disclose full results and data when the research is done. At the moment, results are often selectively reported, emphasising the most exciting among them, and outsiders frequently do not have access to what they need to replicate studies. Journals and funding agencies should strongly encourage full public availability of all data and analytical methods for each published paper." Do you agree with that and do you follow those policies?

Sir Mark Walport: This is one of the arguments in favour of good peer review, because a good peer reviewer when reviewing a scientific paper actually probes and says, "Where are the controls? Where is the missing data?" That is the first thing. Secondly, we do explicitly ask investigators when they are generating datasets how they will handle the data. In general terms, we do encourage openness. In fact, at the moment there is a Royal Society inquiry on openness in science which is looking at the whole issue of openness of data. One has to recognise that there are both real costs and opportunity costs. Data is not an unalloyed good, as it were. It is something that has to be interpretable. It is quite easy to bamboozle by just putting out billions of numbers. It is actually a question of presenting the data in a way that is usable by others. But the principles of openness in science, of making data available and open, are something that the Wellcome Trust and other funders of biomedical research around the world are fully behind and completely supportive of. 

Q278 Graham Stringer: Is what lies underneath that answer that you believe that codes, computer programs and all the data that would enable other researchers to replicate the work should be made available publicly? 

Sir Mark Walport: Bearing in mind the feasibility and garbage in/garbage out, one has to be careful that the data is usable. Yes, increasingly very large datasets are generated. We want to maximise the value of the research that we fund. Therefore, openness is a very important principle. There are some other issues that need to be dealt with as well, so if you are dealing with clinical material then the confidentiality of participants is paramount. You have to manage data so that they are appropriately anonymised and people cannot be revealed. It has to be in the general interest of the advancement of science and knowledge. As you say, science is validated by its reproducibility. If you cannot see the data, that is a problem. Of course, the revolution of the power of the internet to make data available has meant that it is possible to put out data in ways that were never possible before.

…Broadly, it makes complete sense to make as much data available in as usable a form as possible. That is something that we strongly support. It is why the funding of institutions like the European Bioinformatics Institute, which is housed at Hinxton, is so important. The UK Government has a good track record in supporting the EBI and funding has recently been announced for an extension there as part of the European ELIXIR project. Making data available is something that is incredibly important. 

David Sweeney: We believe in openness and efficiency in publicly funded research. Dr Malcolm Read took you through some of the issues at a previous hearing. We have funded and continue to fund projects that will push this area forward — UKRDS — and now some projects are looking at how cloud computing can help. Of course, we have learnt a lot from the research councils that the ESRC data archive has been a stunning success over many years…Technology is now allowing us to make advances, and through the work we fund we will learn a lot. Our objective is openness. 

Q279 Graham Stringer: Where research is publicly funded, if I can paraphrase what you say, you are saying that the data should be publicly available. If there are good reasons for it being confidential, do you think it should be made available in a confidential depository to the reviewers and, potentially, for other researchers so that it is available in some form? 

David Sweeney: That requires consideration of the particular circumstances and the sensitivity. Reviewers should have access to all the information. They need to assure themselves of the quality. 

Professor Rylance: You start from that principle and then you think why it is that you shouldn’t reveal that rather than thinking you should close it and then why you should reveal it. 

* On the costs of storing large amounts of data… 

Q280 Graham Stringer: You have mentioned that you could have a huge dataset. Some of it may be good data and some of it may be rubbish. Are there real problems of costs and, if there are, who should pay for those costs of storage? Are there any other practical problems of storing huge datasets? 

Sir Mark Walport: There are very major costs. For example, the Sanger Institute this year alone has generated 1,000 human genome sequences. That is a massive data burden. Indeed, the costs of storing the data may in the future exceed the costs of generating it. Who should be responsible for doing that? It is, ultimately, a research funder issue, because we fund the research and so we have to help with the storage. It is like all of these things. Our funding is a partnership between the charity sector and the Government and it is a shared expenditure. 

Professor Rylance: There are issues as well about obsolescence. At what point does this data become simply not relevant anymore? The length of time for that will be discipline-specific and so on. There are a whole host of practical issues about how you do this. IP — intellectual property — is one, particularly, in my area, to do with creative works, for instance. 

* On the degree to which article level metrics can measure the quality of research, and whether they pose a threat to high impact journals… 

Q281 Stephen Metcalfe: I would like to turn now to the importance of articles versus journals, if I may. As I know you are aware, PLoS ONE instituted a programme of article level metrics. Do you believe that that is a good way to judge a piece of published science and, therefore, you are judging it on its intrinsic merit rather than the basis of the publication that it is in? 

Professor Rylance: Yes, absolutely. To echo what we were saying earlier on, it is intrinsic merit that we are after. It is not reputational or associational value. 

David Sweeney: I am not entirely sure that I would say that article level metrics necessarily captured the intrinsic metric merit. We should look at metrics of all kinds and try and judge where the collection and development of the metric does add value. As you drill down to individual articles, some metrics really are not entirely helpful. We have seen that with certain solid evidence in bibliometrics. Equally, we can see, with some of the networking metrics, that they may provide helpful information. I remain of the view that there will be no magic number or even a set of numbers that does capture intrinsic merit, but one’s judgment about the quality of the work, which may well be, in any way, in the eye of the beholder, may be informed by a range of metrics. 

Sir Mark Walport: I completely agree with David Sweeney on that. You can alter the number of times that an article is downloaded by merely putting some words in the title. There is good evidence that the content of the title influences the number of times that something is downloaded, so measuring download metrics can be very misleading. Different fields have different types of usage. Methods papers, typically, are extraordinarily heavily cited. There can be a long time before the importance of a paper is picked up. It is like all of these things; at a mass scale the statistics are helpful. If you want to assess the value of an individual article, I am afraid that there is no substitute for holding it in front of your eyes and reading it. 

Q282 Stephen Metcalfe: You don’t see the article level metrics as a potential threat to the more established high impact journals. 

Sir Mark Walport: They are not a threat. Web-based publishing brings new opportunities, because it brings the opportunity for post-publication peer review and for bloggers to comment. There are things like the Faculty of 1000, which provides commentaries on papers. There are more and more ways for finding papers among a long tail of publications. This is a fast-evolving space. As the new generation of scientists comes through who are more familiar with social networking tools, it is likely that Twitter may find more valuable uses in terms of, "Gosh, isn’t this an interesting article?" All sorts of things are happening. It is quite difficult to predict the future. It can only be an enhancement to have the opportunity for post-publication peer review. It has turned out to be quite disappointing in that scientists have been surprisingly unwilling to put detailed comments. When the Public Library of Science started, it had plenty of space where you could comment. Academics are remarkably loath to write critical comments of each other alongside the articles. 

Q283 Stephen Metcalfe: Does anyone else want to add to that? … 

Professor Rylance: No. I, personally, do not think it is a threat. There are two issues here. One is the recognition of merit. I entirely agree with my colleagues that, in the end, you have got to read the bloomin’ thing to see whether that is true. Then there is the issue about how people gain access to the good and the strong. That is a slightly different question. 

David Sweeney: I don’t care if they are a threat to the base journals because the journal ecology will develop based on competition and alternative ways of doing things. I am sure they will respond. In some ways, I hope they are a threat. 

Q284 Stephen Metcalfe: You touched upon scientists being unwilling to get heavily involved in post-publication peer review. Philip Campbell from Nature told us that that may well be — I am summarising here — because there is no prestige or credit attached to that particular role and there is the risk of alienating colleagues by public criticism. Do you agree with that? Do you think that there should be a system of crediting people? 

Sir Mark Walport: There are two separate issues. There are some very interesting community issues here. In the humanities, there is a long tradition of writing book reviews where one academic is scathingly rude about another academic.

… In the case of the scientific world, that tearing apart is done at conferences and at journal clubs. The scientific community does not have a culture of writing nasty things about each other. This is an evolving world. 

Q285 Stephen Metcalfe: So introducing a system of credit- 

Sir Mark Walport: On credit, I think one has to be realistic. Are you going to promote someone on the basis of the fact that they wrote a series of comments on other scientific articles? The hard reality is that the core activities of an academic in terms of their promotion and pay recognition are going to be around their own scholarship and their own educational activities. It can only be at the margins that you will get brownie points for having done post-publication peer review. 

Q286 Stephen Metcalfe: Finally, if post-publication commentary were to grow, are you concerned about how you could ensure that there was no bias in that commentary, either positive or negative, either those wanting to build up someone’s reputation or those wanting to tear it down without anyone actually challenging them? 

Sir Mark Walport: It is quite clearly a risk. We see that in every other walk of activity on the internet. You have only got to look at the world of blogs, Twitter or anything else. Openness brings its own risks. If anyone can comment, then they can all say what they want, so of course there are risks like that. 

Professor Rylance: You could end up in the rather ludicrous receding world of having to peer-review the post-review and the rest of it to find out whether it has worth. Sir Mark was talking about the way humanities review each other’s things in print. Of course, one function for the journals that do that is to act as a quality filter to make sure that nothing defamatory, inaccurate or prejudiced is being said. Clearly, if those filters are removed, there is a danger that people will be relatively unbuttoned about things. 

Sir Mark Walport: It is self-correcting in that the scientific community is constantly scrutinising each other. A scientist who wrote something that was particularly egregious would be subject to the peer review of their own community. 

David Sweeney: I think those risks exist but there are benefits. We will have to adjust to the use of social networking in this area.

== SECOND SESSION ==

Those giving evidence in the second session were Professor Sir John Beddington, Government Chief Scientific Adviser, and Sir Adrian Smith, Director General, Knowledge and Innovation, Department for Business, Innovation and Skills.

==

* On whether the peer-reviewed literature is fundamental to the formation of Government policy… 

Q287 Chair: … You are familiar with the piece of work that we are undertaking. We have heard that researchers perceive peer review to be "fundamental to scholarly communications". Is peer-reviewed literature also fundamental to the formation of Government policy? 

Professor Beddington: … The answer to that question is that science and evidence is clearly fundamental to Government policy and peer review is a fundamental part of science evidence. That is not meant to be a cute response, but it is absolutely clear that the process of science involves peer review, and properly so, and that scientific evidence is essential to the evidence-based policy of the Government. 

* On whether there is a need for research into the efficacy of peer review… 

Q290 Chair: Evaluation of editorial peer review is poor. Do you think that there is a need for a programme of research in this area to test the evidence for justifying the use and optimisation of peer review in evaluating science? 

Sir Adrian Smith: The short answer is no. It is an essential part of the scientific process, the scientific sociology and scientific organisation that scientists judge each other’s work. It is the way that science works. You produce ideas and you get them challenged by those who are capable of challenging them. You modify them and you go round in those kinds of circles. I don’t see how you could step outside of the community itself and its expertise to do anything other. You have probably had it quoted to you already, but there was a paper in Nature in October 2010 when six Nobel Prize winners were asked to comment on how they saw the peer review process. Basically, it was the old Churchillian thing that there are all sorts of problems with it but it is absolutely the best thing we have. 

Professor Beddington: Peter Agre makes that point in that same article, saying: "I think that scientific peer review, like democracy, is a very poor system but better than all others." 

Q291 Chair: That is twice that that has come up today. 

* On the benefits of codifying the use of peer review, and whether UK scientific advisory groups are mandated to use peer review… 

Q292 Stephen McPartland: I would like to ask you about Government use of peer review research. The US Congress has codified the use of peer review in Government regulations using the "Daubert Standard". In the US, the Supreme Court codified their use in the courtroom. Have you had any discussions with your American counterparts regarding how this works and what any of the benefits are? 

Professor Beddington: … We would not see particular merit in excluding non-peer-reviewed information, because we have to recognise that there is a whole set of information that comes in as Government makes policy, some of it via the media, for example, evidence that is coming in to deal with emergencies. A basic decision on that I don’t think would be helpful. The issue is obviously going to be that, when we provide scientific advice to Government, there will be a weighing of that advice and the fact that certain advice is peer-reviewed and appropriately so, or indeed has been highly cited in a praiseworthy way, will go into the balance of that advice. I think I would advise against a piece of legislation saying that only peer review would be done. One would also have to question the definition of peer review and so on. I don’t think it would be something that I would be recommending to Government to think about adopting… 

Q294 Stephen McPartland: Do you believe that a test should be developed to identify whether or not peer review is reliable? This Committee recommended in 2005, in a report entitled Forensic Science On Trial, that a test for expert evidence should be developed, building on the US Daubert test, and the Law Commission has now built on that and published a draft Criminal Evidence (Experts) Bill. 

Professor Beddington: I would think that this has to be thought about on a case-by-case basis. Peer review is not a homogeneous activity. If one is starting to see that there are, for example, problems of peer review in a particular journal or in a particular area of science, that needs to be addressed by that journal and by the people who work in that particular area of science. If you posed the question, "Is the peer review process fundamentally flawed?" I would say absolutely not. If you asked, "Are there flaws in the peer review process which can be appropriately drawn to the attention of the community?" the answer is yes. From time to time that will happen and that’s the way to do it. 

Sir Adrian Smith: And there will, from time to time, be misjudgments in that system. You can distinguish the system from particular cases within the system. 

Q295 Stephen McPartland: Are UK scientific advisory groups mandated to use peer review? 

Professor Beddington: No, for the very reasons I gave in my answer to the Chair’s earlier questions. We would certainly always take into account peer-reviewed information in providing advice to Government. I don’t think we would ever exclude it, but that would not be the sole evidence. In fact, some of the evidence that would come in would depend on the area of science. For example, in a large part of social science the scholarship is developed by the production of books, quite often well after the event. Yet social research is extremely important to Government policy. We would have this but it would not necessarily have been published in a social research journal. By contrast, for example, if we are thinking in the context of some work on genomics, then one would be expecting that to have been peer-reviewed and that would be going into the evidence. Again, I just don’t think that one would seek to make this a matter of regulation. I emphasise again that the scientific evidence we use, including social research evidence, will sometimes be peer reviewed. Obviously, we would not seek to exclude peer-reviewed material, but nor would we wish to exclude material that had not been peer reviewed, for these sorts of reasons. 

* On the effectiveness of peer review in validating assertions made in articles submitted for publication… 

Q296 Roger Williams: … In your opinion how well does the peer review process validate the assertions made in articles put forward for publication? 

Professor Beddington: In a sense, both Adrian and I have answered that question earlier. Peer review does not guarantee that the results are correct. Science moves on through scepticism and challenge. We see all the time, in the journals published this week, people challenging peer-reviewed papers published some years ago, pointing out fundamental flaws in them or new evidence that undermines their conclusions. That is the progress of science. We can’t say that it is a guarantee; manifestly it is not.

We can say that it is an awful lot better than bare assertion without evidence. Particularly when you are looking at scientific issues that are fundamental to policy — I have talked about this to this Committee before — the emergence of scientific consensus is very important. That is not to say you do not have sceptics or appropriate challenges, but peer review does not guarantee that and it never could… 

* On whether peer reviewers should assess the underlying data supporting a research article as well as the article itself, and whether the raw data should be made freely available… 

Q297 Roger Williams: Today, and increasingly, I guess, in the future, submissions in science will be accompanied by very large and complex sets of data. Do you think that the reviewers should be assessing that underlying data as well as the article that is being produced? 

Sir Adrian Smith: In an ideal world, but that is rather difficult, is it not, because data will come out of laboratories and field studies. As a reviewer, you can’t go off and replicate that. If you are trying to check somebody’s derivation of a mathematical formula, you can replicate it. Assessing the scientific argument and assessing the data are rather different tasks, but the protocols that are in place for collecting data, for example, in medicine, in conducting proper clinical trials and all the rest of it, operate in an environment where all the pressures and checks and balances are to get that right.


Q299 Roger Williams: Sir John, the Government is, obviously, a very substantial funder of science. Should it, as a matter of principle, require that all this raw data should be made available? 

Professor Beddington: Adrian has made a parallel point. With Government-funded science, the push is to have data out in the open. There are some areas, for example, shared data, where you have a mix of data and some of the ownership of that data is outside the UK. You cannot make a hard and fast rule. In principle, though, the answer is that the more people who will look at the scientific problems from which we are wanting to get evidence the better. Therefore, transparency is, obviously, extremely attractive. From time to time, there will be timing issues, IP issues and so on, which will mean that transparency can be problematic. In the area we were looking at (the community of chief scientific advisers deals with this a lot of the time) we would be looking at material, and if it was not out in the open they would ask why not. If there is no good reason, they would urge that it be put out into the open. Indeed, the research councils push exactly along these lines. 

Sir Adrian Smith: There will always be issues of personal data protection, commercial interests, intellectual property and national security, so the situation is quite complex. I understand that the Royal Society will be conducting a study sometime over the next 12 months that the Committee may well be interested in. 

Q300 Roger Williams: I think there is agreement that this data should be made available, subject to all the concerns that you have expressed about IP and commercial interests. Another matter is the cost of all this. Who should bear that cost if it is going to happen on a greater scale than it has in the past? 

Sir Adrian Smith: That is one of the issues that the Royal Society may well look at. Different communities, different cultures and different forms of data pose different issues, but there is a real problem… 

* On whether there should be a legal requirement on institutions to conduct a timely inquiry in cases of publication fraud or misconduct, and then publish details of the incident and the disciplinary action that has been taken… 

Q310 Gavin Barwell: In the past there has been a perception that publication fraud or misconduct has not always been investigated by the institutions in a timely fashion. Wakefield and MMR is an example. Should there be a legal requirement on institutions to conduct a timely inquiry and to publish the full findings of that inquiry and any disciplinary action that is taken? 

Sir Adrian Smith: I don’t know whether you need to go to what "legal" means, but, if you think of the funding that goes into universities, some of it will come through the Funding Council, for instance, through the QR stream, and some through research grants. With both the research councils and the Higher Education Funding Council, conditions of grant are attached which make it clear what the expectations of behaviour are. I think those are sufficient sanctions in themselves. An institution that did not follow up properly would be putting at risk its funding from HEFCE and the research councils. 

Q311 Gavin Barwell: Are there specific conditions relating to what institutions should do if there is a suggestion that misconduct has taken place? 

Sir Adrian Smith: Probably not. 

Q312 Gavin Barwell: Do you think there ought to be? 

Sir Adrian Smith: …My own view, having run a university for 10 years, is that the constraints you are under in terms of conditions from the many funders that one has are quite sufficient to frighten one into doing appropriate things. 

Professor Beddington: The RCUK’s code of conduct, too, is a good guideline in terms of conflicts of interest and appropriate behaviour. Given that universities depend on a significant income from the research councils, they would be extremely unwise not to take forward very quickly any issues where they had detected fraud. The media would be commenting on it and other people in the same scientific area would be commenting on it. There would be a very substantial incentive for the universities to take this forward rather quickly. 

Q313 Graham Stringer: … There is a certain amount of evidence that very little fraud is detected in universities and major research institutions in this country. Do you think we should be doing more to try and detect that, because in one sense there is an interest within those bodies not to discover or expose the problems they have, to sweep it under the carpet, isn’t there? If you are running a university and you find you have a researcher who just writes down his figures without doing the work, which has happened in one or two cases, the university doesn’t want to say that it has been employing a fraudster for 10 years, does it? 

Sir Adrian Smith: I would disagree. When I ran a university, I would have put it exactly the other way round. The institution’s reputation will suffer much more long-term harm if you allow fraudsters to exist and do nothing about it. In fact, I think you would get a lot of brownie points in many communities if you publicly identified such people and threw them out. I think the incentives are all in the opposite direction. 

Q314 Graham Stringer: It is surprising, therefore, is it not, … that there are no cases in Oxford, as the Pro Vice Chancellor told us, and that there are very few cases in other universities and research institutes where people have found fraudulent behaviour? In the case of Wakefield, even when fraudulent behaviour was found out, the institution investigated itself and found nothing wrong. The evidence we have is in the other direction, isn’t it? 

Professor Beddington: I would not seek to comment on the Wakefield case. The issue here is that there are so many checks and balances in the way that science operates that fraudulent behaviour is highly likely to be detected, initially, I suspect, by gossip, and then by increasing concern that there is something wrong. That will happen. It may happen in the community, and attention will then be drawn to the university, and it would be very unwise for the university to ignore that information. I have not experienced it in 25 years at Imperial College. 

Q315 Graham Stringer: Can I ask why you won’t comment on Wakefield, because it is one of the great scandals of the last 10 or 12 years? It was not dealt with very well. Are there not things to be learnt from that? 

Professor Beddington: Yes, there are. My reason for not commenting is that I haven’t read into it for a while, and I would like to re-familiarise myself before I commented, Mr Stringer, rather than any shyness on my part. I am not on top of the detail. 

* On whether researchers are biased towards the products of the pharmaceutical companies that sponsor their research… 

Q316 Graham Stringer: … Are there problems with peer review in other areas? For instance, there is a huge amount of research sponsored by pharmaceutical companies and companies that produce biomedical products. Do you believe that a lot of researchers in those areas are biased towards the products that those companies are selling? 

Sir Adrian Smith: ... I don’t think a lot of the research itself is biased. There are biased reporting effects, because if you are doing clinical trials and you get negative results, there isn’t a journal for clinical trials that didn’t work. It is the ones that work that get published. There is a selection bias in that sense. Do not forget that, at the end of the day, these things have to get through the FDA or the drug regulatory authorities if they are to come on to the market. Then you have incredibly close scrutiny of the protocols, the trials that were done, the conditions under which they were done, and so on and so forth. I think there are tremendous checks and balances in the system against that. 

* On whether there is a problem with colleagues reviewing each other’s papers in those areas where only a small number of researchers are working… 

Q317 Graham Stringer: Are there structural problems where there are only three experts in a particular field, so that they are, effectively, all peer reviewing each other and they either agree or disagree? In one sense, that was the major criticism made by those who criticised the researchers at the University of East Anglia, was it not? There is a very small pool of researchers in that area. 

Professor Beddington: Yes, you have that, but people are always moving out of their own fields. There is academic interchange. If things are of sufficient importance, they are likely to get challenged, not necessarily by the top two experts in the field but by others who are around the fringes, particularly if they are of significant interest… 

Q318 Graham Stringer: To finish on a fairly obvious question, nearly all of our witnesses have used the Churchillian quote, but when fraudulent papers get through the best process of peer review we have, does that damage the process? Getting back to Wakefield, his paper was peer reviewed. Do you think the peer review process has been damaged? 

Sir Adrian Smith: How far do you want to take the Churchillian democracy analogy? There are bad things that happen within the peer review system. Not every MP who has been elected has behaved totally honourably. 

Graham Stringer: What a shocking thing to say. 

Sir Adrian Smith: You would not abandon the democratic process, presumably. 

Graham Stringer: No. That would be terrible. Thank you. 

Q319 Chair: Finally, are you aware of RCUK ever having cut funding because of fraud or allegations of fraud? If so, could you give us any examples? 

Sir Adrian Smith: I would have to go back and look through the archives, as it were, and directly ask that of chief executives. I am not directly aware of a case. 

Professor Beddington: I have no experience of it. 

The (uncorrected) transcript of the full session is available here.

The video of the event can be accessed here, if it does not open up automatically in this post.

Sunday 19 June 2011

Open Access by Numbers

Few can now doubt that open access (OA) to scholarly research is set to become an important feature of the scholarly communication landscape. What is less certain is how much of the world’s research literature is currently available on an OA basis, how fast OA is growing, and what percentage of the world’s academic and scientific literature will be OA in the long-term.
 
Trying to crunch the numbers is complicated by the fact that research papers can be made OA in two ways: researchers can continue to publish in subscription journals and then make their papers freely available by self-archiving them in an institutional repository (Green OA), or they can pay to publish their work in an OA journal (either a pure Gold journal or a Hybrid OA journal) so that the publisher makes it freely available for them.

OA enthusiasts like librarian Heather Morrison — who publishes a series called “Dramatic Growth of Open Access” — tend to estimate OA occurrence and growth primarily by the simple counting of things.

In March, for instance, Morrison reported that there are now over 6,000 OA journals listed in the Directory of Open Access Journals (DOAJ), and implied that the number of OA articles is now growing more quickly than the number of papers being published in subscription journals. As she put it: “Data is presented that strongly suggests that the success rate for open access journals is already higher than that of subscription journals.”

In the same post, Morrison argued that by counting the number of papers flagged as OA on the Mendeley research sharing service we could conclude that self-archiving had grown by 171% in the first quarter of 2011.

Counting in this way presents an upbeat picture, suggesting that the world is in the process of being flooded with OA and that universal OA is just around the corner.

Refining the counts

Critics, however, point out that simple counting is too crude when trying to measure OA. Counting Gold OA journals, for instance, is not helpful since many of them publish just a handful of papers a year, if that.

Likewise, counting items that have been self-archived can be deceptive: Many records in institutional repositories will consist of metadata alone, or non-target items like presentations and other non-reviewed material.

Certainly publishers describe the incidence and growth of OA in a less upbeat manner. When I spoke to Springer’s Derk Haank at the end of last year, for instance, he estimated that only around 2% to 2.5% of the world’s papers are being published in Gold or Hybrid journals today.

And since the total number of research papers is growing at around 6% to 7% a year, he said, OA remains “just a drop in the ocean”.

In fact, predicted Haank, OA publishing will never be more than a niche activity. “I expect it to remain between 5% and 10% at a maximum,” he said.

Haank did not provide an estimate of Green OA, but implied that it was relatively low. Pointing out that he would be anxious if it did become commonplace, he added: “But we are such a long way from that situation today that we are very easy going about author archiving.”

A few researchers, meanwhile, have been busy trying to arrive at more precise figures. When I last wrote on this topic in 2010 I spoke to a number of researchers, including Bo-Christer Björk.

Based at the Hanken School of Economics in Helsinki, Björk has undertaken several studies aimed at sizing the growth of OA, primarily Gold OA.

For a variety of reasons, Björk explained, this is not an easy thing to do. Nevertheless, when I spoke to him in January 2010 Björk estimated that Gold OA was probably increasing its share of the market by about half a percentage point per annum.

He added, however: “I have no evidence to show any acceleration in growth. On the contrary it seems that growth has been relatively stable, after a short expansive period when BioMed Central and PLoS were founded”. 

“Tremendous growth of Gold OA”

Since then, Björk has taken a closer look at the many new OA journals launched between 1993 and 2009, as well as the many subscription journals that have been converted into Gold OA journals.

There has also been the rise of “mega journals” like PLoS ONE, now the largest peer-reviewed journal in the world, which expects to publish 12,000 papers in 2011 alone. In the wake of PLoS ONE’s success a number of PLoS ONE clones have recently been launched.

On June 13th 2011 Björk and colleagues published a new paper reporting an average annual growth rate since 2000 of 18% for the number of OA journals and 30% for the number of articles.

This, the paper suggests, “can be contrasted to the reported 3.5% yearly volume increase in journal publishing in general. In 2009 the share of articles in OA journals, of all peer reviewed journal articles, reached 7.7%. Overall, the results document a rapid growth in OA journal publishing over the last fifteen years.”
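
To get a feel for what these growth rates imply, here is a minimal back-of-the-envelope sketch in Python (my own illustration, not taken from Björk’s paper) that simply compounds the reported 30% annual growth in OA articles against the 3.5% growth in journal publishing overall, starting from the 7.7% share reported for 2009:

    # Hypothetical projection: assumes the growth rates reported by
    # Bjork et al. simply continue, which they plainly cannot do forever.
    oa_share_2009 = 0.077   # OA share of all peer-reviewed articles in 2009
    oa_growth  = 1.30       # reported annual growth in OA journal articles
    all_growth = 1.035      # reported annual growth in journal articles overall

    oa, total = oa_share_2009, 1.0
    for year in range(2009, 2015):
        print(f"{year}: projected OA share ~ {oa / total:.1%}")
        oa *= oa_growth
        total *= all_growth

On these (admittedly crude) assumptions the OA share roughly doubles every three years, reaching about 12% by 2011, which gives some sense of why Björk talks of tremendous growth.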

And in a note he posted on the American Scientist Open Access Forum (AmSci) Björk said that the results, “show the tremendous growth of gold OA over the past decade”.

As we said, Björk’s primary focus is on Gold OA. What about Green OA?

This is an area that Yassine Gargouri, a postdoctoral researcher who works with OA advocate Stevan Harnad at the Université du Québec à Montréal (UQAM), has been working on for the past four years.

Gargouri’s numbers suggest that between 2005 and 2010 the percentage of papers made Green OA each year rose from about 15% to about 21%, an increase of roughly one percentage point a year.

His numbers also suggest that introducing a Green mandate (requiring all an institution’s researchers to self-archive their papers) triples the yearly percentage of OA papers from the mandating institution.

Taken together with Björk’s work, this would seem to suggest that around 30% of the academic and scientific literature published in 2011 worldwide may now be freely available on the Web, two thirds of it as Green OA and one third of it as Gold OA.
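
For anyone who wants to check the arithmetic behind that estimate, here is a rough reconstruction in Python (mine, not the researchers’). It assumes the Green and Gold populations can simply be added without double counting, and projects Björk’s 2009 Gold share forward at the relative growth rate implied by his figures:

    # Rough reconstruction of the "around 30%" estimate for 2011.
    # Assumptions (mine): no overlap between Green and Gold, and the
    # Gold share growing at 30% a year against 3.5% growth overall.
    green_2011 = 0.21        # Gargouri: yearly Green OA share by 2010
    gold_2009  = 0.077       # Bjork et al.: Gold share of articles in 2009
    gold_2011  = gold_2009 * (1.30 / 1.035) ** 2  # two years' relative growth

    total = green_2011 + gold_2011
    print(f"Gold 2011 share ~ {gold_2011:.1%}")           # ~12%
    print(f"Total OA share  ~ {total:.1%}")               # ~33%
    print(f"Green fraction  ~ {green_2011 / total:.0%}")  # roughly two thirds

The output lands a little above 30%, with Green accounting for roughly two thirds of the total, which is consistent with the figures quoted above given how loose the underlying estimates are.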

Can Gold alone buy OA?

Nevertheless, it remains difficult to be precise about OA numbers, and especially difficult to make accurate predictions about future growth.

As with all attempts to understand and predict the world by means of numbers and statistics, much depends on how one derives the numbers in the first place, how one crunches them, and how one subsequently interprets the results. In the case of OA, a key question that emerges is whether Gold OA is able on its own to accelerate the growth of OA to the degree that the OA movement would wish.

Why is it necessary to fret over such things? It is necessary for a number of reasons, but above all because if OA advocates knew exactly what was happening, and why, they would be able to put their main effort into those activities most likely to achieve their goal.

Vitally, they would be better able to answer a question that has plagued the movement for many years: Should the priority be given to Green or to Gold OA?

In the PDF file attached below I am publishing a Q&A interview with Gargouri. With a PhD in cognitive informatics, Gargouri has also participated in projects dealing with knowledge management, semantic web applications and ontologies. He has also taught in the computer science department at UQAM.

The interview includes contributions from Harnad — a leading OA advocate and self-styled archivangelist.

####

If you wish to read the interview please click on the link below. 

I am publishing it under a Creative Commons licence, so you are free to copy and distribute it as you wish, so long as you credit me as the author, do not alter or transform the text, and do not use it for any commercial purpose.

To read the interview (as a PDF file) click here.

See also: John Whitfield Open access comes of age Nature, 21st June 2011

Tuesday 14 June 2011

Bernard Rentier Interview in Portuguese

Brazilian Open Access advocate Dr Hélio Kuramoto is currently translating the recent interview I did with Bernard Rentier, Rector of the University of Liège, into Portuguese.

Dr Kuramoto plans to undertake the translation in stages. The first part was posted today, and can be accessed here.

Thank you, Hélio!

Sunday 12 June 2011

UK politicians puzzle over peer review in an open access environment

The UK House of Commons Science & Technology Committee is currently conducting an inquiry into peer review. The third public event of the inquiry was held on Monday 23rd May, when the Committee heard evidence from experts on open access publishing and post-publication review, and from representatives of the research community.

SOME OF THE ISSUES EXPLORED BY THE COMMITTEE ARE HIGHLIGHTED IN THE EDITED QUESTIONS AND ANSWERS LISTED BELOW. THE ASTERISKED HEADINGS ARE MINE.

I COMMENT ON THE SESSION AT THE BOTTOM OF THIS POST. 

UPDATE: THE COMMITTEE’S REPORT HAS NOW BEEN PUBLISHED. THE DETAILS ARE AVAILABLE HERE.

The Chair of the Science & Technology Committee is Andrew Miller, Labour MP for Ellesmere Port and Neston. Other politicians to pose the questions below were Graham Stringer, Labour MP for Blackley and Broughton, Roger Williams, Liberal Democrat MP for Brecon and Radnorshire, and Stephen Metcalfe, Conservative MP for South Basildon and East Thurrock. (A full list of Committee members is available here).

The hearing was split into two sessions.

== FIRST SESSION ==

Those giving evidence in the first session were Dr Rebecca Lawrence, Director, New Product Development at Faculty of 1000 Ltd., Mark Patterson, Director of Publishing at the Public Library of Science, Dr Michaela Torkar, Editorial Director at BioMed Central, and Dr Malcolm Read OBE, Executive Secretary of JISC.

* On splitting traditional peer review into two separate processes: a) assessing a paper’s technical soundness and b) assessing its significance — a model pioneered by the open-access journal PLoS ONE, and now increasingly being adopted by traditional publishers …

Q162 Chair: We have heard that pre-publication peer review in most journals can be split, broadly, into a technical assessment and an impact assessment. Is it important to have both? 

Dr Torkar: … It is fairly straightforward to think about scientific soundness, because it should be the fundamental goal of the peer review process to ensure that all publications are well controlled, that the conclusions are supported and that the study design is appropriate. That is a very important aspect which should be addressed as part of the peer review process.

The question of the importance of impact is more difficult. When we think about high impact papers we think about those studies which describe findings that are far reaching and could influence a wide range of scientific communities and inform their next-stage experiments. Therefore, it is quite important to have journals that are selective and reach out to a broad readership, but the assessment of what is important can be quite subjective. That is why it is important, also, to give space to smaller studies that present incremental advances. Collectively, they can actually move fields forward in the long term.

Dr Patterson: … [B]oth these tasks add something to the research communication process. Traditionally, technical assessment and impact assessment are wrapped up in a single process that happens before publication. We think there is an opportunity and, potentially, a lot to be gained from decoupling these two processes into processes best carried out before publication and those better left until after publication.

One way to look at this is as follows. About 1.5 million articles are published every year. Before any of them are published, they are sorted into 25,000 different journals. So the journals are like a massive filtering and sorting process that goes on before publication. The question we have been thinking about is whether that is the right way to organise research. There are benefits to focusing on just the technical assessment before publication and the impact assessment after publication … Online we have the opportunity to rethink, completely, how that works. Both are important, but we think that, potentially, they can be decoupled ...

Dr Lawrence: … [I]t is not known immediately how important something is. In fact, it takes quite a while to understand its impact. Also, what is important to some people may not be to others. A small piece of research may be very important if you are working in that key area. Therefore, the impact side of it is very subjective.

Dr Read: … Separating the two is important because of the time scale over which you get your answer. Impact takes much longer to assess. I guess the technical peer review is a shorter-term issue.
===
* On whether in order to deliver faster publishing times it is necessary to cut corners by, for instance, editing papers more lightly, and whether this approach leads to more submissions …

Q167 Roger Williams: Is light copy editing a feature of how you can deliver faster times?

Dr Patterson: [W]e are balancing these two competing interests of speed and quality. In our production process we focus on delivering really well structured files that will be computable, for example. We don’t expend effort in changing the narrative. Scientific articles aren’t works of literature. That is not to say it wouldn’t be nice if, sometimes, a bit more attention was paid to that. It is also true that one of the criteria for PLoS ONE is that the work is in intelligible English. If an editorial reviewer thinks that something is just not good enough and they can’t really see what is happening, it will be returned to the author.

Q169 Roger Williams: Are there any other corners that your journal “cuts” in order to deliver faster times?

Dr Patterson: I wouldn’t frame it that way. What we are doing is trying to identify and take away any unnecessary barrier to publication …

Q170 Roger Williams: Has your approach and the reputation you have built up resulted in a lot more submissions?

Dr Patterson: PLoS ONE was launched in December 2006 and is still quite a new journal. It is only four and a half years old. We published about 4,000 articles in 2009 and 6,700 last year, so it became the biggest peer-reviewed journal in existence within four years. It has grown steadily over that time …
===
* On the propagation of the PLoS ONE model and whether the journal might become a victim of its own success … 

Q171 Roger Williams: Has your approach and the reputation and impact of the journal itself increased the number of submissions? 

Dr Patterson: It has. We see a lot of positive feedback … the message that if I have a solid piece of work I’m not going to have to grapple with a journal that is basically biased against publication — the goal of PLoS ONE is to publish all rigorous science — is a very positive one which authors like. Coupled with ideas about how, then, you might assess the impact after publication, it is definitely gaining ground.

The other very significant thing that has happened in the last nine to 12 months is that eight or more big publishers have announced PLoS ONE lookalikes, essentially. That is very striking. The American Institute of Physics and the American Physical Society have both launched physical science versions; Sage has launched a social science version; the BMJ group, who were actually the first, last year launched a clinical research version of PLoS ONE; Nature has launched a natural science version of PLoS ONE, and on it goes. The model is getting that level of endorsement from major publishers and I think, again, that is probably helping to make researchers very comfortable with the way in which PLoS ONE works.

Q172 Roger Williams: But will you be a victim of your own success? Will you be overwhelmed by the volume of submissions and then your time to publication suffers as a result? 

Dr Patterson: I certainly hope not. The growth has been pretty spectacular and has definitely surpassed our expectations …
===
* On whether PLoS ONE’s approach and popularity is impacting on its peer review process, and whether the success of its model might trigger a wider change in peer review …

Q173 Roger Williams: Do you believe that this approach has had an effect on the peer review process perhaps in terms of timing, quality and ease of recruiting or having access to reviewers? 

Dr Patterson: It is beginning to. PLoS ONE has grown very rapidly in the space of four years to become a very big journal. There are now another eight to 10 on the scene that are being launched, or are about to be launched. If another 10, 20 or 30 of these are launched over the next one to two years, which I think is quite likely — because a lot of publishers will be looking very hard and thinking that if they don’t get involved they will potentially lose out — that could make some fairly substantial changes in the way the prepublication peer review process works. There is a lot to say about post-publication but not yet. So I think the model could change …
===
* On whether PLoS ONE’s peer review process could be described as “light touch” …

Q176 Stephen Metcalfe: You wouldn’t describe your approach as "light touch"?

Dr Patterson: No, not at all. It is important to consider not just the peer review process but everything that goes on before an article is accepted for publication as critical steps in quality control. There are several components to that, of which peer review is one. At PLoS ONE staff are involved in the first step. Each submission goes through a series of focused quality control steps. Basically, we want to take that work away from the academics so that they can focus on the science and we can sort out everything else. We focus on things like whether the competing interest statements are properly indicated; financial disclosures; if the work concerns human participants, whether there is an ethics statement and appropriate ethical approval — a whole series of things like that. Hardly any manuscripts get through that without some kind of query going back to the author.

Then there is a step where we involve PhD scientists who scan the work. These are people who have some level of subject expertise. Some — not many — of the submissions are rejected at that point because they are completely out of scope or something. They are also looking for any articles on controversial topics or anything that might require special treatment. They flag work like that. The work then goes to the editors whose responsibility it is to take on the peer review process. It is a pretty involved process.

The peer review part then focuses on seven criteria: whether the methodology and analysis are appropriate; whether the conclusions are justified; whether the work is ethically sound and properly reported; and whether data is made available as appropriate …
===
* On cascading peer review, the difficulties in getting publishers to share reviews, and whether sharing is more likely in an OA environment …

Q181 Stephen Metcalfe: How widely used is the system of cascading submissions and reviews from one journal to another? 

Dr Torkar: … We use this quite extensively at BioMed Central and, in particular, with the BMC series which is more or less our equivalent of PLoS ONE and was launched in 2001. It is a group of more than 60 community journals which are subject specific: BMC Immunology, BMC Genetics, etc. As they also have the premise of publishing all scientifically sound studies without putting too much emphasis on the impact and extent of the advance, they will consider manuscripts that were previously peer reviewed or submitted to some of our flagship journals. Sometimes the transfers will happen before the peer review and sometimes with the reviewers’ reports. That does save time for authors and reduces the burden on the peer reviewers who don’t have to re-review manuscripts for multiple journals.

Dr Patterson: Cascading peer review is a phenomenon that exists at PLoS in its two flagship journals PLoS Biology and PLoS Medicine. Articles can be transferred from there to other journals. To give you a sense of the size of that, about 10% to 15% of submissions to PLoS ONE come from other PLoS journals. It is pretty clear that, internally, that works quite well. A lot of publishers think so and quite a lot of the evidence has shown that.

The much more problematic issue is the sharing of reviews from one publisher to another. I know you heard some talk about the Neuroscience Peer Review Consortium experiment which, interestingly, was not terribly popular with authors, but I am not sure how much publishers were really behind it. For example, it was said that some publishers might feel reluctant to share reviews with another journal or publisher because they have built up relationships with these people and there is some commercial value associated with that. When you hear that you have to ask whether that sense of ownership is in the best interests of science. I am not convinced …

… it is quite natural that journals would feel that way in a world of subscriptions because it is about selling a package of content to a group of readers. That is how the model works. Therefore, anything which allows you to improve that package of content is of value to you commercially. In a way, it is completely understandable that journals in that subscription business model would be reluctant to share their reviews.

When you switch round the model, as BMC, PLoS and many others do now, in terms of supporting and publishing through a publication fee, considering yourselves, as publishers, much more as service providers — you are selling a publishing service to a researcher — your attitude towards sharing peer reviews might be changed. I am not sure.
===
* On the ethics of publishing papers sponsored by pharmaceutical companies aiming to bring new products to the attention of doctors, whether pharmaceutical companies might find PLoS ONE’s peer review process more attractive for these purposes, and whether PLoS ONE has a financial interest in publishing pharma-sponsored papers … 

Q198 Graham Stringer: … How much commercial pressure is there from pharmaceutical companies to publish … and how does that commercial pressure interfere with the publication? A journal that publishes a paper which means doctors can prescribe a particular drug stands to make a lot of money, doesn’t it? How is that pressure dealt with ethically? 

Dr Patterson: This is an issue which has certainly been highlighted in the evidence you have already heard. This is something on which, in particular, the editors of our journal PLoS Medicine have taken a very strong position, to reduce what they call the cycle of dependency in some way between the pharmaceutical industry and medical publishing. One of the ways in which that is manifest is with very substantial reprint revenues associated with high profile, hard-hitting clinical trials; for example, sponsored by the pharmaceutical industry.

What PLoS Medicine and PLoS as a whole have done, in order to keep the two things apart and separate any commercial interest from the editorial integrity of the content to be published, is refuse to accept any form of drug or device advertising, even though it could be a significant revenue stream for us. We feel that is a very strong leadership position to take in that area. The business of open access is also very important to this. The articles we publish are open in the sense that there are no barriers to reusing that content. A lot of publishers retain rights to contents so that they can reprint the article. They are the only people who can reprint that article at the levels of thousands and thousands of copies for redistribution, which then earns them an awful lot of money. We can’t do that.

Q199 Graham Stringer: Are you saying that reproducing your articles is free? 

Dr Patterson: Yes.

Q200 Graham Stringer: You are very different from The Lancet or other journals? 

Dr Patterson: Totally different. We feel that is a very important principle. We have no unique right to take those articles and make that kind of money from them. These are some steps that have been taken. They are not the solution to everything, but I think they are important … 

Q201 Graham Stringer: It struck me, when you spoke earlier, that if a pharmaceutical company wanted to get a drug to market very quickly and within the mindset of GPs and other doctors, your route to publication would be quicker. It might be an incentive, then, for them to go via a route which you said yourself — I can’t remember your exact words — was of a different standard; it wouldn’t be sent back. That worried me slightly, that, commercially, it might be easier for drug companies to make more money by going via your route. But you don’t have a financial interest in that? 

Dr Patterson: There is no financial interest, in that sense. To be clear, we consider work that has been sponsored by the pharmaceutical industry but, obviously, it has to conform to the same criteria as everything else. What might make the pharmaceutical industry reluctant, in terms of thinking about the value of that publication commercially, is that to publish in a very high prestige journal would probably be of great value. That is what might put them off coming to, say, PLoS ONE, which does not, in and of itself, equal high prestige …
===
* On post-publication peer review, whether publishers should incentivise researchers to contribute to such reviewing, what form post-publication review should take, and how research assessment metrics can be improved … 

Q209 Chair: … I want to go finally to post-publication commenting. Should publishers introduce some system of prestige or credit for post-publication commentary? Dr Patterson, why is article-based methodology a good one? I don’t regard it necessarily as a healthy comment if I make a speech and there is an endless number of blogs. No doubt I will disagree with half of them anyway. Is the F1000 model which uses faculty members to carry out that process a better one, or does it become a biased process? To finish off, let me put to all of you this question: what is a good system of post-publication commenting? Should there be some recognition of the people who participate? 

Dr Patterson: Maybe the starting point is to say that at the moment we have a very blunt instrument for research assessment which is basically a number — an impact factor — associated with a journal. We can do much better than that now. The way we are looking at this is to consider all the things you can potentially measure post-publication. It is not just about a blog comment or something like that. There is a whole range of metrics and indicators, including resources like Faculty of 1000, which can be brought to bear on the question of research assessment. Normally, people are looking at the research literature as a whole, they are identifying the papers that are important to them and they are coming to those papers. We want to provide an indication when they come to that paper of how important this is and what impact it has had through usage data, citation information, blogosphere coverage and social bookmarking. There are so many possibilities.

We have moved in that direction by providing those kinds of metrics and indicators on every article that we publish — we are not the only people doing this but we have probably taken it further than most — to try to move people away from thinking about the merits of an article on the basis of the journal it was published in to thinking about the merits of the work in and of itself. Indicators and metrics can help with that. They aren’t the answer to the question but they will help …

Dr Lawrence: We would agree. Faculty of 1000 is a way of using a panel of experts. We have heads of faculty who then suggest the section heads who then suggest the faculty members. It is all very open. All their comments are against their name. On the question of bias, they also have to sign something to say they haven’t been unduly influenced and, obviously, there are issues of conflicts of interest.

Q210 Chair: Isn’t that a more structured approach than Dr Patterson’s X Factor version? 

Dr Lawrence: I don’t think that any of these different metrics, on their own, are that strong. The point is about bringing together all the various metrics. They all have their own problems. To measure the impact of research you need to use different ones in a sensible way. In a way, the more metrics you have the better your chance of really understanding the impact.

Q211 Chair: I am getting from this that your methodology is making sure that the judges aren’t tone deaf, if I may continue to use my rotten analogy, which is a cruel one to you, Dr Patterson. In Dr Patterson’s case, you don’t care. 

Dr Patterson: No. To be clear, I think both approaches will be required. They are complementary. I would like to see — we probably will shortly — F1000 as one of the indicators on a PLoS article. You go to the article and say, "Ooh! It’s been highlighted in F1000 and this is what the person has said", or something like that. There will be a place for expert assessment, evaluation and organisational content post-publication, as well as grabbing as many metrics and indicators as you can from the world at large.

== SECOND SESSION ==

Those giving evidence in the second session were Dr Janet Metcalfe, Chair of Vitae, Professor Ian Walmsley, Pro Vice Chancellor of the University of Oxford, and Professor Teresa Rees CBE, Professor of Social Science and former Pro Vice Chancellor (Research) at Cardiff University.
===
* On why researchers are under pressure to publish in high impact journals … 

Q216 Chair: … [W]hy is it that researchers are put under so much pressure to get work published in the high impact journals? 

Professor Walmsley: Perhaps a simple answer to that from a parochial view of a university person is that that is the way one’s career advances. As you heard from the previous panel, a lot of very good work gets published in journals that do not have such high visibility, and I think that is quite crucial. None the less, having a highly cited paper in a journal that people would regard as high profile is considered important as a way to raise your visibility and develop your career.

Dr Metcalfe: We have drivers in the system, such as the research assessment exercise, that encourage that, so there is very strong emphasis in terms of the impact of the journal. Coming at it from my perspective as Vitae, it is: how do you support early career researchers to enter into that system and even make decisions about what journals they should be targeting? How do they get a sense of the most appropriate place for them to publish?

Q217 Chair: Doesn’t the tie-in between the research excellence framework and high impact journals potentially create a rather subjective judgment? 

Professor Walmsley: I would argue that the reason peer review works well is the expertise of the community on an inherently subjective set of criteria; that is, one can with any piece of work assess various objective elements of it. Is it right? Is it novel — that is, is it new and not been published before? But the subjective element, which I think differentiates a number of different journals — because they have different subjective criteria — is the piece that is very difficult to assess in an objective way. Knowing that a piece of work is going to be important is a very difficult thing to do. In many ways that is something best assessed post facto …

… I absolutely take the point about RAE. Having sat on one of the RAE panels last time, I can say the panel was very clear that the forum in which the paper had been published was not determinative. It was reading the individual outputs and assessing the value of the work itself that ended up being more important. None the less, when a CV comes across the desk of a head of department for a faculty post, as a first pass through it makes a difference where those papers are published.
===
* On whether researchers should be paid for their work as reviewers …

Q222 Roger Williams: Overall, would your judgment be that in the best of all possible worlds researchers should be paid for their work as peer reviewers?  

Professor Rees: I am not sure of the answer to that question. It is strange that if researchers do the research and publish and then do the peer review and the editing — in some cases now they are asked to pay for their articles to be published — one finds oneself responding to a memo saying, "Which journals do you think we should cut from the library because of budget cuts?" I would say there is a bit of a paradox there. 

Professor Walmsley: I would concur with that, having part of the library as my portfolio too. That is an internal and difficult question to address. As to whether reviewers should be paid, I think that may send incentives in the wrong direction. One wants as wide a fraction of the community with appropriate expertise to be involved as possible. The way we might see it internally in departments at Oxford would be that this is a contribution to the community, just as chairing or sitting on committees in the university is considered part of what you need to do in order to make the place and the business function. But the question is about keeping an appropriate lid on that. There are various ways in which one might do that, but in mentoring terms one would often say, "You want to review twice as many papers as you publish and you want to review three times as many grant applications as you submit." That tempers your workload and makes the whole system work.
===
* On whether researchers should be formally trained to do peer review … 

Q226 Stephen Metcalfe: … Dr Metcalfe, I think that the number of people who take up the opportunity for training either in peer review or other publication training is relatively small. Why do you think that is? 

Dr Metcalfe: The tradition is very much an apprenticeship model. You learn the system by doing it: writing papers, submitting them and maybe getting feedback from your principal investigator. Where that works it is absolutely fantastic: somebody takes an early career researcher through the system and gives them feedback before they submit their articles, maybe with several researchers in the group giving feedback, and shows them how the whole process works. But, because the academic community is a loose collective, that process is not always as well supported across the whole community as it could be.

The challenge is how to help a researcher maximise their chances of publication at submission, so that they reduce the number of rejections and the volume of reviewers’ comments they have to deal with. Formal training in that process is one way in which you can do that. From some of the research Vitae has done, we have evidence of increases in the success rates of grant applications and fellowship applications where there has been formal training and development in working within the peer review systems for both. We could do more in advance of a researcher having to submit their first paper or grant proposal so that they are better informed, and therefore more expert, about how the whole process works. 

Q227 Stephen Metcalfe: You would be in favour of moving towards a more formal requirement for training. You consider that it should be provided across all higher education institutions. 

Dr Metcalfe: No, I wouldn’t go down that route. I think the opportunities to have training should be there. The process by which a researcher learns to become expert is very much up to their individual circumstances. If they are getting good individual nurturing and mentoring by their PI, that is great. But there should also be the opportunity, for those researchers who respond more to formal training, to have that available as well. 

Q228 Stephen Metcalfe: Who do you think should pay for that training? 

Dr Metcalfe: Collectively, we all have a responsibility for it to work. I think journals have a responsibility to support and provide more information about what is required and to contribute to the training of their reviewers. I think institutions have a responsibility, as signatories to the concordat for the career development of researchers, to ensure that those opportunities are there. I think research and funding councils and Government have an obligation to provide enough funding within the entire system to make available that kind of training for our early career researchers.
===
* On fraud, and whether Oxford University has ever sacked anyone for fraud …  

Q234 Graham Stringer: Last week the chair of COPE told us that if a university had not fired at least one academic for fraud there was something wrong with the university. Do you agree with that statement? Do you think she was right? If so, have your universities sacked any academics over the last five years for academic fraud? 

Professor Walmsley: The answer to the second question is no. 

Graham Stringer: So you are not firing. 

Professor Walmsley: Yes. I would say that the answer to the first question is probably no, too, but I want to be careful not to suggest that there are no ethical challenges within publication. We have a process within Oxford, which I am certain is the same at other places, to deal with that. Part of the question is: how does it come to your attention? 

Q235 Graham Stringer: Before you go on, is that process published?  

Professor Walmsley: Yes. It is available on the website through the Research Integrity Portal and there is an access through the SkillsPortal to that as well. 

Q236 Graham Stringer: So, that is true for all?  

Professor Walmsley: Yes. How do you identify and find that out? I think that internally, at the pre-publication end, there is great onus on researchers. As more and more papers are published with joint authors there is joint responsibility for doing that. That could lead in two directions: first, increased pressure to get it right because there are more people involved in the discussion; but, secondly, the chance that you will miss a trick or two because there are more people contributing. It is a difficult tension. Once the paper is out there, if an external party notes something that looks challenging I guess we will hear about that either from the external people or from editors themselves. If an editor writes, we will be able to investigate that internally.

As to the sanction of firing someone, I said I have not known that to happen, but there are certainly lower levels of discipline that can happen. However, I don’t know what the statistics are at Oxford. 

Q237 Graham Stringer: If you were to have an investigation, would you publish the results? Would that become a public document? 

Professor Walmsley: I don’t know the answer to that question. 

Q238 Graham Stringer: Would you write and tell us?  

Professor Walmsley: Yes, I will do that. 

The (uncorrected) transcript of the full session is available here.

The video of the event can be accessed here, if it does not open up automatically in this post.
=====

Subsequent to giving evidence to MPs a slightly puzzled Rebecca Lawrence penned a blog post about her experience. She concluded: “I think we all left still wondering what the purpose of the whole enquiry is and what they are hoping to achieve.”

She added: “This was made even more evident when at the end of the latest session, following 5 long rounds of oral sessions, MP Graham Stringer suggested that he felt that maybe they should have been looking at the commercial pressure on both editors of journals and researchers instead.”

It’s true that the Committee gave no reason for launching the Inquiry. But there have been hints: In this session, for instance, MPs referred both to the so-called Climategate incident, and to Andrew Wakefield.

It is also true that the Committee has at times seemed a little unfocused. But this was probably inevitable: opening up the topic of peer review is not unlike opening Pandora’s Box — all sorts of things fly out. For this reason, perhaps, the Committee has not always been able to explore in sufficient depth some of the issues that have arisen.

The key theme to emerge from this day of the hearing was whether peer review should/would change in an online, open-access environment; and if so, in what way. I list below some of the areas that I feel could have been examined more closely:
  • There appears to have been little discussion of the possible quality implications of authors (or their funders) being asked to pay a fee to publish in open-access journals. It is widely known (within the open-access movement at least) that a growing number of high-volume, low-quality journals are emerging that appear to offer little more than a vanity publishing service. I should stress that I am not referring to PLoS ONE or BioMed Central. I am referring to a number of start-up companies that have adopted the "author pays" model introduced by BioMed Central — and subsequently copied by PLoS ONE — but which often appear to make little or no effort to have the papers they publish reviewed effectively. The witnesses who appeared in the first part of this session will know of this development, and would surely have been able to talk through the issues. In the process the Committee could have explored the potential conflicts of interest (and how to overcome them) that must surely arise when the income of a publisher of peer-reviewed journals is directly related to the number of papers it accepts. 
  • Likewise, the Committee did not appear to address a topic that has become a serious problem in scholarly publishing, and one that surely has implications for peer review: although the stated purpose of publishing scholarly papers is to share new ideas and experimental data with other researchers, the emphasis today is as likely (often more likely) to be on furthering an author's career as on communicating research findings, and the two are not necessarily the same thing. In other words, researchers are incentivised to publish as many papers as possible in order to maximise their chances of tenure and/or promotion. Amongst other things, this leads to salami slicing, and inevitably to the publication of lower-quality, less worthwhile papers. In a pay-to-publish environment, which can easily morph into vanity publishing where there are inadequate guidelines and processes to ensure high quality, this is worrying not just in terms of quality but also, since publishing a paper can cost thousands of dollars, in terms of cost.
  • And while it did explore the costs of peer review, the Committee seemed more interested in the (very much smaller) costs of reviewers’ time than in the much larger, and increasingly burdensome, costs of paying publishers to organise that peer review. Open access, moreover, looks set to increase these costs, certainly in the short term. This issue was hinted at when Teresa Rees said: “It is strange that if researchers do the research and publish and then do the peer review and the editing — in some cases now they are asked to pay for their articles to be published — one finds oneself responding to a memo saying, ‘Which journals do you think we should cut from the library because of budget cuts?’ I would say there is a bit of a paradox there.” The Committee would have benefitted from trying to unpick that paradox.
  • Finally, the Committee did not follow through on Stephen Metcalfe’s questions about training researchers. As I understood it, the issue was whether and how researchers are taught to peer review papers. In their answers the witnesses focused almost exclusively on how one trains researchers to get their papers through the review process, not on how to review other researchers’ papers effectively. In the context of the inquiry the latter is surely the more important issue. Indeed, the fact that the witnesses focused on the other side of the process serves to underline the extent to which scholarly communication is now viewed essentially as a career-advancement mechanism, rather than a formal way of sharing research findings. As mentioned, this would seem to have implications for the quality of published research. At the very least, it would have helped had the Committee persisted in looking at both sides of the process; after all, the current pressure to publish as many papers as possible leaves researchers vulnerable to the constant stream of email invitations from publishers of pay-to-publish journals whose review processes are obscure, and may be seriously lacking in rigour.
I realise that, given the time available, it is probably asking too much to expect the Committee to explore in any depth all the issues that fly out of the peer review box. But for an inquiry into peer review that has indicated it plans to examine, amongst other things, “the strengths and weaknesses of peer review as a quality control mechanism for scientists, publishers and the public”; “the impact of IT and greater use of online resources on the peer review process”; and “the processes by which reviewers with the requisite skills and knowledge are identified,” the above issues would seem to be highly germane.

That said, Graham Stringer’s remark that they ought to be looking more at the commercial pressures faced by editors and researchers suggests that the Committee is beginning to understand some of the deeper forces currently influencing how peer review operates.

It is also worth bearing in mind that when the Science & Technology Select Committee was conducting an inquiry into scientific publishing in 2004 some began to wonder if it had lost its way. However, when its report — Scientific Publications: Free for All? — was eventually published the research community quickly concluded that the politicians had actually understood the issues very clearly and, moreover, had made the correct recommendations.

Meanwhile the current inquiry continues. Time will tell how successfully today’s crop of MPs can get to grips with the issues.