Saturday, 6 February 2016

The OA Interviews: Kamila Markram, CEO and Co-Founder of Frontiers

Based in Switzerland, the open access publisher Frontiers was founded in 2007 by Kamila and Henry Markram, who are both neuroscientists at the Swiss Federal Institute of Technology in Lausanne. Henry Markram is also director of the Human Brain Project.
 
Kamila Markram
A researcher-led initiative envisaged as being “by scientists, for scientists”, Frontiers had as its mission to create a “community-oriented open access scholarly publisher and social networking platform for researchers.”

To this end, Frontiers has been innovative in a number of ways, most notably with its “collaborative peer review process”. This abjures the traditional hierarchical approach to editorial decisions in favour of reaching “consensual” outcomes. In addition, papers are judged in an “impact-neutral” way: while expected to meet an objective threshold before being publicly validated as a correct scientific contribution, their significance and impact are not assessed.

Frontiers has also experimented with a variety of novel publication formats, created Loop – a “research network” intended to foster and support open science – and pioneered altmetrics before the term had been coined.

Two other important components of the Frontiers concept were that it would operate on a non-profit basis (via the Frontiers Research Foundation), and that while it would initially levy article-processing charges (APCs) for publishing papers, these would subsequently be replaced by a sponsored funding model.

This latter goal has yet to be realised. “We dreamed of a zero-cost model, which was probably too idealistic and it was obviously not possible to start that way”, says Kamila Markram below.

Frontiers also quickly concluded that its non-profit status would not allow it to achieve its goals. “We realised early on that we would need more funds to make the vision sustainable and it would not be possible to secure these funds through purely philanthropic means,” explains Markram.

Consequently, in 2008 Frontiers reinvented itself as a for-profit publisher called Frontiers Media SA. It also began looking for additional sources of revenue, including patent royalties – seeking, for instance, to patent its peer review process by means of a controversial business method patent.

The patent strategy was also short-lived. “We abandoned the patent application by not taking any action by the specific deadline given by the patent office and deliberately let it die,” says Markram, adding, “we soon realised that it is far better just to keep innovating than waste one’s time on a patent.” (Henry Markram nevertheless remains an active patent applicant).

By the time the peer review patent had died it was in any case apparent that Frontiers’ pay-to-publish model was working well. In fact, business was booming, and to date Frontiers has published around 41,000 papers by 120,000 authors. It has also recruited 59,000 editors, and currently publishes 54 journals. By 2011 the company had turned “cash positive” (five years after it was founded).


Successes not unnoticed


Frontiers’ successes did not go unnoticed. Not only did it quickly gain mindshare amongst researchers, but it began to attract the attention of publishers, not least Nature Publishing Group (NPG), which in February 2013 announced that it was entering into a relationship with Frontiers.

The exact nature of this relationship was, however, somewhat elusive. In its press release Nature described it as a “strategic alliance”. An associated news item in Nature reported that Frontiers had been “snapped up” by NPG, which was taking a “majority investment” in the company.

A post on the Frontiers web site also talked of NPG taking a “majority investment”, and quoted an approving Philip Campbell (Nature’s Editor-in-Chief) saying, “Frontiers is innovating in many ways that are of interest to us and to the scientific community”.  

In reality it was Holtzbrinck Publishing Group that had invested in Frontiers, not NPG, although Holtzbrinck was the owner of Macmillan Science and Education (and thus of NPG).

It was also unclear whether the money that Holtzbrinck had invested in Frontiers could be described as a “majority investment”. Speaking to Science in 2015, Frontiers’ Executive Editor Frederick Fenter described it rather as a “minority share”.

Either way, the precarious nature of Frontiers’ relationship with Nature became all too evident in January 2015, when it was announced that Macmillan Science and Education (along with NPG) was merging with German science publisher Springer. There was no mention of Frontiers, and the situation was only clarified when Macmillan posted a tweet in response to the enquiries it was receiving about the status of Frontiers.

Looking back, it would appear the much-lauded relationship between NPG and Frontiers was more wish fulfilment than substance – encapsulated perhaps by a glossy 7-minute video produced at the time that (amongst other things) includes a clip of the CEO of Macmillan Science and Education (and former MD of NPG) Annette Thomas welcoming Frontiers to Macmillan’s office in London, lauding its achievements and promise, but failing to specify what exactly Nature planned to do with Frontiers.

The true state of affairs does not appear to have been publicly acknowledged until the 2015 Science article cited above. When asked to clarify the situation Fenter replied: “We made the decision about 6 months ago to make a clean separation and never to mention again that [NPG] has some kind of involvement in Frontiers.”

Critics


Like most successful open access publishers, Frontiers has attracted controversy along the way. There have been complaints, for instance, about its peer review process (including an oft-repeated claim that its editorial system does not allow papers to be rejected), complaints about the level of “spam” it bombards researchers with, and complaints that its mode of operating is inappropriately similar to that of the multi-level marketing company Amway – by, for instance, requiring editors to recruit further editors within a pyramidal editorial and journal structure, setting editors targets for the number of papers they have to publish in their journal each year, and requiring that they themselves publish in the journal.

There have also been complaints about the way that Frontiers promotes itself on its blog. Its posts have attracted considerable attention (including from high-profile media outlets like the Times Higher), but critics argue that while its contributions tend to be presented as research, the data is cherry-picked in a self-serving way. See, for instance, here, here, here, here, and here.

In addition, Frontiers has attracted criticism for publishing a number of controversial papers (see here and here for instance), and in 2014 it was accused of caving in to specious libel threats by retracting a legitimate paper. The latter led to Frontiers’ associate editor Björn Brembs publicly resigning.

A number of other prominent researchers have publicly criticised Frontiers too. In June, for instance, a blog critique was posted by Dorothy Bishop, Professor of Developmental Neuropsychology at the University of Oxford, and another one a month later by Melissa Terras, Professor of Digital Humanities in the Department of Information Studies at University College London (UCL).

More recently, in January, Micah Allen, a cognitive neuroscientist at UCL, rehearsed the various complaints against Frontiers in a blog post entitled “Is Frontiers in Trouble?”.

But the most controversial incident occurred last May, when Frontiers sacked 31 editors amid a row over independence. The editors complained that Frontiers’ publication practices are designed to maximise the company's profits, not the quality of papers, and that this could harm patients.

The wave of criticism reached a peak last October when Jeffrey Beall added Frontiers to his list of “potential, possible, or probable predatory scholarly open-access publishers”.

Supporters


On the other hand, Frontiers has no shortage of fans and supporters, not least amongst its army of editors and authors. It has also received public support from a number of industry organisations.

In a statement posted on its web site last year, for instance, the Committee on Publication Ethics (COPE) said, “We note that there have been vigorous discussions about, and some editors are uncomfortable with, the editorial processes at Frontiers. However, the processes are declared clearly on the publisher's site and we do not believe there is any attempt to deceive either editors or authors about these processes. Publishing is evolving rapidly and new models are being tried out. At this point we have no concerns about Frontiers being a COPE member and are happy to work with them as they explore these new models.”

And in response to questions being asked about the role that Frontiers’ journal manager Mirjam Curno plays at COPE, the statement added, “Frontiers has been a member of COPE since January 2015. In the interests of complete transparency, we note here also that one of the Frontiers staff, Mirjam Curno, is a member of COPE council – a position she was elected to when she was employed at the Journal of the International AIDS Society in 2012 and which continued (with the agreement of the COPE Council and on becoming an Associate Member of COPE) after she moved to Frontiers; she is now also a trustee of COPE.”

Around the same time the Open Access Scholarly Publishers Association (OASPA) published this comment: “We are aware that concerns have recently been expressed about the publisher Frontiers, which is a member of OASPA. We have discussed the situation with Frontiers, who have been very responsive in providing us with information on their editorial processes and explaining their procedures. In light of these responses, the Membership Committee remains fully satisfied that Frontiers meets the requirements for membership of OASPA.”

(We could note in passing that Frontiers Executive Editor Frederick Fenter was a candidate for OASPA’s Board in 2015).

As will perhaps be evident, a central focus of the complaints about Frontiers is its editorial processes, including the claim that its online system does not allow papers to be rejected. Markram agrees that there has been some confusion over this. While insisting that reviewers have always been able to reject papers, she acknowledges below that feedback indicated “it was not clear to them how to recommend a manuscript for rejection to the handling Editor.”

This issue, she says, has now been addressed. “Based on the feedback we have now renamed this option ‘withdraw from review/recommend rejection’, and the reasons, which reviewers can choose from to indicate why, have also been split accordingly.”

Daniel Lakens, an assistant professor at Eindhoven University of Technology, has experienced Frontiers as author, reviewer and editor. He has published several papers, and was for two years an associate editor for Frontiers in Cognition, resigning last month due to a lack of time. He continues to act as a reviewer.

Lakens suspects that much of the criticism comes from researchers who have failed to understand, or are not comfortable with, Frontiers’ distinctive peer review process.

“The review process itself is much more collaborative. This is a good thing if you find good reviewers willing to invest time in improving manuscripts. Forcing scientists to enter a discussion, and respond to arguments from the other side, leads to bigger improvements in manuscripts than at traditional journals, in my opinion. But it really depends on the mind-set of the reviewers and authors.”

The other important difference, he says, is Frontiers’ commitment to publishing methodologically sound research, regardless of significance levels or novelty.

“Publication bias is probably the biggest challenge that modern science faces. I think it is important that Frontiers takes a responsibility in publishing all sound research. Some reviewers, more used to traditional journals, just want to reject papers they don’t like. For example, this happened when I submitted my own article to Frontiers, where a reviewer thought there was nothing novel in my explanation of effect sizes, and withdrew from the revision process. It would have been better if this reviewer had instead provided some suggestions to improve it (which was no doubt possible), because the rather substantial interest in the article (it has been cited 200+ times) suggests his judgment about the novelty of the paper was irrelevant.”

Lakens is also sceptical about claims that it is not possible to reject papers. “Every manuscript I wanted to reject as a Frontiers editor has been rejected.”

Radical when it started


Lakens adds: “Frontiers was radical when it started and paved the way for even more radical open access journals. The collaborative review process is still in many ways novel and, very often, an improvement over the traditional peer review process. But now we see even more innovative journals than Frontiers emerging. One example is PeerJ, which greatly reduces the cost of open access publishing, and also embraces open reviews.”

In truth, impact-neutral reviewing was pioneered by PLOS ONE in 2006, a year before Frontiers appeared on the scene. But implicit in Lakens’ statement, I think, is a belief that while it has played an important part in promoting new types of peer review, Frontiers now faces competition from younger, more innovative, and less expensive publishers like PeerJ and F1000Research.

It clearly will not help that Beall has added Frontiers to his list, which Lakens believes could encourage researchers to shun the publisher. “Many scientists are sensitive to prestige, and if these researchers would not be able to evaluate the quality of science themselves, they might think twice about submitting to Frontiers, although I would hope this group is rather small.”

Beall, of course, is himself a controversial figure, and his list is widely criticised by open access advocates. “I think Beall’s list is not transparent,” says Lakens. “Inclusions are not justified, and occur on the basis of the personal opinion of a single individual. The scientific community should ignore Beall’s list, and pay more attention to the Directory of Open Access Journals (although no list will be perfect). I think Frontiers should take valid criticisms seriously, because in science, there is always room for improvement, but I don’t think Beall’s list falls into the category of ‘valid criticism’.”

It is indeed remarkable that the decisions of a lone librarian sitting in a Colorado library could have a significant (and global) impact on a publisher. Only too aware of this, in December Frontiers dispatched Fenter and Curno to Colorado to meet with Beall and try and persuade him to take Frontiers back off his list – apparently without success.

Underlying all this, of course, is the fact that the emergence of the Internet has triggered manifold controversies within the research community. Above all, it has plunged scholarly communication into a period of considerable upheaval, and put inherited ways of doing things under growing pressure, not least traditional peer review. The cost of publishing research papers is a further source of often bitter disagreement – and open access publishing has amplified both issues.

A key question here seems to be how publishers find an appropriate role for themselves in the emerging new landscape. In the Q&A below Markram says that “dumping all content on the Internet, unchecked, in multiple versions of readiness, and as cheaply as possible, is not a service to anyone”.

Many, if not most, would doubtless agree with this, which would seem to imply a continuing gatekeeping role for publishers. But who these publishers should be, exactly what kind of service they should provide, and what they should charge for that service remain unresolved.

On the issue of costs, Markram asserts that under the traditional subscription system it costs $7,000 to publish an article, a figure she says that OA publishers have reduced to around $2,000, and Frontiers to just $1,100.

I am sure many would challenge these figures, but I will finish with two (rhetorical) questions. First (leaving aside the issue of whether pedestrian papers written solely in order to bulk up CVs should in fact be formally published), if the average rejection rate at Frontiers is (as Markram says below) just 19% (i.e. 81% of papers are accepted), and if some of those articles turn out not even to have met Frontiers’ lower threshold for publication (as Markram puts it, “no peer-review is bullet-proof, so problematic articles regrettably do sometimes get through”), then does $1,100 (or $2,000) per paper represent good value for money? Second, how high does the acceptance rate need to go before simply dumping papers on the Internet becomes a logical way for the research community to save itself millions of dollars a year?

To read Markram’s detailed answers please click on the link below. These are in a pdf file preceded by this introduction.

Readers should be aware that the Q&A is long. I have chosen not to edit Kamila Markram’s text, and there are some repetitions, but I was keen to allow her to address my questions in her own words, and as fully as she felt appropriate. I have, however, made ample use of pull-quotes.

####

This interview is published under a Creative Commons licence, so you are free to copy and distribute it as you wish, so long as you credit me as the author, do not alter or transform the text, and do not use it for any commercial purpose.

To read the interview click HERE.

Sunday, 17 January 2016

The OA Interviews: Mikhail Sergeev, Chief Strategy Officer at Russia-based CyberLeninka

Пока рак на горе не свистнет, мужик не перекрестится (“Until the crayfish on the hill whistles, the peasant will not cross himself” – i.e. nothing gets done until circumstances force it)

Mikhail Sergeev

While open access was not conceivable until the emergence of the Internet (and thus could be viewed as just a natural development of the network) the “OA movement” primarily grew out of a conviction that scholarly publishers have been exploiting the research community, not least by constantly increasing journal subscriptions. It was for this reason that the movement was initially driven by librarians.

OA advocates reasoned that while the research community freely contributes the content in scholarly journals, and freely peer reviews that content, publishers then sell it back to research institutions at ever more extortionate prices, at levels in fact that have made it increasingly difficult for research institutions to provide faculty members with access to all the research they need to do their jobs.

What was required, it was concluded, was for subscription paywalls to be dismantled so that anyone can access all the research they need — i.e. open access. In the process, argued OA advocates, the ability of publishers to overcharge would be removed, and the cost of scholarly publishing would come down accordingly.

But while the movement has persuaded many governments, funders and research institutions that open access is both inevitable and optimal, and should therefore increasingly be made compulsory, publishers have shown themselves to be extremely adept at appropriating OA for their own ends, not least by simply swapping subscription fees for article-processing charges (APCs) without realising any savings for the research community.

This is all too evident in Europe right now. In the UK, for instance, government policy is enabling legacy publishers to migrate to an open access environment with their high profits intact. Indeed, not only are costs not coming down but — as subscription publishers introduce hybrid OA options that enable them to earn both APCs and subscriptions from the same journals (i.e. to “double-dip”) — they are increasing.

Meanwhile, in The Netherlands, universities are signing new-style Big Deals that combine both subscription and OA fees. While these are intended to manage the transition to OA in a cost-efficient way, publishers are clearly ensuring that they experience no loss of revenue as a result (although we cannot state that as a fact since the contracts are subject to non-disclosure clauses).

More recently, the German funder Max Planck has begun a campaign intended to engineer a mass “flipping” of legacy journals to OA business models. Again, we can be confident that publishers will not co-operate with any such plan unless they are able to retain their current profit levels.  

It is no surprise, therefore, that many OA advocates have become concerned that the OA project has gone awry.

Alternative models


As the implications of this have sunk in there has been growing interest in alternative publishing models, particularly ones that hold out the promise of disintermediating legacy publishers.

So, for instance, we are seeing the creation of “overlay journals”, and other new publishing initiatives in which the whole process is managed and controlled by the research community itself. Examples of the latter include the use of institutional repositories as publishing platforms, and the founding of new OA university presses like Collabra and Lever Press.

Others have cast their eyes to the Global South (where the affordability problem is both more longstanding and far more acute) for possible alternative models. In doing so, they frequently point to Latin American initiatives like SciELO and Redalyc. (See, for instance, here, here, and here).

Both these services started out as regional bibliographic databases, but over time have added more and more freely-available full-text journal content. Today SciELO hosts 573,525 research articles from 1,249 journals. Redalyc has more than 425,000 full-text articles from over 1,000 journals.

But does Western Europe need to look as far afield as Latin America for this kind of model? The Moscow-based CyberLeninka, for instance, reports that it currently hosts 940,000 papers from 990 journals, all of which are open access, and approximately 70% of which are available under a CC BY licence. Moreover, it has amassed this content in just three years.

Significantly, it has achieved this without the support of either the Russian government, or any private venture capital, as CyberLeninka’s Chief Strategy Officer Mikhail Sergeev explains in the Q&A below. The service was created, and is maintained, by five people working from home. Their goal: to create a prototype for a Russian open science infrastructure.

What struck me in speaking to Sergeev is that many of the problems the Russian research community faces today are strikingly similar to those facing the research community everywhere, if somewhat more extreme in both scope and effect. So could CyberLeninka be developing solutions that the West could learn from?

On one hand it would seem not, since CyberLeninka does not currently have a business model, and so no income. It is also not entirely clear to me how the 990 journals it hosts fund and manage themselves. One would also want to know more about the quality and topicality of the 940,000 papers on the service. What is clear is that the most prestigious Russian journals are not freely available today. We in the West can certainly identify with that problem.

On the other hand, to focus on business models alone is perhaps to miss the point. Surely the Russian government should be funding CyberLeninka, and surely it should be seeking to get the prestigious journals published by the Russian Academy of Sciences onto CyberLeninka too? Admittedly the latter could present challenges, as the journals were, in effect (and mistakenly), “privatised” in the 1990s. But that does not mean it should not happen.

The point to bear in mind is that the OA strategies currently being pursued in the West appear to be no more sustainable than the subscription system. Better solutions are therefore needed, and so the more experimentation the better.

And remember, CyberLeninka says it has achieved what it has achieved with no source of revenue. Moreover, in the process of loading journals on its system it is making them OA without the costs normally associated with journal “flipping”. That should focus minds on the cost of scholarly publishing.

In the meantime, of course, CyberLeninka continues to face a serious financial challenge. If it is to prosper, and to embark on the many new initiatives it has set its sights on — including developing overlay journals and offering other repository-based publishing services — some source of funding will be essential.

####

If you wish to read the interview with Mikhail Sergeev, please click on the link below.

I am publishing the interview under a Creative Commons licence, so you are free to copy and distribute it as you wish, so long as you credit me as the author, do not alter or transform the text, and do not use it for any commercial purpose.

To read the interview (as a PDF file) click HERE.

Wednesday, 30 December 2015

The OA Interviews: Toma Susi, physicist, University of Vienna

Since the birth of the open access movement in 2002, demands for greater openness and transparency in the research process have both grown and broadened. 

Today there are calls not just for OA to research papers, but (amongst other things) to the underlying data, to peer review reports, and to lab notebooks. We have also seen a new term emerge to encompass these different trends: open science.
Toma Susi

In response to these developments, earlier this year the Research Ideas & Outcomes (RIO) Journal was launched. 

RIO’s mission is to open up the entire research cycle — by publishing project proposals, data, methods, workflows, software, project reports and research articles. These will all be made freely available on a single collaborative platform. 

And to complete the picture, RIO uses a transparent, open and public peer-review process. The goal: to “catalyse change in research communication by publishing ideas, proposals and outcomes in order to increase transparency, trust and efficiency of the whole research ecosystem.”

Importantly, RIO is not intended for scientists alone. It is seeking content from all areas of academic research, including science, technology, humanities and the social sciences.

Unsurprisingly perhaps, the first grant proposal made openly available on RIO (on 17th December) was published by a physicist — Finnish-born Toma Susi, who is based at the University of Vienna in Austria.

Susi’s proposal — which has already received funding from the Austrian Science Fund (FWF) — is for a project called “Heteroatom quantum corrals and nanoplasmonics in graphene” (HeQuCoG). This is focused on the controlled manipulation of matter on the scale of atoms.

More specifically, the aim is “to create atomically precise structures consisting of silicon and phosphorus atoms embedded in the lattice of graphene using a combination of ion implantation, first principles modelling and electron microscopy.”

The research has no specific application in mind but, as Susi points out, if “we are able to control the composition of matter on the atomic scale with such precision, there are bound to be eventual uses for the technology.”

Below Susi answers some questions I put to him about his proposal, and his experience of publishing on RIO.

The interview begins …


RP: Can you start by saying what is new and different about the open access journal RIO, and why that is appealing to you?

TS: Personally, the whole idea of publishing all stages of the research cycle was something even I had not considered could or should be done. However, if one thinks about it objectively, in terms of an optimal way to advance science, it does make perfect sense. At the same time, as a working scientist, I can see how challenging a change of mind-set this will be… which makes me want to do what I can to support the effort. 

RP: Are you associated with the journal in any way — e.g. on the editorial board?

TS: I have volunteered to be a physics subject editor for the journal, although I have not yet handled any articles.

RP: You published a grant proposal in the journal, which is certainly unusual (perhaps a first?). Why did you do so, and how much (if anything) did you pay to do so?

TS: I should first point out that although rare, this was by no means the first instance. There are some previous proposals — see for instance here, here, and here.

However, RIO is the first attempt to do this systematically across disciplines, with open pre- or post-publication peer review.

The reason for me to do it was that I had received funding recently for a project that I am passionate about, and whose proposal I was quite proud of. At the same time, following my long-term interest in open access and open science, I had offered my services for RIO. Thus I felt I should lead by example in promoting openness in science funding by being one of the first to publish a grant proposal there. The recent RIO editorial gives a good account of the underlying philosophy.

As a volunteer editor, I was allowed to publish one article for free in RIO, which I used to publish the proposal. RIO’s normal pricing is explained here, and it would also have been no problem to fund the publication from my FWF grant.

Instinctively scary


RP: Writing about your experience you have said that publishing a grant proposal is an “instinctively scary” thing to do. Can you expand on that, and say something about both the benefits and the potential risks of publishing a grant proposal?

TS: I said instinctively, because there was an almost visceral reaction to the idea; to give away MY ideas, to let other people take advantage of MY work! But when one steps back from the competitive reality of being a scientist, it should become obvious this is exactly the desirable outcome for science. But the reaction is what it is.

In terms of potential risks, a fear of being ‘scooped’ is probably the big one. I have made a proposal, based on all my expertise and knowledge, to pursue a certain specific research direction. If someone else reads the plan and pursues it, and is perhaps a bit luckier or a bit more hard working, they might reach and publish the results before I can. The way the journal system is, this would likely result in them getting more credit and more accolades for the work.

As for potential benefits, I do hope I might get additional exposure for my work, and for my funder. I’m personally very excited about the project, and extremely grateful to the FWF for still funding risky and curiosity-driven research. Since they seem to be one of the more forward-looking funders out there in terms of open access and so on, I hope being on the cutting edge in open research funding can contribute to that.

And of course, if someone does scoop me, at least they have to cite the proposal now: http://dx.doi.org/10.3897/rio.1.e7479

RP: I believe it would be quite hard for other researchers to scoop you due to the lab conditions required. Would other researchers be far more cautious than you in publishing a grant proposal?

TS: The proposal is indeed technically challenging, with only about a dozen groups in the world well placed to pursue it. So publishing this was certainly an easier choice than it would be in many cases. Other researchers would have to weigh those considerations for themselves.

I think the degree of caution will depend on the field in general, and on each proposal in particular. However, I think there is an extra difficulty in being amongst the first, and hopefully my example would at least put this option on more people’s radars.

RP: As you noted, this is a proposal for which you have already received funding. Would you have published it if you didn’t yet have funding? Or might it have been more useful to do it before?

TS: That’s right. It might indeed have been useful to publish beforehand and receive constructive feedback on the project. However, I doubt I would have considered publishing it before getting funded; that would have felt too risky. Alternatively, if I had been rejected and known I would not be able to pursue the plan further, I might have considered publishing then.

RP: How much money is the grant?

TS: The three-year grant is for 323,972.25 €, which is public information. 

In layman’s terms


RP: I believe the grant is for work on atomic-scale engineering, but can you say something in layman’s terms about what your research is, and the likely applications? If a member of the public asked you why you should be funded to do the research what would you say?

TS: We are aiming to precisely control the placement of heavier atoms in the lattice of graphene using an electron microscope as an engineering tool.

There are some potential applications in plasmonics, i.e. the control of the interaction of light with the electrons of the material, but really, the project is more about pushing the boundaries of the possible. No matter what the technical requirements now are, if we are able to control the composition of matter on the atomic scale with such precision, there are bound to be eventual uses for the technology. 

So my answer to a member of the public would be: to show we can design materials with atomic precision. You can find a comprehensive explanation of the original research on my blog.

RP: Presumably this is for follow-up research to that described in your 2014 Physical Review Letters paper. If so, what is the next step?

TS: In terms of the research, the next steps are exactly as described in the published proposal! The project started running in September 2015, and we are now working on sample preparation and the first modelling steps as planned.

I have a PhD student starting on the project at the end of January, and we’ll definitely think about publishing his thesis plan openly, too, alongside other outputs.

RP: I think RIO offers a basic publishing service. So, for instance, researchers have to type in and format their publications. How long did it take you to do this, and how much of a disincentive do you think this might be for researchers who are not as enthusiastic about open access as you are?

TS: There’s no formatting as such (apart from basics like italics and bold); rather, the writing tool itself takes care of typesetting automatically (based on a built-in template). So there’s actually less for the author to do than when editing a normal Word or LaTeX manuscript.

For my proposal, I copied in the text of my original OS X Pages document, which took just a few minutes. Re-inserting citations and figures took perhaps two hours more, which I hope they can somehow streamline in the future.

All in all, it was one of the more painless publishing experiences I have had, so hard to see it as much of a disincentive.

Please see my blog post for some more details on the process.

Tough sell


RP: To put it more bluntly, would anyone who was not (as you are) a committed OA advocate really have much interest in following your example?

TS: At this point, I don’t doubt it’s a bit of a tough sell to get your typical scientist interested. On the other hand, this does yield a citable publication with very little extra effort, so depending on how much attention these receive, it might well prove attractive more generally. In the longer term, though, since funders have good reasons to encourage or even mandate grant publications, the push might come from them.

RP: What licensing issues arise in publishing a grant proposal? Are they different to publishing a research paper?

TS: Figure copyrights were the main issue. I had used several figures from the literature to illustrate my ideas (with the proper citations, of course). If I had written, for example, a review article, the common practice would have been to obtain reuse rights to the figures via Rightslink. However, since RIO content is CC BY 4.0 licensed and machine-readable, that would have resulted in potential problems down the line. Thus we went to the extra effort of asking the original authors for copyright-free versions of their figures, which we received without exception within a week.

RP: I think you also published the grant review reports alongside the proposal. Is open peer review obligatory with RIO? Should it be?

TS: We did, after passing a request through the funder to the original referees (who had not originally agreed to the reports being made public); both gave permission. In general, peer review at RIO is mandatorily public, as befits the philosophy of the journal.

RP: What kind of feedback have you had? Have you, for instance, had any new offers to collaborate as a result?

TS: I think it’s still too early to see whether publishing the proposal will result in offers of collaboration or useful new connections. However, I did receive a fair bit of feedback from my collaborators when I floated the idea of publishing the proposal, and from the people whose figures I had used in the original plan when I asked for their permission to reuse them. Somewhat to my surprise, all the feedback I got was very supportive and encouraging.

RP: What other issues arose as a result of publishing your grant proposal, and what are your expectations for RIO going forward?

TS: No other issues, so far the process has been quite positive. It remains to be seen what effects the publication will have.

As for my expectations for RIO, I fear that they will have a hard time in getting significant uptake for their more ambitious initiatives, but I do hope the time is ripe and we’re in for a pleasant surprise. 

RP: Thank you very much for taking the time to answer my questions.

Thursday, 17 December 2015

The open access movement slips into closed mode

In October 2003, at a conference held by the Max Planck Society (MPG) and the European Cultural Heritage Online (ECHO) project, a document was drafted that came to be known as the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities.

More than 120 cultural and political organisations from around the world attended and the names of the signatories are openly available here.

Today the Berlin Declaration is held to be one of the keystone events of the open access movement — offering as it did a definition of open access, and calling as it did on all researchers to publish their work in accordance with the open principles outlined in the Declaration.

“In order to realize the vision of a global and accessible representation of knowledge,” the Declaration added, “the future Web has to be sustainable, interactive, and transparent.”

The word transparent is surely important here, and indeed the open access movement (unsurprisingly) prides itself on openness and transparency. But as with anything precious, there is always the danger that openness and transparency can give way to secrecy and opaqueness.

By invitation only


There have been annual follow-up conferences to monitor implementation of the Berlin Declaration since 2003, and these have been held in various parts of the world — in March 2005, for instance, I attended Berlin 3, which that year took place in Southampton (and for which I wrote a report). The majority of these conferences, however, have been held in Germany, with the last two seeing a return to Berlin. This year’s event (Berlin 12) was held on December 8th and 9th at the Seminaris CampusHotel Berlin.

Of course, open access conferences and gatherings are two a penny today. But given its historical importance, the annual Berlin conference is viewed as a significant event in the OA calendar. It was particularly striking, therefore, that this year (unlike most OA conferences, and so far as I am aware all previous Berlin conferences) Berlin 12 was “by invitation only”.

Also unlike other open access conferences, there was no live streaming of Berlin 12, and no press passes were available. And although a Twitter hashtag was available for the conference, this generated very little in the way of tweets, with most in any case coming from people who were not actually present at the conference, including a tweet from a Max Planck librarian complaining that no MPG librarians had been invited to the conference.

Why it was decided to make Berlin 12 a closed event is not clear. We do however know who gave presentations as the agenda is online, and this indicates that there were 14 presentations, 6 of which were given by German presenters (and 4 of these by Max Planck people). This is a surprising ratio given that the subsequent press release described Berlin 12 as an international conference. There also appears to have been a shortage of women presenters (see here, here, and here).

But who were the 90 delegates who attended the conference? That we do not know. When I emailed the organisers to ask for a copy of the delegate list my question initially fell on deaf ears. After a number of failed attempts, I contacted the Conference Chair Ulrich Pöschl.

Pöschl replied, “In analogy to most if not all of the many scholarly conferences and workshops I have attended, we are not planning a public release of the participants’ list. As usual, the participants of the meeting received a list of the pre-registered participants’ names and affiliations, and there is nothing secret about it. However, I see no basis for releasing the conference participants’ list to non-participants, as we have not asked the participants if they would agree to distributing or publicly listing their names (which is not trivial under German data protection laws; e.g., on the web pages of my institute, I can list my co-workers only if they explicitly agree to it).”

This contrasts, it has to be said, with Berlin 10 (held in South Africa), where the delegate list was made freely available online, and is still there. Moreover, the Berlin 10 delegate list can be sorted by country, by institution and by name. There is also a wealth of information about the conference on the home page here.

We could add that publishing the delegate list for open access conferences appears to be pretty standard practice — see here and here for instance.

However, is Pöschl right to say that there is a specific German problem when it comes to publishing delegate lists? I don’t know, but I note that the delegate list for the annual conference for the Marine Ingredients Organisation (IFFO) (which was held in Berlin in September) can be downloaded here.

Outcome


Transparency aside, what was the outcome of the Berlin 12 meeting? When I asked Pöschl he explained, “As specified in the official news release from the conference, the advice and statements of the participants will be incorporated in the formulation of an ‘Expression of Interest’ that outlines the goal of transforming subscription journals to open access publishing and shall be released in early 2016”.

This points to the fact that the central theme of the conference was the transformation of subscription journals to Open Access, as outlined in a recent white paper by the Max Planck Digital Library. Essentially, the proposal is to “flip” all scholarly journals from a subscription model to an open access one — an approach that some have described as “magical thinking” and/or impractical (see, for instance, here, here and here).

The Expression of Interest will presumably be accompanied by a roadmap outlining how the proposal can be realised. Who will draft this roadmap and who will decide what it contains is not entirely clear. The conference press release says, “The key to this lies in the hands of the scientific institutions and their sponsors”, and as Pöschl told me, the advice and comments of delegates to Berlin 12 will be taken into account in producing the Expression of Interest. If that is right, should we not know exactly who the 90 delegates attending the conference were?

All in all, we must wonder why there was a need for all the secrecy that appears to have surrounded Berlin 12. And given this secrecy, should we perhaps be concerned that the open access movement could become a kind of secret society, in which a small self-selected group of unknown people makes decisions and proposals intended to impact the entire global scholarly communication system?

Either way, what happened to the openness and transparency inherent in the Berlin Declaration?

In the spirit of that transparency I invite all those who attended Berlin 12 to attach their names below (using the comment functionality) and, if they feel so inspired, to share their thoughts on whether open access conferences ought to be held in camera in the way Berlin 12 appears to have been.

Or is it wrong and/or naïve to think that open access implies openness and transparency not just in research outputs, but also in the decision making and processes involved in making open access a reality?



Tuesday, 1 December 2015

Open Access, Almost-OA, OA Policies, and Institutional Repositories

Much ink has been spilt over the relative merits of green and gold open access (OA). It is not my plan to rehearse those debates again right now. Rather, I want to explore four aspects of green OA.

First, I want to discuss how many of the documents indexed in “open” repositories are in fact freely available, rather than on “dark deposit” or otherwise inaccessible. 

Second, I want to look at the so-called eprint request Button, a tool developed to allow readers to obtain copies of items held on dark deposit in repositories. 

Third, I want to look at some aspects of OA policies and the likely success of so-called IDOA policies.

Finally I want to speculate on possible futures for institutional repositories. 

However, I am splitting the text into two parts. The first two topics are covered in the attached pdf file; the latter two will be covered in a follow-up piece I plan to publish at a later date.

To read the first part (a 16-page pdf) please click the link here.