Tuesday 27 May 2014

Working for a phase transition to an open commons-based knowledge society: Interview with Michel Bauwens

Today a summit starts in Quito, Ecuador that will discuss ways in which the country can transform itself into an open commons-based knowledge society. The team that put together the proposals is led by Michel Bauwens from the Foundation for Peer-to-Peer Alternatives. What is the background to this plan, and how likely is it that it will bear fruit?  With the hope of finding out I spoke recently to Bauwens.
Michel Bauwens
One interesting phenomenon to emerge from the Internet has been the growth of free and open movements, including free and open source software, open politics, open government, open data, citizen journalism, creative commons, open science, open educational resources (OER), open access etc.

While these movements often set themselves fairly limited objectives (e.g. “freeing the refereed literature”), some network theorists maintain that the larger phenomenon they represent has the potential to do more than replace traditional closed and proprietary practices with more open and transparent approaches, or subordinate narrow commercial interests to the greater needs of communities and the larger society. Because the network enables ordinary citizens to collaborate on large, meaningful projects in a distributed way (and absent traditional hierarchical organisations), it could have a significant impact on the way in which societies and economies organise themselves.

In his influential book The Wealth of Networks, for instance, Yochai Benkler identifies and describes a new form of production that he sees emerging on the Internet — what he calls “commons-based peer production”. This, he says, is creating a new Networked Information Economy.

Former librarian and Belgian network theorist Michel Bauwens goes so far as to say that by enabling peer-to-peer (P2P) collaboration, the Internet has created a new model for the future development of human society. In addition to peer production, he explained to me in 2006, the network also encourages the creation of peer property (i.e. commonly owned property), and peer governance (governance based on civil society rather than representative democracy).

Moreover, what is striking about peer production is that it emerges and operates outside traditional power structures and market systems. And when those operating in this domain seek funding they increasingly turn not to the established banking system, but to new P2P practices like crowdfunding and social lending.

When in 2006 I asked Bauwens what the new world he envisages would look like in practice he replied, “I see a P2P civilisation that would have to be post-capitalist, in the sense that human survival cannot co-exist with a system that destroys the biosphere; but it will nevertheless have a thriving marketplace. At the core of such a society — where immaterial production is the primary form — would be the production of value through non-reciprocal peer production, most likely supported through a basic income.”

Unrealistic and utopian?


So convinced was he of the potential of P2P that in 2005 Bauwens created the Foundation for Peer-to-Peer Alternatives. The goal: to “research, document and promote peer-to-peer principles”.

Critics dismiss Bauwens’ ideas as unrealistic and utopian, and indeed in the eight years since I first spoke with him much has happened that might seem to support the sceptics. Rather than being discredited by the 2008 financial crisis, for instance, traditional markets and neoliberalism have tightened their grip on societies, in all parts of the world.


At the same time, the democratic potential and openness Bauwens sees as characteristic of the network is being eroded in a number of ways. While social networking platforms like Facebook enable the kind of sharing and collaboration Bauwens sees lying at the heart of a P2P society, for instance, there is a growing sense that these services are in fact exploitative, not least because the significant value created by the users of these services is being monetised not for the benefit of the users themselves, but for the exclusive benefit of the large corporations that own them.

We have also seen a huge growth in proprietary mobile devices, along with the flood of apps designed to run on them — a development that caused Wired’s former editor-in-chief Chris Anderson to conclude that we are witnessing a dramatic move “from the wide-open Web to semi-closed platforms”. And this new paradigm, he added, simply “reflects the inevitable course of capitalism”.

In other words, rather than challenging or side-lining the traditional market and neoliberalism, the network seems destined to be appropriated by it — a likelihood that for many was underlined by the recent striking down of the US net neutrality regulations.

It would also appear that some of the open movements are gradually being appropriated and/or subverted by commercial interests (e.g. the open access and open educational resources movements).

While conceding that a capitalist version of P2P has begun to emerge, Bauwens argues that this simply makes it all the more important to support and promote social forms of P2P. And here, he suggests, the signs are positive, with the number of free and open movements continuing to grow and the P2P model bleeding out of the world of “immaterial production” to encompass material production too — e.g. with the open design and open hardware movements, a development encouraged by the growing use of 3D printers.

Bauwens also points to a growth in mutualisation, and the emergence of new practices based around the sharing of physical resources and equipment.

Interestingly, these latter developments are often less visible than one might expect because much of what is happening in this area appears to be taking place outside the view of mainstream media in the global north.

Finally, says Bauwens, the P2P movement, or commoning (as some prefer to call it), is becoming increasingly politicised. Amongst other things, this has seen the rise of new political parties like the various Pirate Parties.

Above all, Bauwens believes that the long-term success of P2P is assured because its philosophy and practices are far more sustainable than the current market-based system. “Today, we consider nature infinite and we believe that infinite resources should be made scarce in order to protect monopolistic players,” he says below. “Tomorrow, we need to consider nature as a finite resource, and we should respect the abundance of nature and the human spirit.”

Periphery to mainstream


And as the need for sustainability becomes ever more apparent, more people will doubtless want to listen to what Bauwens has to say. Indeed, what better sign that P2P could be about to move from the periphery to the mainstream than an invitation Bauwens received last year from three Ecuadorian governmental institutions, who asked him to lead a team tasked with coming up with proposals for transitioning the country to a society based on free and open knowledge.

The organisation overseeing the project is the FLOK Society (free, libre, open knowledge). As “commoner” David Bollier explained when the project was announced, Bauwens’ team was asked to look at many interrelated themes, “including open education; open innovation and science; ‘arts and meaning-making activities’; open design commons; distributed manufacturing; and sustainable agriculture; and open machining.”

Bollier added, “The research will also explore enabling legal and institutional frameworks to support open productive capacities; new sorts of open technical infrastructures and systems for privacy, security, data ownership and digital rights; and ways to mutualise the physical infrastructures of collective life and promote collaborative consumption.”

In other words, said Bollier, Ecuador “does not simply assume — as the ‘developed world’ does — that more iPhones and microwave ovens will bring about prosperity, modernity and happiness.”

Rather it is looking for sustainable solutions that foster “social and territorial equality, cohesion, and integration with diversity.”

The upshot: In April Bauwens’ team published a series of proposals intended to transition Ecuador to what he calls a sustainable civic P2P economy. And these proposals will be discussed at a summit to be held this week in the capital of Ecuador (Quito).

“As you can see from our proposals, we aim for a simultaneous transformation of civil society, the market and public authorities,” says Bauwens. “And we do this without inventing or imposing utopias, but by extending the working prototypes from the commoners and peer producers themselves.”

But Bauwens knows that Rome wasn’t built in a day, and he realises that he has taken on a huge task, one fraught with difficulties. Even the process of putting the proposals together has presented him and his team with considerable challenges. Shortly after they arrived in Ecuador, for instance, they were told that the project had been defunded (the funding was fortunately later reinstated). And for the moment it remains unclear whether many (or any) of the FLOK proposals will ever see the light of day.

Bauwens is nevertheless upbeat. Whatever the outcome in Ecuador, he says, an important first stab has been made at creating a template for transitioning a nation state from today’s broken model to a post-capitalist social knowledge society.

“What we have now that we didn’t have before, regardless of implementation in Ecuador, is the first global commons-oriented transition plan, and several concrete legislative proposals,” he says. “They are far from perfect, but they will be a reference from which other locales, cities, (bio)regions and states will be able to make their own adapted versions.”

In the Q&A below Bauwens discusses the project in more detail, including the background to it, and the challenges that he and the FLOK Society have faced.


The interview begins


RP:  We last spoke in 2006 when you discussed your ideas on a P2P (peer-to-peer) society (which I think David Bollier refers to as “commoning”). Briefly, what has been learned since then about the opportunities and challenges of trying to create a P2P society, and how have your thoughts on P2P changed/developed as a result?

MB: At the time, P2P dynamics were mostly visible in the process of “immaterial production”, i.e. productive communities that created commons of knowledge and code. The trend has since embraced material production itself, through open design that is linked to the production of open hardware machinery.

Another trend is the mutualisation of physical resources. We've seen, on the one hand, an explosion in the mutualisation of open workspaces (hackerspaces, fab labs, co-working) and, on the other, the explosion of the so-called sharing economy and collaborative consumption.

This is of course linked to the emergence of distributed practices and technologies for finance (crowd funding, social lending); and for machinery itself (3D printing and other forms of distributed manufacturing). Hence the emergence and growth of P2P dynamics is now clearly linked to the “distribution of everything”.

There is today no place we go where social P2P initiatives are not developing and growing exponentially. P2P is now a social fact.

Since the crisis of 2008, we are also seeing much more clearly the political and economic dimension of P2P. There is now both a clearly capitalist P2P sector (renting and working for free is now called sharing, which is putting downward pressure on income levels) and a clearly social one.  First of all, the generalised crisis of our economic system has pushed more people to search for such practical alternatives. Second, most P2P dynamics are clearly controlled by economic forces, i.e. the new “netarchical” (hierarchy of the network) platforms.

Finally, we see the increasing politicisation of P2P, with the emergence of Pirate Parties, network parties (Partido X in Spain) etc.

We now have to decide more clearly than before whether we want more autonomous peer production, i.e. making sure that the domination of the free social logic of permissionless aggregation is directly linked to the capacity to generate self-managed livelihoods, or whether we are happy with a system in which this value creation is controlled and exploited by platform owners and other intermediaries.

The result of all of this is that my own thoughts are now more directly political. We have developed concrete proposals and strategies to create P2P-based counter-economies that are de-linked from the accumulation of capital, but focused on cooperative accumulation and the autonomy of commons production.

RP: Indeed, and last year you were asked to lead a team to come up with proposals to “remake the roots of Ecuador’s economy, setting off a transition into a society of free and open knowledge”. As I understand it, this would be based on the principles of open networks, peer production and commoning. Can you say something about the project and what you hope it will lead to? Were you commissioned by the Ecuadorian government itself, or by a government or non-government agency in Ecuador?

MB: The project, called FLOKSociety.org, was commissioned by three Ecuadorian governmental institutions, i.e. the Coordinating Ministry of Knowledge and Human Talent, the SENESCYT (Secretaría Nacional de Educación Superior, Ciencia, Tecnología e Innovación) and the IAEN (Instituto de Altos Estudios del Estado).

The legitimacy and logic of the project comes from the National Plan of Ecuador, which is centred around the concept of Good Living (Buen Vivir), a non-reductionist and not exclusively material way of looking at the economy and social life, inspired by the traditional values of the indigenous people of the Andes. The aim of FLOK is to add “Good Knowledge” as an enabler and facilitator of the good life.

The important point to make is that it is impossible for countries and people that are still in neo-colonial dependencies to evolve to more fair societies without access to shareable knowledge. And this knowledge, expressed in diverse commons that correspond to the different domains of social life (education, science, agriculture, industry), cannot itself thrive without also looking at both the material and immaterial conditions that will enable their creation and expansion.

FLOK summit


RP: To this end you have put together a transition plan. This includes a series of proposals (available here), and a main report (here). I assume your plan might or might not be taken up by Ecuador. What is the procedure for taking it forward, and how optimistic are you that Ecuador will embark on the transition you envisage?

MB: The transition plan provides a framework for moving from an economy founded on what we call “cognitive” and “netarchical” capitalism (based on exploitation through IP rents and through social media platforms respectively) to a “mature P2P-based civic economy”.

The logic here is that the dominant economic forms today are characterised by a value crisis, one in which value is extracted but does not flow back to the creators of that value. The idea is to transition to an economy in which this value feedback loop is restored.

So about fifteen of our policy proposals apply this general idea to specific domains, and suggest how open knowledge commons can be created and expanded in these particular areas.

We published these proposals on April 1st in co-ment, an open source tool that allows people to comment on specific concepts, phrases or paragraphs.

This week (May 27th to 30th) the crucial FLOK summit is taking place to discuss the proposals. This will bring together government institutions, social movement advocates, and experts from both Ecuador and abroad.

The idea is to devote three days to reaching a consensus amongst these different groups, and then try and get agreement with the governmental institutions able to carry out the proposals.

So there will be two filters: the summit itself, and then the subsequent follow-up, which will clearly face opposition from different interests.

This is not an easy project, since it is not possible to achieve all this by decree.

RP: Earlier this year you made a series of videos discussing the issues arising from what you are trying to do —  which is essentially to create “a post-capitalist social knowledge society”, or “open commons-based knowledge society”. In one video you discuss three different value regimes, and I note you referred to these in your last answer — i.e. cognitive capitalism, netarchical capitalism and a civic P2P economy. Can you say a little more about how these three different regimes differ and why in your view P2P is a better approach than the other two?

MB: I define cognitive capitalism as a regime in which value is generated through a combination of rent extraction from the control of intellectual property and the control of global production networks, and expressed in terms of monetisation.

What we have learned is that the democratisation of networks, which also provides a new means of production and value distribution, means that this type of value extraction is harder and harder to achieve, and it can only be maintained by increased legal suppression (which erodes legitimacy) or outright technological sabotage (DRM). Neither of these strategies is sustainable in the long term.

What we have also learned is that the network has caused a new model to emerge, one adapted to the P2P age, and which I call netarchical capitalism, i.e. “the hierarchy of the network”. In this model, we see the direct exploitation of human cooperation by means of proprietary platforms that both enable and exploit human cooperation. Crucially, while their value is derived from our communication, sharing and cooperation (an empty platform has no value), and from the use value that we are exponentially creating (Google, Facebook don’t produce the content, we do), the exchange value is exclusively extracted by the platform owners. This is unsustainable because it is easy to see that a regime in which the creators of the value get no income at all from their creation is not workable in the long run; and so it poses problems for capitalism. After all, who is going to buy goods if they have no income?

So the key issue is: how do we recreate the value loop between creation, distribution, and income? The answer for me is the creation of a mature P2P civic economy that combines open contributory communities, ethical entrepreneurial coalitions able to create livelihoods for the commoners, and for-benefit institutions that can “enable and empower the infrastructure of cooperation”.

Think of the core model of our economy as the Linux economy writ large, but one in which the enterprises are actually in the hands of the value creators themselves. Imagine this micro-economic model on the macro scale of a whole society. Civil society becomes a series of commonses with citizens as contributors; the shareholding market becomes an ethical stakeholder marketplace; and the state becomes a partner state, which “enables and empowers social production” through the commonification of public services and public-commons partnerships.

Challenges and distrust


RP: As you indicated earlier, it is not an easy project that you have embarked on in Ecuador, particularly as it is an attempt to intervene at the level of a nation state. Gordon Cook has said of the project: “it barely got off the ground before it began to crash into some of the anticipated obstacles.” Can you say something about these obstacles and how you have been overcoming them?

MB: It is true that the project started with quite negative auspices. It became the victim of internal factional struggles within the government, for instance, and was even defunded for a time after we arrived; the institutions failed to pay our wages for nearly three months, which was a serious issue for the kind of precarious scholar-activists that make up the research team.

However, in March (when one of the sides in the dispute lost, i.e. the initial sponsor Carlos Prieto, rector of the IAEN), we got renewed commitment from the other two institutions. Since then political support has increased, and the summit is about to get underway.

As for Gordon, he became a victim of what we will politely call a series of misinterpreted engagements for the funding of his participation, and it is entirely understandable that he has become critical of the process.

The truth is that the project was hugely contradictory in many different ways, but this is the reality of the political world everywhere, not just in Ecuador.

Indeed, the Ecuadorian government is itself engaged in sometimes contradictory policies and is perceived by civil society to have abandoned many of the early ideas of the civic movement that brought it to power. So, in our attempts at broader participation we have been stifled by the distrust many civic activists have for the government, and the sincerity of our project has been doubted.

Additionally, social P2P dynamics, which of course exist in Ecuador as in many other countries, are not particularly developed there in their modern, digitally empowered forms. It has also not helped that the management of the project has been such that the research team has not been able to directly connect with the political leaders in order to test their real engagement. This has been hugely frustrating.

On the positive side, we have been entirely free to conduct our research and formulate our proposals, and it is hard not to believe that the level of funding the project has received reflects a certain degree of commitment.

So the summit is back on track, and we have received renewed commitments. Clearly, however, the proof of the pudding will be in the summit and its aftermath.

Whatever the eventual outcome, it has always been my conviction that the formulation of the first ever integrated Commons Transition Plan (which your readers will find here), legitimised by a nation-state, takes the P2P and commons movement to a higher geopolitical plane. As such, it can be seen as part of the global maturation of the P2P/commons approach, even if it turns out not to work entirely in Ecuador itself.

RP: I believe that one of the issues that has arisen in putting together the FLOK proposals is that Ecuadorians who live in rural areas are concerned that a system based on sharing could see their traditional knowledge appropriated by private interests. Can you say something about this fear and how you believe your plan can address such concerns?

MB: As you are aware, traditional communities have suffered from systematic biopiracy over the last few decades, with western scientists studying their botanical knowledge, extracting patentable scientific results from it, and then commercialising it in the West.

So fully shareable licenses like the GPL would keep the knowledge in a commons, but would still allow full commercialisation without material benefits flowing back to Ecuador. So what we are proposing is a discussion about a new type of licensing, which we call Commons-Based Reciprocity Licensing. This idea was first pioneered with the Peer Production License as conceived by Dmytri Kleiner.

Such a licence would be designed for a particular usage, say biodiversity research in a series of traditional communities. It allows for free non-commercial sharing and commercial use by not-for-profit entities, and even caters for for-profit entities that contribute back. Importantly, it creates a frontier for for-profits that do not contribute back, and asks them to pay.

What is key here is not just the potential financial flow, but to introduce the principle of reciprocity in the marketplace, thereby creating an ethical economy. The idea is that traditional communities can create their own ethical vehicles, and create an economy from which they can also benefit and which remains under their control.

This concept is beginning to get attention from open machining communities. However, the debate in Ecuador is only starting. Paradoxically, traditional communities are today either looking for traditional IP protection, which doesn't really work for them, or for no-sharing options.

So we really need to develop intermediary ethical solutions for them that can benefit them while also putting them in the driving seat.

Fundamental reversal of our civilisation


RP: In today’s global economy, where practically everyone and everything seems to be interconnected and subject to the rules of neoliberalism and the market, is it really possible for a country like Ecuador to go off in such a different direction on its own?

MB: A full transition is indeed probably a global affair, but the micro-transitions need to happen at the grassroots, and a progressive government would be able to create exemplary policies and projects that show the way.

Ecuador is in a precarious neo-colonial predicament and subject to the pressures of the global market and the internal social groups that are aligned with it. There are clear signs that since 2010 the Ecuadorian government has moved away from the original radical ideas expressed in the Constitution and the National Plan, as we hear from nearly every single civic movement that we've spoken with.

The move for a social knowledge economy is of strategic importance to de-colonialise Ecuador but this doesn't mean it will actually happen. However, the progressive forces have not disappeared entirely from the government institutions.

As such, it is really difficult to predict how successful this project will be. But as I say, given the investment the government has made in the process we believe there will be some progress. My personal view is that the combination of our political and theoretical achievements, and the existence of the policy papers, means that even with moderate progress in the laws and on the ground, we can be happy that we will have made a difference.

So most likely the local situation will turn out to be a hybrid mix of acceptance and refusal of our proposals, and most certainly the situation is not mature enough to accept the underlying logic of our Commons Transition Plan in toto.

In other words, the publication and the dialogue about the plan itself, and some concrete actions, legislative frameworks, and pilot projects, are the best we can hope for. What this will do is give real legitimacy to our approach and move the commons transition to the geo-political stage. Can we hope for more?

Personally, I believe that even if only 20% of our proposals are retained for action, we can consider it a relative success. This is the very first time such a transition, even a partial one, will have happened at the scale of a nation and, as I see it, it gives legitimacy to a whole new set of ideas about societal transition. So I believe it is worthy of our engagement.

We have to accept that the realities of power politics are incompatible with the expectations of a clean process for such a fundamental policy change. But we hope that some essential proposals of the project will make a difference, both for the people of Ecuador and all those that are watching the project.

For the future though, I have to say I seriously question the idea of trying to “hack a society” which was the initial philosophy of the project and of the people who hired us. You can't hack a society, since a society is not an executable program. Political change needs a social and political basis, and it was very weak from the start in this case.

This is why I believe that future projects should first focus on the lower levels of political organisation, such as cities and regions, where politics is closer to the needs of the population. History, though, is always full of surprises, and bold gambles can yield results. So FLOK may yet surprise the sceptics.

RP: If Ecuador did adopt your plan (or a significant part of it), what in your view would be the implications, for Ecuador, for other countries, and for the various free and open movements? What would be the implications if none of it were adopted?

MB: As I say, at this stage I see only the possibility of a few legal advances and some pilot projects as the best case scenario. These, however, would be important seeds for Ecuador, and would give extra credibility to our effort.

I realise it may surprise you to hear me say it, but I don't see this as crucial. I say this because we already have thousands of projects in the world that are engaged in peer production and commons transitions, and this deep trend is not going to change. The efforts to change the social and economic logic will go on with or without Ecuador.

As I noted, what we have now that we didn’t have before, regardless of implementation in Ecuador, is the first global commons-oriented transition plan, and several concrete legislative proposals. They are far from perfect, but they will be a reference from which other locales, cities, (bio)regions and states will be able to make their own adapted versions.

In the meantime, we have to continue the grassroots transformation and rebuild commons-oriented coalitions at every level, local, regional, national, global. This will take time, but since infinite growth is not possible in a finite economy, some type of transition is inevitable. Let’s just hope it will be for the benefit of the commoners and the majority of the world population.

Essentially, we need to build the seed forms of the new counter-economy, and the social movement that can defend, facilitate and expand it. Every political and policy expression of this is a bonus.

As for the endgame, you guessed correctly. What distinguishes the effort of the P2P Foundation, and many of the FLOK researchers, is that we’re not just in the business of adding some commons and P2P dynamics to the existing capitalist framework, but aiming at a profound “phase transition”.

To work for a sustainable society and economy is absolutely crucial for the future of humanity, and while we respect the freedoms of people to engage in market dynamics for the allocation of rival goods, we cannot afford a system of infinite growth and scarcity engineering, which is what capitalism is.

In other words, today, we consider nature infinite and we believe that infinite resources should be made scarce in order to protect monopolistic players; tomorrow, we need to consider nature as a finite resource, and we should respect the abundance of nature and the human spirit.

So our endgame is to achieve that fundamental reversal of our civilisation, nothing less. As you can see from our proposals, we aim for a simultaneous transformation of civil society, the market and public authorities. And we do this without inventing or imposing utopias, but by extending the working prototypes from the commoners and peer producers themselves.

RP: Thanks for speaking with me. Good luck with the summit.

Sunday 4 May 2014

Interview with Kathleen Shearer, Executive Director of the Confederation of Open Access Repositories

In October 1999 a group of people met in New Mexico to discuss ways in which the growing number of “eprint archives” could co-operate.
 
Kathleen Shearer
Dubbed the Santa Fe Convention, the meeting was a response to a new trend: researchers had begun to create subject-based electronic archives so that they could share their research papers with one another over the Internet. Early examples were arXiv, CogPrints and RePEc.

The thinking behind the meeting was that if these distributed archives were made interoperable they would not only be more useful to the communities that created them, but they could “contribute to the creation of a more effective scholarly communication mechanism.”

With this end in mind it was decided to launch the Open Archives Initiative (OAI) and to develop a new machine-based protocol for sharing metadata. This would enable third party providers to harvest the metadata in scholarly archives and build new services on top of them. Critically, by aggregating the metadata these services would be able to provide a single search interface, enabling scholars to interrogate the complete universe of eprint archives as if it were a single archive. Thus was born the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). An early example of a metadata harvester was OAIster.
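To make the mechanics concrete, the sketch below shows what a single OAI-PMH request looks like in practice: a plain HTTP GET carrying the verb ListRecords, answered with an XML document of Dublin Core records. It is a minimal illustration only; the arXiv gateway URL is used purely as an assumed example endpoint, and any repository exposing OAI-PMH answers the same verbs.

```python
import requests
import xml.etree.ElementTree as ET

# Assumed example endpoint; substitute any repository's OAI-PMH base URL.
BASE_URL = "http://export.arxiv.org/oai2"
NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

# A single ListRecords request, asking for records in simple Dublin Core.
resp = requests.get(BASE_URL, params={"verb": "ListRecords",
                                      "metadataPrefix": "oai_dc"})
resp.raise_for_status()
root = ET.fromstring(resp.content)

# Each <record> has a header (identifier, datestamp) and a Dublin Core
# metadata block (title, creator, identifier, ...).
for record in root.findall(".//oai:record", NS):
    oai_id = record.findtext(".//oai:identifier", namespaces=NS)
    title = record.findtext(".//dc:title", namespaces=NS)
    print(oai_id, "-", title)
```

A harvester such as OAIster simply runs requests like this against many repositories and aggregates the resulting metadata into one searchable index.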

Explaining the logic of what they were doing in D-Lib Magazine in 2000, Santa Fe meeting organisers Herbert Van de Sompel and Carl Lagoze wrote, “The reason for launching the Open Archives initiative is the belief that interoperability among archives is key to increasing their impact and establishing them as viable alternatives to the existing scholarly communication model.”

As an example of the kind of alternative model they had in mind Van de Sompel and Lagoze cited a recent proposal that had been made by three Caltech researchers.

Today eprint archives are more commonly known as open access repositories, and while OAI-PMH remains the standard for exposing repository metadata, the nature, scope and function of scholarly archives has broadened somewhat. As well as subject repositories like arXiv and PubMed Central, for instance, there are now thousands of institutional repositories. Importantly, these repositories have become the primary mechanism for providing green open access — i.e. making publicly-funded research papers freely available on the Internet. Currently OpenDOAR lists over 3,600 OA repositories.

Work in progress


Fifteen years later, however, the task embarked upon at Santa Fe still remains a work in progress. Not only has it proved hugely difficult to persuade many researchers to make use of repositories, but the full potential of networking them has yet to be realised, not least because many repositories do not attach complete and consistent metadata to the items posted in them, or they only provide the metadata for a document, not the document itself. As a consequence, locating and accessing content in OA repositories remains a hit and miss affair, and while many researchers now turn to Google and Google Scholar when looking for research papers, Google Scholar has not been as receptive to indexing repository collections as OA advocates had hoped.

For scholars, the difficulties associated with accessing papers in repositories are a continuing source of frustration. Meanwhile, critics of green OA argue that the severe shortage of content in repositories means that any hope of building an effective network of them is a lost cause anyway.

For their part, conscious that green OA poses a potential threat to their profits, publishers have responded to the growing calls for open access by offering pay-to-publish gold OA journals as an alternative.


It was against this background that in 2012 the Finch Committee concluded that in order for the UK to make an effective transition to OA “a clear policy direction should be set towards support for publication in open access or hybrid journals, funded by APCs, as the main vehicle for the publication of research.”

Explaining the decision to prioritise gold OA, Finch argued that repositories had failed to deliver on their promise. “Despite the best efforts of repository managers and librarians … rates of deposit and usage of published materials remain fairly low; and a number of issues will need to be addressed if institutional repositories are to fulfil a bigger and more effective role in the research communications landscape.”

For that reason, Finch added, repositories should in future be viewed as being merely “complementary to formal publishing, particularly in providing access to research data and to grey literature, and in digital preservation”.

The Finch Report proved highly controversial, particularly when Research Councils UK (RCUK) responded by introducing a new gold-preferred OA Policy conforming to its recommendations. Many OA advocates in particular felt betrayed.

But we need to ask: did Finch have a point?

We should not doubt that huge challenges remain in getting content into repositories. However, the whys and wherefores of this have been well rehearsed elsewhere, so we won’t dwell on them here.

Instead, let’s consider the current state of the repository infrastructure, particularly with regard to interoperability and discoverability. Why, for instance, do many repositories not expose adequate metadata?  Why do they sometimes provide just the metadata and not the full text? When will the sophisticated search functionality that researchers need become standard in repositories? Will it? And what new developments might help here? More generally, what does the future hold for the OA repository?

Investing for the long term


Who better to put these questions to than Kathleen Shearer, Executive Director of the Confederation of Open Access Repositories (COAR)? Launched in October 2009, COAR’s mission is to “enhance the visibility and application of research outputs through a global network of open access digital repositories” and its membership currently includes over 100 institutions from around the world.

Reading Shearer’s replies below one has to conclude that there is much still to be done. Scholars and scientists will therefore clearly need to be patient. And while new repositories are constantly being created, and existing ones improved (as are cross-repository search services like BASE), the truth is that if the vision articulated in New Mexico fifteen years ago is to be fully realised, the research community is going to have to invest a great deal more time, effort and money in developing its repositories.

But should it? Now that most if not all scholarly publishers offer gold OA is further investment in repositories justified?

Shearer believes it is — for two reasons. First, she says, wide-scale take up of green OA would contain publishers’ prices; second, the time has in any case come for the research community to take back control of the scholarly communication system, and repositories will be vital in doing that.

As Shearer puts it, “[T]he Green Road is key. We must collectively build and maintain a global system of repositories. It introduces competition into the system and will act as an important deterrent to arbitrary price increases by publishers.”

She adds, “It will also demonstrate the important role that institutions play in the stewardship of research outputs. To that end, institutions should devote more resources to their repository operations in order to improve repository services and increase the size of their collections.”

As I read it, the promise is that any investment made in OA repositories today will more than pay for itself in the long term.

The interview begins


RP:  Can you say who you are, where you are based and what role you play within COAR?

KS: I am the Executive Director of COAR and I am based in Montreal, Canada, although the COAR office is located in Göttingen, Germany. I have been working in the area of open access and digital repositories for about a dozen years now, mainly in the Canadian context as a consultant and a research associate with the Canadian Association of Research Libraries. In June 2013, I became the Executive Director of COAR.

RP: Briefly, what is COAR, how is it funded, and what is its purpose?

KS: COAR, the Confederation of Open Access Repositories, is an association of repository initiatives with an international membership.

We have over 100 members in 35 countries around the world. Our members come from a variety of communities including universities/libraries, research institutions, funding agencies, intergovernmental organizations and government departments — any organization that may have an interest in repository development and wants to be connected with the international community.

COAR’s mission is to raise the visibility of research outputs through a global network of repositories. We are active on two levels: (1) at the practical level, we support communities of practice around areas of importance for our members, mainly in terms of best practices, interoperability and monitoring trends in the repository landscape; and (2) at the strategic level, we aim to facilitate greater alignment of regional and national repository networks around the globe.

COAR is funded mainly through membership fees, although we receive in-kind support for our office space from the University of Göttingen and some partnership funding as well.

We are quite a light-weight organization with about 1.5 full time positions in total and an Executive Board chaired by Norbert Lossau, Vice-President of the University of Göttingen. Most of our activities are undertaken by the active participation of our members. 

RP: The mission of COAR, you said, is to “raise the visibility of research outputs through a global network of repositories”. I think it might help if we tried to clarify what this means in practice. In other words, what do we mean by repository here, and what role exactly do we expect that repository to play? Are we talking about a global network of institutional repositories, or does repository here encompass more than that (i.e. central subject-based repositories like PubMed Central and arXiv too, and perhaps other content management systems and databases?)

Likewise, should we assume the role of the repository remains as it was originally conceived — a tool to support green OA by providing a place where papers published in subscription journals can be self-archived in order to ensure that free copies are always available outside the subscription paywall?

Or do we assume that the repository can now also act as a publishing platform on which institutions can publish their own journals — as currently planned, for instance, by University College London?

Alternatively, perhaps the assumption is that today the repository should be viewed as little more than what the Finch Report assumed it to be: something “complementary to formal publishing, particularly in providing access to research data and to grey literature, and in digital preservation” (A model that assumes open access is provided by means of gold rather than green OA)?

KS: Repositories are evolving and play a number of roles. At their core, a ‘repository’ could be theoretically defined as a set of services that provide open access to research outputs (along the lines of Cliff Lynch’s original definition in 2003). However, in practice, repository services and infrastructures are diverse and there is a lot of overlap with other systems. Perhaps most significantly, practices and technologies are changing quickly, making it a challenge to concretely define their services. My feeling is that we need to be flexible in the way we conceptualize repositories.

In terms of COAR, we are a community brought together by a set of shared principles and common practices rather than by a narrowly delineated concept of repository. So yes, we would include disciplinary repositories and content management systems (if they provide open access to full text) in our global network.

In terms of a complement to formal publishing, I expect that traditional publishing will soon be going through some pretty big transitions, likely some very disruptive changes. I agree with Dominique Babini, Jean-Claude Guédon and others that we should aim for a basic, open, and interoperable system that is free to both access and contribute to. Value-added services by publishers and others can be built on top of this content.

One way of thinking about repositories is that they represent an institutional commitment to the stewardship of research outputs. In this sense, they address two important problems in the current system: sustainability and stewardship.

I believe institutions should assume greater responsibility for managing, providing access to, and preserving the content created through research. It will alleviate some of the inflationary aspects of scholarly publishing and enable us to have more influence on future directions. This was the traditional mission of libraries in the print world, which has been somewhat lost in the transition to digital content. How this plays out in terms of models will likely vary according to content type, discipline, and region.

Interoperability


RP: I would like to focus on the issue of interoperability. I am aware of a number of current initiatives devoted to getting institutional repositories to interact/interoperate, including DRIVER, DRIVER II, euroCRIS, OpenAIRE and no doubt there are others too. How do these various initiatives fit together (do they?), and why are there so many initiatives that — to the layperson at least — might seem to be duplicating effort?

KS: There are several initiatives that have evolved from different requirements, regions, and with differing aims.

DRIVER and DRIVER II were European Commission-funded projects to support the implementation of repositories in EU countries. The aim was to have repositories adopt common guidelines for organizing their content so they could be harvested and searched through the DRIVER search service. 

OpenAIRE has built upon the work of DRIVER to implement further standards that enable the European Commission to track the open access research output it funds. Each of these three projects required some level of interoperability between participating repositories.

There are similar initiatives in other regions, such as La Referencia in Latin America and SHARE in the US that will also require some level of interoperability across those repository networks.

COAR is a forum whereby all of these regional initiatives can work together to identify issues in common and, where appropriate, agree on standardized practices. COAR will be intensifying efforts in this area and has just launched an initiative to address some of the differences between repository networks that are evolving.

EuroCRIS is a European association that is looking at interoperability between research administrative systems. The objective of these systems is to manage and report on research activities. Unlike repositories, CRIS systems do not usually manage full text content.

We have seen in the last few years some merging between CRIS systems and repositories, with some repositories being integrated with CRIS's, or at least interoperability between repositories and CRIS. 

COAR has also been working with EuroCRIS to identify strategies for greater interoperability between research administration systems and repositories.

RP: The concept of networking repositories dates back at least to 1999, and the Santa Fe Convention. I believe it was in the wake of the Santa Fe meeting that the OAI-PMH protocol was developed. However, I assume that both the thinking and the technology have developed somewhat since then.

As I understand it, for instance, OAI-PMH was based on the principle that services would be developed to harvest metadata from repositories in order to aggregate their holdings and provide a centralised discovery service. I guess this assumed that records in repositories would consist of metadata but not the full text (so the goal presumably was to signal where papers were held, not to provide direct access to them).

I would think that the emphasis today is more on providing direct access to full-text documents not just their metadata. Briefly, therefore, can you say how thinking has developed since 1999, and how the technologies and protocols have changed to reflect this?

KS: OAI-PMH was developed on the principle that a service would harvest the metadata record that would then point the user back to the full text content in the repository. So in that sense it does facilitate access to the full text, but without having to aggregate the content into a central archive.

OAI-PMH is still the common denominator for metadata exposure in repositories and it remains standard practice for cross-repository search services to harvest metadata and then point the user back to the repository to access the full text. Full text harvesting is much more demanding, requiring large storage space to house the content in a central location, and there are other technical challenges attached to it.
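To illustrate this division of labour, here is a hedged sketch of a full metadata harvest: the harvester pages through a repository's records using the resumptionToken mechanism, and the dc:identifier field of each Dublin Core record typically carries the URL that points the user back to the item (and its full text, where deposited) in the source repository. The endpoint URL is again just an assumed example.

```python
import requests
import xml.etree.ElementTree as ET

# Assumed example endpoint; any OAI-PMH base URL works the same way.
BASE_URL = "http://export.arxiv.org/oai2"
NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def harvest(base_url):
    """Yield (title, link) pairs for every record the endpoint exposes."""
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    while True:
        root = ET.fromstring(requests.get(base_url, params=params).content)
        for record in root.findall(".//oai:record", NS):
            title = record.findtext(".//dc:title", namespaces=NS)
            # dc:identifier usually holds the landing-page URL, which is how
            # a harvested metadata record "points back" to the full text.
            link = record.findtext(".//dc:identifier", namespaces=NS)
            yield title, link
        # OAI-PMH pages its results: a resumptionToken means there is more.
        token = root.findtext(".//oai:resumptionToken", namespaces=NS)
        if not token:
            break
        params = {"verb": "ListRecords", "resumptionToken": token}

for title, link in harvest(BASE_URL):
    print(title, "->", link)
```

Note that only the metadata travels to the harvester; the documents themselves stay in the repositories, which is precisely why the quality and consistency of that metadata matters so much.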

The disadvantage of metadata harvesting is that the search services are based on the metadata supplied by the repositories, which isn't always comprehensive, complete or consistent. COAR aims to improve the current situation by identifying and encouraging the adoption of common standards and metadata globally. However, for better discoverability, and especially for other services such as text mining, using full text search is highly desirable.

In terms of discovery, repository managers have found that most users find the content in repositories through search engines such as Google and Google Scholar, not from metadata harvesting services or by directly searching the repository. Therefore, the repository community has put significant efforts into exposing their content to commercial search engines through various optimization techniques. 

Beyond discoverability, there are other areas of repository networking and interoperability, like content transfer, usage data, etc. where new technologies and standards/protocols have been created. COAR is a forum whereby interoperable practices can be agreed upon globally.

Full text


RP: You say that it remains standard practice for cross-repository search services to harvest metadata and then point back to the full text in the repository, and you said that COAR assumes OA repositories will “provide open access to full text”. This would seem to imply that an OA repository always now includes the full-text as well as the metadata (and indeed most people would presumably expect that of an OA repository).

However, not all records in OA repositories do provide access to the full-text, and many seem to offer little more than the bibliographic details. Even a poster child of the OA movement — Harvard’s DASH repository — has been criticised for not providing the full text (e.g. here). These criticisms were made a few years ago, but DASH does still today contain records without any full-text attached. Moreover, some do not even provide a link to the full-text (and DASH does not seem to have a Request Copy button). When I looked in DASH the other day, for instance, I found (at random) five examples of this (one, two, three, four, five).

I think this cannot be a consequence of publisher embargoes since the articles concerned date back as far as 1993, with the two most recent published five years ago (and in any case the Harvard OA Policies claim to moot publisher embargoes). Moreover, where in a couple of cases the DASH records do point to the full-text, this is a link to the publisher’s version, where the user is asked to pay for access ($35 in one case). This cannot be described as OA.

You may not want to comment specifically on DASH, but do you think it problematic when records in OA repositories do not always provide access to the full-text, and maybe don’t even link to a free copy of it? If so, what can COAR do, or what is it doing, to address the situation in concrete terms?

KS: Ideally, all records in the repository will have the full text attached. However, as you point out, this isn’t always the case. I’m not sure about the specific case of DASH, but this really speaks to the collection policy of the individual repository.

As I said earlier, more and more repositories are now being used to track research output. In that case the objective may be to collect information about all of the publications at the institution, regardless of whether they are open access or not. Still other repositories may be inputting metadata records without the full text as a strategy to encourage authors to upload their documents.

If we look at the OpenAIRE portal as an example, they are currently harvesting 8.4 million records from over 400 sources (mostly repositories, but also open access journal articles). Over 8.2 million of those records are open access. So, I believe that the vast majority of content in repositories is open access, with a small percentage of metadata-only records. The portion of open access, of course, will vary depending on the repository.

In my opinion, the most effective way to improve the proportion of full text in repositories is to continue to advocate for open access policies at funding agencies and institutions around the world. These are the levers that will have a real influence on the policies and practices of the individual repositories. More staffing and resources directed towards repository operations would also help.

RP: You said that rather than searching directly in repositories, or exploiting metadata harvesting services (like OAIster perhaps?), researchers tend to rely on search services like Google and Google Scholar for the discovery of scholarly content in repositories.

Does this mean that the repository community tends today to assume that the research community should rely on mainstream search services, rather than trying to build sophisticated repository search services itself?

If so, I am conscious that OA advocates frequently complain that Google is not supportive enough of their needs, and not as keen to index repository collections as they would like. Would you agree? What is the current situation with regard to mainstream search services like Google, Bing and Yahoo in terms of indexing repositories, and what future developments do you envisage that might improve the situation so far as searching repositories is concerned?

KS: It’s not really about what the repository community believes is the best solution, but rather a practical response to user behaviour.

It would be erroneous to assume all information seekers are the same. However, we do know that even for well-developed disciplinary services, such as PubMed Central and Medline, the majority of users access articles directly from commercial search engines like Google and Google Scholar.

According to my COAR colleague Eloy Rodrigues, Director of the University of Minho Documentation Services, most well developed institutional repositories have about 3/4 of their traffic coming from Google and other generic search engines. Repository managers take that as very positive sign of the visibility and accessibility of the content in the repository. 

In terms of mainstream search engines and Google Scholar there has been ongoing discussion about their efficacy in retrieving scholarly content. It really depends on if you are looking for something you know exists (i.e. you search the title or author’s name) or you are searching using key words.

As reported in an article published in the Online Journal of Public Health Informatics (Giustini and Boulos, 2013), “Google Scholar’s constantly-changing content, algorithms and database structure make it a poor choice for systematic reviews.”

If you are looking for a specific document in a repository and you know the title, the search engine will likely point to it. However, when searching by key words, content in repositories is not always high in the rankings.

The problem of visibility is likely even more acute for repositories with non-English content as there does seem to be a bias towards English language content in these search engines. 

This will remain an ongoing challenge for repositories as technology continues to change rapidly.

Inherent tension


RP: Certainly there seems to be some disappointment amongst researchers that 15 years after the Santa Fe meeting they still find it extremely difficult, if not impossible, to search effectively in and across OA repositories. I saw this view expressed most recently by Cambridge University chemist Peter Murray-Rust who tweeted, “IF libraries provide modern search I'd change my mind; but articles in repos are difficult to discover”. His conversation can be viewed here.

Does Murray-Rust have a point? What can you say to convince him that his needs will be met soon? Can you? If so, how will they be met?

KS: There is an inherent tension that exists in the repository community. On the one hand, we aim to make the deposit process as easy as possible so that creators will contribute (or repository staff costs are manageable); on the other hand, we want to assign good quality metadata (which takes time and effort) because we know it will enable greater interoperability and improve discoverability of content. So far, the former has been a greater priority.

There is some truth to Peter Murray-Rust’s comments in that complex search services, such as those developed for some discipline-based repositories, require quite a high level of curation, especially for non-textual material. Datasets, for example, need to be accompanied by fairly comprehensive metadata describing them and those metadata elements need to be standardized across each item.

It is a far greater challenge to develop complex searching across numerous repositories containing different disciplines, languages and formats. To facilitate advanced searching in this context, there needs to be interoperability across repositories. COAR has been working on this, and it is one of our top priorities; but it takes time to achieve across a very diverse repository landscape.
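
As an aside, it may help to make that interoperability concrete. The common denominator across repositories is OAI-PMH, the metadata-harvesting protocol that came out of the Santa Fe meeting, and it is the layer on which aggregating services can build. The Python sketch below shows, in minimal form, what harvesting Dublin Core records from a single repository looks like; the endpoint URL is hypothetical and the code is an illustrative sketch, not a description of how any particular service actually works.

```python
# Minimal sketch: harvesting Dublin Core metadata from a repository's
# OAI-PMH endpoint. The base URL below is hypothetical; real repositories
# publish their own endpoint, and aggregators harvest many of them.
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def harvest_dc_records(base_url, limit=10):
    """Yield (title, identifiers) pairs from an OAI-PMH ListRecords response."""
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    with urlopen(base_url + "?" + urlencode(params)) as response:
        tree = ET.parse(response)
    for count, record in enumerate(tree.iter(OAI + "record")):
        if count >= limit:
            break
        header = record.find(OAI + "header")
        if header is not None and header.get("status") == "deleted":
            continue  # skip tombstone records for withdrawn items
        titles = [t.text for t in record.iter(DC + "title") if t.text]
        identifiers = [i.text for i in record.iter(DC + "identifier") if i.text]
        yield (titles[0] if titles else "(untitled)", identifiers)

# Hypothetical endpoint; substitute a real repository's OAI-PMH base URL.
for title, identifiers in harvest_dc_records("https://repository.example.org/oai"):
    print(title, identifiers)
```

A real harvester would also follow the response’s resumptionToken to page through the full collection, and would normalise the harvested records before indexing them, which is where the metadata-quality issues described above start to bite.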

That being said, there are already a number of cross-repository search services, for example BASE, CORE, and OpenAIRE, which are working to improve the retrieval of content in repositories. They have advanced search options that allow you, for example, to limit your search to publication type, geographic location, publication year and so on. You can’t do all of these things in Google Scholar.

OpenAIRE also enables users to identify publications related to the projects through which they are funded. These services (and others) will continue to develop and will incorporate more sophisticated tools to improve discovery in the future.

Personally, I can envision a time, not too far in the future, when more complex search services are built on top of repository networks. What individual repositories should focus on, in my opinion, is ensuring that their content is open, can be indexed, and has the necessary metadata attached in order to facilitate the development of these services.

RP: From what you have said would it be accurate for me to conclude the following: Users tend to prefer using commercial search engines and Google Scholar for discovering research papers in repositories. However, this is not always the best approach.

We don’t yet know exactly what the role of the OA repository will be, nor what form it might eventually take (indeed, repositories will likely take a number of different forms, and play a variety of different roles).

For these reasons it is important that repository managers ensure their content is open, that it has appropriate metadata attached, and that it can be indexed. Doing this will provide sufficient flexibility for future developments.

Finally, we are still some years away from the point where researchers with sophisticated search needs can expect the level of discoverability that they want and need?

Have I understood correctly?

KS: Yes, you are for the most part correct in summarizing my opinion.

A couple of small clarifications: We know from repository managers that the majority of users are coming to repositories from commercial search engines and not through harvesting services or the search facility built into the repository; and we know from user studies that the starting point to find information for many researchers is through Google or Google Scholar.

As things stand, the content in repositories is not highly ranked in Google Scholar and, in Google itself, repositories are indexed alongside billions of other pages. So, no, this is not ideal for the discoverability of repository content, particularly for keyword or topic-based searching.

I note that in the early days of Google Scholar, the open access community advocated for search results to be tagged as open access (or not). Obviously we were not successful, but this would have enabled users to limit results to open access content, and it would certainly have been a boost for the visibility of repository content in this context.

I do believe the discoverability of repository content will improve greatly in the coming years. Refining the cross-repository search services, those that are based on harvested metadata, will depend on improving the standardization and comprehensiveness of metadata records. Technology will help with this: there are new, automated methods for assigning metadata, and repository software platforms can build in standard vocabularies and metadata elements.
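
To give a flavour of what such automation might look like, the toy sketch below maps the free-text keywords an author supplies at deposit time onto a small controlled vocabulary. The vocabulary and synonym table are invented purely for illustration; real platforms would draw on community-agreed standards, which is precisely the harder problem described next.

```python
# Toy illustration of automated metadata assignment: normalising an author's
# free-text keywords against a small controlled vocabulary. Both the
# vocabulary and the synonyms are invented for illustration only.
CONTROLLED_VOCABULARY = {
    "open access": {"oa", "open-access", "open access publishing"},
    "institutional repository": {"ir", "repository", "institutional repositories"},
    "metadata": {"meta-data", "descriptive metadata"},
}

def assign_subjects(free_keywords):
    """Map author-supplied keywords onto controlled subject terms."""
    assigned = set()
    for keyword in free_keywords:
        key = keyword.strip().lower()
        for term, synonyms in CONTROLLED_VOCABULARY.items():
            if key == term or key in synonyms:
                assigned.add(term)
    return sorted(assigned)

print(assign_subjects(["OA", "Institutional Repositories", "text mining"]))
# -> ['institutional repository', 'open access']
# "text mining" is not matched; a real system would flag it for a curator
# to review rather than silently dropping it, as this sketch does.
```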

The greater challenge is coming to an agreement about common terminologies and approaches across the entire repository community. COAR will play an important role by acting as a forum in which the repository community can make these kinds of collective decisions.

There will also likely be a number of services developed in the coming years to facilitate full-text searching through harvesting the content itself. According to Petr Knoth (Knowledge Media Institute, The Open University, UK), who has been doing research in this area through the CORE initiative referenced earlier, there are still a number of technical and legal barriers to full-text harvesting from repositories.
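
One of those technical barriers is easy to illustrate: the identifiers in a harvested Dublin Core record typically mix DOIs, landing pages and direct file links, so a harvester has to guess which, if any, point at the full text. The sketch below uses a deliberately naive heuristic and invented URLs, just to make the problem concrete; whether a located file may then legally be mined is a separate question again.

```python
# Illustrative only: picking out the identifiers in a harvested record that
# look like direct links to a full-text file. The URLs are invented and the
# heuristic is deliberately naive.
def likely_fulltext_links(identifiers):
    """Return identifiers that appear to point directly at a full-text file."""
    file_suffixes = (".pdf", ".ps", ".docx", ".xml")
    return [url for url in identifiers
            if url.lower().startswith("http") and url.lower().endswith(file_suffixes)]

record_identifiers = [
    "https://doi.org/10.1234/example",                               # DOI -> landing page
    "https://repository.example.org/handle/1/2345",                  # landing page, no file
    "https://repository.example.org/bitstream/1/2345/article.pdf",   # direct PDF link
]
print(likely_fulltext_links(record_identifiers))
# Only the last link is returned; the others need page scraping or richer
# metadata (e.g. explicit file links) before the full text can be indexed.
```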

However, in the coming years, I expect that the repository community will begin to address these barriers, especially the technical ones.

Again, I hope that COAR can play a role in developing solutions and disseminating best practices.

SHARE or CHORUS?


RP: You said (or at least implied) that repositories should be viewed as tools to enable the research community to “assume greater responsibility for managing, providing access and preserving the content created through research”. And you cited SHARE as an example of an initiative focussed on providing interoperability between repositories.

It is worth noting that SHARE is a response by librarians to the OSTP Memorandum, which directs US Federal agencies to develop plans to ensure that the published results of research they have funded are made OA. As such, SHARE could be viewed as a good example of how research institutions can try to take greater responsibility for scholarly communication, since it would put librarians in charge of managing access to papers released as a result of the OSTP Memorandum.

However, you will know that publishers have proposed an alternative model based on CHORUS. The aim of CHORUS is to ensure that it is publishers rather than librarians who manage access to these papers, and it demonstrates their wish to remain firmly in control of scholarly communication, even after research papers have been made OA.

How would you respond to someone who argued the following: Since the research community is finding it difficult to fill repositories (a point frequently made, not least by the Finch Report), and both difficult and time-consuming to create the necessary infrastructure to ensure repository content is optimally discoverable, might it not make more sense to outsource the task to publishers via initiatives like CHORUS? After all, CHORUS will deliver OA, and since publishers have greater resources they might be expected to undertake the task more effectively, and more quickly. Moreover, since it is they who publish the papers in the first place, they already have all the content in place.

KS: My major concern about CHORUS is that the publishing community would have too much control of the scholarly communication system. A number of large publishers have already demonstrated that they don’t support the principle of open access (remember PRISM).

Frankly, the interests of publishers often lie elsewhere: they may be motivated by things such as profit margins rather than the public good.

On the other hand, at the core of the mission of the university and the library is the advancement and dissemination of knowledge. It seems to me that the world’s collective knowledge created through research should rest in the hands of long-term actors whose raison d’être is to ensure that it is preserved and remains accessible to all.

CHORUS may seem like an appealing option for the US agencies at the moment, but the long-term implications are that the research community will have little control or ability to influence the future directions of scholarly communication if we take that route.

I’m also very concerned about the costs of such a system. Article-processing charges are already far too high for many researchers, especially in developing countries. The recent study of APCs undertaken by the Wellcome Trust and others found that the average per-article APC is $1,418 for open access publishers. I don’t believe this can scale globally, and it will ultimately disadvantage a large number of researchers who can’t afford to pay.

RP: You are right that speed and effectiveness are one thing, cost and ownership quite another. And as you suggested earlier, if the research community were to take greater responsibility for managing access to research it could hope to “alleviate some of the inflationary aspects of scholarly publishing and enable us to have more influence on future directions.”

This reminds me of what your colleague Eloy Rodrigues said to me last year. The future of scholarly communication, and its cost to the research community, he suggested, will depend on whether there is a “research-driven” transition to open access or a “publishing-driven” transition (in other words, whether the transition prioritises the needs of the research community or the needs of publishers). I would think that the competing SHARE and CHORUS initiatives are representative of these two approaches, and this suggests to me that in the coming years we will see publishers and librarians jostling for control of the scholarly communication system. And if that is right, the institutional repository will surely become a key battleground in the struggle.

Would you agree? And if it wants to ensure a “research-driven” transition to OA what should the wider research community be doing in your view?

KS: The choices that institutions make now about how they are going to invest in scholarly communications are absolutely critical.

First of all, I think the Green Road is key. We must collectively build and maintain a global system of repositories. It introduces competition into the system and will act as an important deterrent to arbitrary price increases by publishers.

It will also demonstrate the important role that institutions play in the stewardship of research outputs. To that end, institutions should devote more resources to their repository operations in order to improve repository services and increase the size of their collections.

Secondly, we should encourage and sponsor the development of new publishing models and value-added services that conform to our vision.

In terms of repositories, this would include better cross-repository discovery services, text mining capabilities, disciplinary views, and the development of overlay journals. Leslie Chan, for example, makes the case that the distinctions between “journal” and “repository” are increasingly blurred and that “mega-journals” are essentially repositories with overlay services.

We should be participating in projects that demonstrate the added value of repositories and repository networks across the research life cycle. Of course, this will require that we take some risks, which is a difficult case to make to (often) risk-averse organizations in hard economic times.

Global discussion


RP: You said that the way in which scholarly communication develops will vary “according to content type, discipline, and region.” Certainly, as OA develops we do appear to be seeing distinctive regional differences emerging. For instance, while the pay-to-publish gold OA model is being pushed heavily by the UK and The Netherlands, there is still more of a focus on green OA in North America. Meanwhile, in Africa and Latin America a repository-based publishing model currently appears to dominate.

As things stand I would expect to see the Global North increasingly move to a pay-to-publish gold OA model and the Global South to a free-to-publish/free-to-read repository-based publishing model similar to that pioneered by SciELO and AJOL. If that proves the case, however, will it be the best outcome in a global research environment?

When I spoke to Dominique Babini last year she said “[W]e owe ourselves a global discussion about the future of scholarly communication”. And she added, “Now that OA is here to stay we really need to sit down and think carefully about what kind of international system we want to create for communicating research, and what kind of evaluation systems we need, and we need to establish how we are going to share the costs of building these systems.”

This would seem to imply a more global approach than we are currently seeing develop. Would you agree with Babini? If so, who should organise the global discussion she has called for, and who should take part in it?

KS: Yes, I agree, and I would add that we should consider carefully the unintended consequences of adopting the various models.

“What kind of system do we want to create for communicating research?” I would propose that we want one that all researchers can access and contribute to, regardless of geographic location or discipline, and in which the knowledge created is assessed on its real value rather than on the region from which it emerges or the so-called “impact” of the journal in which it is published.

A dual system such as you describe above is not ideal, and I believe it will create inherent inequalities across the regions, especially if we continue to rely on impact measures that do not reflect the quality of the research but rather serve to prop up the traditional publishing system.

I believe there is a general lack of awareness in the “north” about the “southern” perspective and that we do need to ensure that the voices from the south are heard.

In terms of the global discussion, we already have a number of international forums for exchange: the funding agencies have the Global Research Council; libraries have organizations such as the SPARCs and IFLA; the repository community has COAR; and, publishers have their own venues.

UNESCO, and the governments represented there, has also become interested in open access. We could begin the global discussion by facilitating greater dialogue across these different stakeholder organizations.

One missing but very important link is the research community. It’s clear that many researchers have not been sufficiently engaged with the issues of open access to understand the nuances. For example, many researchers still equate open access with open access journals. So we need a mechanism for bringing those communities into the discussion as well.

It is illuminating to note that a parallel global discussion is currently occurring in the area of research data through the Research Data Alliance (RDA). It has been comparatively easy in the context of research data to bring together the key stakeholders — researchers, data repositories, institutions, and funding agencies — to adopt a common vision and agree on practical strategies for moving forward.

Why haven’t we been able to do that for publications? The essential difference is that for publications, there are some parties that have a significant financial interest in maintaining control of the system. This makes the global discussion far more challenging.

RP:  Thank you very much for taking the time to speak with me.