Tuesday 22 March 2011

Of Citizenship and Software

What is the role, purpose and point of software? Most people might assume that question to have an obvious answer: as the Berkman Center for Internet & Society puts it, software is “the programs or other ‘instructions’ that a computer needs to perform specific tasks. Examples of software include word processors, e-mail clients, web browsers, video games, spreadsheets, accounting tools and operating systems.”

What this description omits of course is human agency, which ultimately determines what software does, how it does it, the degree to which it supports or undermines the rules and laws of society, and how it encourages or discourages ordinary citizens to participate in the process of defining those rules and laws.

Lawrence Lessig came to understand the power of software to construct and shape our world when he was (briefly) “special master” during the Microsoft antitrust case. As he later put it to me, “[Y]ou can code software however you want, to produce whatever kind of product you want. And that capability is unique with software: you can't, for instance, say that an automobile will be something that is a transmission and a radio wrapped in one. But you can do exactly that with software, because software is so plastic.”

As such, he added, the Microsoft case was just “a particular example of a more general point about how you need to understand the way in which technology and policy interact.”

Yet, as more and more of our lives are organised and controlled by computers, and the role that software plays in society becomes increasingly central, most people still assume that the virtual world that opens up before them when they switch on the computer, and the choices they are offered onscreen, are how things are and ought to be — not a consequence of the way in which the underlying software has been coded.

Most of us now realise that there are bad guys in cyberspace — people who will try to steal your identity, or harass you in some way — but we too often fail to understand that the computer-generated world we enter, and what it does and does not allow us to do, has been specifically constructed to behave in that way. It is not the way things inevitably have to be, but the way someone has decided to code the underlying software. Importantly, how software is written also has a direct impact on our lives, and the world we inhabit off-screen.

For that reason the software choices that individuals, companies, organisations and governments make have important political, economic and social consequences for us all.

Yet when we attend a basic computer course we are instructed how to point and click, how to send and receive email and, perhaps, how to create a simple database — but we are not told why our choice of software is important, why it is imperative to insist that our governments and political administrators use open data formats, and why software raises important ethical issues.

Importantly, it is not made clear to us that we can challenge the way in which software is written and used.

Frustrated by the limits and inadequacies of most computer courses, Marco Fioretti, a former telecom engineer based in Rome who teaches about digital rights issues, has added to his repertoire a basic online course on digital citizenship.

The course, which is open to everybody, will not teach students how to code, or send email, says Fioretti, but how computers impact on and determine what happens in the real world and how, by means of active citizenship, we can help shape and influence that process, and create a more democratic world as a result. As he puts it, “Understanding digital issues is no longer optional for citizens. It's necessary whether one likes computers or not.”

Below Fioretti explains the background to his course, and the thinking that lies behind it. Unsurprisingly, we learn that Fioretti has been influenced by Richard Stallman and the Free Software Movement, although he is not an uncritical fan.


Marco Fioretti

RP: Can you give me some brief background information on yourself, and your qualifications for running a course on digital citizenship?

MF: I discovered Linux and the Free Software world in the mid-1990s, because I used UNIX in my daily work and I wanted to try something similar at home. I like Free Software because it's an extremely flexible computing environment. That's why about ten years ago I started to write tutorials and other articles for Linux Journal and other magazines.

Almost immediately, however, I became much more interested in the ethical side of Free Software rather than in the coding, and in all its implications for civil rights, politics, education, and social development.

For that reason I founded RULE in 2002 and Eleutheros in 2006, and it is why I joined the OpenDocument Fellowship and Digistan, and why I began giving the talks listed on the left-hand bar of my web site.

RP: Can you say something about RULE and Eleutheros, and their relevance to the course you have developed?

MF: As I said, what appealed to me about Free Software was its ethical advantages. But as the FSF [Free Software Foundation] is always saying, "free as in freedom" isn't the same thing as "free as in beer". Free software is not necessarily affordable for schools, charities etc., for instance — particularly when, as is the case with several Gnu/Linux distributions, it only runs decently on new powerful computers.

RP: In other words, even where code is licensed under the GNU General Public License (GPL), and software is made freely available to all, it does not mean that people can necessarily afford to use the programs built on it?

MF: Correct. So in 2002 we founded RULE (Run Up to Date Linux Everywhere) — with the aim of making modern Free Software easily installable and usable by non-geeks, even on very limited, refurbished hardware.

RP: And Eleutheros?

MF: In late 2005 I realized that some of the official Catholic documents about Social Communications seemed in some ways to be an endorsement of free/open digital technologies. This led to the Eleutheros Manifesto — which is an articulation of what I see as the strong affinities between Christianity, the philosophy of Free Software, and the adoption of Open Formats and Protocols.

For almost ten years I studied all these things, and wrote about them, and gave talks; and I volunteered as much as my full-time day job would allow me to. Then in 2008 the company I was working for was completely restructured, and I saw that as an opportunity to quit that kind of activity and do all these other things full time.

RP: What was your day job?

MF: I was an ASIC/FPGA designer for a telecom equipment manufacturer.

Anyway, soon after quitting — as a natural development — I also started to study as much as I could about Open Data. Amongst other things this led to a report on the topic that was published recently.
By the way, another qualification I feel I have for running a course on digital citizenship is that, although I am an advanced GNU/Linux user, and can write simple code and test hardware and software if I really have to (it was one of the things I did in my full-time job), I am not a programmer, or a programming enthusiast.

And as I do not have to write, hack or support software to make a living, I have no direct business interest in Free and Open Source Software (FOSS) and — much more importantly — I am not obsessed with source code.

Software freedom

RP: You are not a geek?

MF: I'm not a traditional software geek, no. And I feel that that gives me a much more balanced point of view than FSF people. While I think that almost everything they say is right, their way of communicating it — their language and style — isn't sufficient for a world where almost all computer users couldn't care less about programming.

RP: Who do you expect to take your course?

MF: The important point to note is that this is not a technical course: It is a citizenship course. As such, it will in my opinion be particularly useful for (university) students, parents and teachers: that is, people who are preparing themselves for the new world as it is being shaped by digital technologies, or people with a responsibility to prepare future generations for the new environment.

RP: So would it be accurate to call it a digital citizen course based on the ethical precepts and beliefs of the Free Software Movement?

MF: Certainly the course will run on Free Software; and the content will probably be put online with some form of Creative Commons licence (not that it really matters, because there will be little content: it will mainly consist of discussions and other activities that happen during the course).

But it would not be accurate to say that it is based on the ethical precepts and beliefs of the Free Software Movement, for several reasons.

While I appreciate the FSF "precepts and beliefs", for instance, I feel less strongly than they do, or differently, about software freedom. For me the first software freedom is the freedom to ignore what software others use — even if they are using proprietary software. I’d be happier if they were also using Free Software, but hey, I respect their decision — because it doesn't limit my freedom. There are exceptions of course, but that's a separate issue.

RP: So you don’t try to hector people into using Free Software in the way that some Free Software advocates do?

MF: I don’t. The important thing is that if you and I communicate by means of entirely open file formats and digital standards then so far as I am concerned we are operating in a “free as in freedom” way.

And if you think about it, that is what always happened with that very advanced communication technology known as “pen and paper”, and with the “free as in freedom” alphabet: if we were doing this interview in the old way, would you care what brand of pen I was using? Would you need to care?

Another reason why the course isn't really based on what the FSF fights for is that it deals with much more — including areas where software licenses are totally irrelevant: social networking for instance, where the problem with services like Facebook is what they do with your data, not whether they run free or proprietary software.

RP: Privacy issues for instance?

MF: Yes. The course also deals with democracy. And here I would note that you can build the perfect police state using only "free as in freedom" software, if citizens allow you to do so. Likewise we look at the effectiveness of Open Data in providing transparency in politics, which has nothing to do with software licenses. And so the list could go on.

Technology embedded in citizenship

RP: What specifically will those who take the course learn?

MF: The course provides answers to questions like:

= What's the real problem with e-voting?

= What is the impact of open digital standards on pollution and smart-grids that may lessen our dependence on fuels from foreign not-so-stable-and-peaceful countries?

= How much simpler and more transparent could government and politics be if we all used computers in the right way?

= Where should I stop when sharing my personal life online?

= How can open computing and data offer me better job opportunities — even if I am completely unable to program, and have no interest in it?

RP: You will nevertheless be dealing with basic technical issues like file formats I think. How will you teach about such things in ways that other basic computer skills courses do not? How exactly will the technology be embedded in the citizenship?

MF: First of all, thanks for the "technology embedded in citizenship" definition; I think I'm going to "steal" it as one of the points of the course. It's exactly one of the things I hope to help people to understand.

But yes, Part 1 of the course explains just what file formats and other concepts are, and how they are tied to citizenship. As an example of how I will deal with this you could take a look at slides 8-13 of my seminar on file formats.

By the way, another very large class of people that sorely and urgently need specific, and adequate, education of the kind provided by this course are people in the religious establishment — be they priests, rabbis, vicars, imams or whatever. Being Catholic I see plenty of examples of the need for this (plus some exceptions of course) in the Catholic Church — but the need is general within all religions.

RP: Why do you say that?

MF: My position is that religious people of whatever religion have even more reason to learn and practice what I teach in this course than ordinary citizens. Although I am not suggesting for a minute that you have to be religious in order to learn and practice certain things — like the fact that, as you very well put it, "technology is embedded in citizenship".

This is like not sneezing in the face of others so as not to spread disease, or not driving when drunk. Religious precepts add one extra dimension to it (you'll not go to Heaven if you violate these rules, because you harm other people), but you surely don't need that dimension to see why those rules are good. As long as you are told that such rules exist.

RP: I am not sure I fully understand: Are you saying that because Free Software has been defined as an ethical issue — and I think you are suggesting that the other things you cover on your course also have an ethical component — anyone who is religious has a special duty to study these things, and live by the ethics?

MF: First of all, I would turn it the other way around: anyone who is religious has already chosen as his or her special duty to follow some kind of ethical system in all spheres of life. If that system includes (as is normally the case, isn't it?) values like community, respect, reciprocal support and so on, then using Free Software, or at least Free as in Freedom file formats, is just a natural consequence; something that should happen even if Richard Stallman, GNU and the FSF didn't exist.

But all this comes before even looking at religion, and is much more general than Free Software. Religious people may have extra reasons to come and take this course, but the course is not about software and religion.

In general, digital technology creates or limits both habits (that is culture and visions of the world!) and possibilities for concrete action in countless, often unnoticed ways. Consequently, it gives us huge potential to do good or bad. Deciding what to make of it is an ethical decision. What people decide after the course according to their ethics is their responsibility. My goal is to make them realize that there are decisions to take, so they had better be conscious ones.

Let me stress again that this is not a technical course or, more exactly, the topics are not technical ones — because this has a direct, very practical implication: if this were a "software for dummies" course, parents, as well as students and teachers in unrelated fields like pedagogy, social/political/environmental sciences etc., would be justified in ignoring it. But the topics encompass the impact of software on human rights, education, equal opportunities, and privacy.

Political element

RP: Why do you think such a course is necessary?

MF: Because almost all the initiatives with similar names I've met so far don't deal with such questions.

RP: What are the limitations you see in other courses?

MF: Well, for example, they limit themselves to the purely technical, monkey-training level: how to send an email, click on an icon, push a button, and so on. Take a look, for instance, at the ECDL [European Computer Driving Licence] and you will see what I mean.

Non-technical courses, on the other hand, almost always only provide useful, but very superficial, "social, digital survival" skills. Things like "What to do in order not to be harassed online"; or, "This is how you tweet, or start your own blog, and why you should do it". In other words, they are always based on the assumption that the current mainstream view of "being digital" that we see on the TV is the only way.

RP: So they are more concerned with how one can be fashionable without getting into trouble, rather than making ethical choices?

MF: Honestly, I don't know. Whatever the reason, the common assumption is that students will uncritically accept that the first thing they see when they fire up a computer and go online is the only way things should be, without stopping to figure out what it really is, what it means, and whether there are better alternatives. They tell you what an iPad is, not how to use it in your interest and for the common good.

Finally, when these other courses do mention Free and Open Source Software (FOSS) their approach tends to be limited to the canonical and (in my view) very restricted Stallman/FSF point of view and attitude — although one important (UK-based) exception is www.theingots.org. However, it is aimed at younger students, and has a different focus.

RP: You said earlier that you deal with democracy; and you said that you became interested in “the ethical side of Free Software and in all its implications for civil rights, politics, education, social development.” How then would you define the ethical issues implicit in your course?

MF: Perhaps a good summary would be an expanded version of my standard slogan: today your own civil rights, the quality of your life, and the quality of life of everybody you care about, depend on how software is used around you. If you care about doing good to yourself, and to others — or at least about not making everybody's life more complicated than it could be — then (among other things, of course) you need to understand how software works. That is what the course is for.

RP: Essentially, you are saying that your course has a political element to it. How would you define the political element?

MF: Yes, there surely is a political element here. Otherwise I wouldn't insist so much that this is a "citizenship" course, instead of a technical one. However, the political vision that I propose and explain in this course is really limited to this concept: today software and digital technologies are, for all practical purposes, legislation. They make politics. Therefore they are too important to be left to specialists: the general criteria regulating (or not) their usage should be the product of conscious political decisions taken by all citizens, in discussion with their representatives. The point is that they can participate in those decisions without being specialists.

RP: Can you expand on that?

MF: Let me take as an example a very hot topic here in Italy, and in other European countries too right now: should water be privatized? Put that question to anyone in the street and they will have, and will give you, a strong opinion on the matter. Try then to suggest that their opinion (whatever it is) doesn't deserve any consideration because they aren't professional hydraulic engineers. How do you think they will respond to that?

The political component of the course I plan to run will be to explain to people how and why it is in their interest to approach questions like "should file formats be privatized" in the same way (even if they do prefer proprietary software), without delegating the decision to specialists.

RP: What you say about the role of software in modern society sounds very like the argument made by Lawrence Lessig in his book Code and Other Laws of Cyberspace. In that book Lessig argues that a network is free or restrictive depending on how the software that makes it work is coded, because the software architecture underlying the network influences people’s behaviours, and the values they adopt. Consequently, how software is written can affect free speech, privacy and other political freedoms in cyberspace. As Lessig put it to me when I interviewed him, “there is no such thing as the Net. What there is is a bunch of people who write code that defines the Net … [and] … it is not clear that commerce or the government — which is pretty significant in its role of defining the Net — would define a Net that protected the values that most people would feel should be central.”

However, I think you are taking this a step further, and saying that in the digital world more and more of our lives are now managed and controlled by software. As such, this is not just a question of how the Net is constructed, and what its architecture allows in terms of freedoms, but that how most software is coded will likely have political and social consequences.

MF: Yes, it is not just a question of how the Net is constructed. There is much more in our lives than the Net. E-voting, e-waste and long-term digital State Archives (that is, the memory of nations), for instance, are just three of the cases presented in the course that are heavily influenced by how software and digital standards are regulated — but in ways that depend very little, or not at all, on how the Net is constructed.

Benefits but also dangers

RP: I infer from what you say, and from reading the description of your course, that you believe the digital environment offers us significant benefits, but also dangers. 

MF: Sure. Consider that software and digitized data are the only things that are always present in all the other things and services we build and use today — from health to taxes; and from weapons and cars to everyday gadgets. In order to live together we need to manage these data, and today it is software that is used to generate, analyse, access and distribute all possible data and metadata.

RP: As such the issue is not just about how systems are constructed, but how data is managed and mingled, and indeed the degree of control that individuals have over their own data?

MF: It has always been so. Bernard Stiegler says that "the production of metadata has been the principal activity of those in power from the time of the proto-historical empires right up to today".

RP: Can you expand on that, and its relevance to your course? 

MF: Software and open digital standards can either help people to produce, share and use data (=power, freedom, self-reliance) together, or they can be used to restrict people's access to those same things in a much more sophisticated way than in the past, just because they are so ubiquitous.

This is where the real benefits and dangers of the digital revolution lie. But this awareness is still largely missing from people outside hacker circles and (some small parts of) academia. With this course I hope to help people to see the existence of these issues, and motivate them to know more, and to act on that knowledge.

RP: Some might be confused by your use of the term metadata here, particularly in connection with the Stiegler quote you cite. 

MF: Metadata means “data about data” or “data that explains the meaning and the relationships between other, apparently unrelated, data”. As an example, consider a spreadsheet containing a city budget: The actual numbers in that file are budget data. That file, however, also contains formulas that link those numbers to each other, and a timestamp of the exact moment when the file was last modified or printed. And the database that stores that file may keep a record of everybody who read or modified that file, and when and how they did so.

These are all metadata, and when it comes to accountability, and who is responsible for errors in the budget, they are much more important than the actual numbers. Moreover, this is not a theoretical example. In one of the chapters of my Open Data report I link to, and explain, a 2009 Court ruling in Arizona about a very similar situation.
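Fioretti's distinction between data and metadata can be sketched in a few lines of Python. The filename and figures below are invented purely for illustration: the file's contents are the data, while the facts the operating system records about the file (its size, when it was last modified) are metadata that exist alongside, and about, that data.

```python
import os
import time

# A hypothetical budget file, created here so the example is self-contained.
path = "city_budget.csv"
with open(path, "w") as f:
    f.write("department,amount\nroads,120000\nparks,45000\n")

# The *data*: the figures themselves, as stored in the file.
with open(path) as f:
    data = f.read()
print(data)

# Some of the *metadata*: facts about the file, not contained in its figures.
info = os.stat(path)
print("size in bytes:", info.st_size)
print("last modified:", time.ctime(info.st_mtime))
```

A real spreadsheet would carry far richer metadata than this (formulas, edit histories, author records), but the principle is the same: the metadata can say who touched the numbers and when, which is often more revealing than the numbers themselves.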

RP: I think people tend to assume that metadata are a product of the computer age but, as you indicate, their use is not new. Thinking again of the Stiegler quote, can you give me an example of the historical importance of metadata, and how they have been used to attain or assert power?

MF: The first examples that come to mind are associated with maps. A map is and has always been both (a) a declaration of power and a political statement (because it declares how the world is in the mind of the mapmaker — "Map or be mapped", as the Mapping Hacks book from O'Reilly puts it) and (b) a source or container of essential metadata.

In past centuries, every king would know that he would become rich if he could go get gold from the Incas, or spices from Indonesian islands. But in order to actually do it, to just attempt such a move, he would have to have the maps — that is, the knowledge of how to get safely from here to there and back.

So a map shows and contains metadata — not just the names of places and what they produce, but the way they are related on the Earth’s surface, their relative positions, and any obstacles in between. That's metadata about those places.

We should note that maps were always kept secret. This page says it well: maps have always been really helpful, but especially 400 years ago. Maps were a secret source of power then. If you had one, you understood where you were in the world and you realised where you were going.

Not everyone knew that. But the Dutch did. They knew where America, Africa and the Far East were. In the 17th century mapmakers in Holland ruled the world.

At the same time, note, smuggling maps out of Spain became punishable by death. It’s easy to see why: Spain was shipping unimaginable amounts of gold and silver out of the Americas. In 1628 when the Dutch hijacked a Spanish galleon loaded with Peruvian silver, Spain was almost bankrupt for a year.

Making the world a better place

RP: Ok, so access to metadata has always been important, and has always had economic, social and political connotations. What is different in the digital age perhaps is that so much more of what is important in life is determined by metadata, so we need open access and open data standards. What would you say were the specific promises and dangers for individuals in the new digital era?

MF: The promise of this era is disintermediation: you still delegate others to do some things for you (be they bankers, travel agents or MPs) but (if you know the right things) you can get by with a great deal fewer intermediaries, and you can control what's happening much better than in the past. The danger lies in continuing to delegate too much to others, just because it’s much easier now than before we had computers.

RP: So the course is not so much about how to protect your privacy online, or how to avoid identity theft, but a democratic message about the need for citizens to make their representatives more accountable, and to take more responsibility on themselves. Does it perhaps also imply some scepticism about representative democracy? 

MF: I may be wrong, but I suspect that I like representative democracy much more than lots of software geeks and other people interested in “digital citizenship” matters.

This may seem to contradict what I said about disintermediation, but I think Einstein got it right when he said “Make everything as simple as possible, but not simpler”. In a modern society there are too many details to have the time to decide everything directly, and many of those decisions require specific technical, uncommon skills. So representative democracy is OK for me, as long as there is much, much more transparency and efficiency in it than there is today.

Technically, this is possible only with digital technologies, which we already have. Politically, it won't happen unless the majority of people realize how those technologies work. This is another reason to attend the course. 

RP: As you indicated earlier, ensuring that politicians and others with power over our lives are transparent requires active citizenship. Is there a danger that most of us today have allowed ourselves to become too infantilised to take up the challenge? Do not most of us now prefer to let someone else take the decisions, and then blame them when things go wrong?

MF: That was already happening before the Internet. Surely, many people prefer to let others decide for them and then take the blame if needed. I'd imagine that we all behave that way, at least in certain moments or on certain issues (“should I trust my car mechanic and only buy original spare parts or not?”).

RP: What is your motivation for putting on this course?

MF: My motivation for introducing this course, and the other things I do, is to make a decent living by contributing to making the world a better place, in a way that I feel I can do well.

RP: Can you give me an example of how you would hope someone would do things differently after doing the course, and how doing it differently could benefit them directly?

MF: For example, people will hopefully come to realize why and how they (and their public administrations!) can generate much less e-waste — by, for instance, replacing their computers only when it's actually necessary, not when some software manufacturer decides to increase its dividends.

And I hope they will have the basic skills to understand whether the computer classes run by their children's school are providing useful or useless skills.

And I hope they will look around their hometown and see many ways in which open data and the smart use of computers can improve services and/or reduce local taxes — by asking their mayor or local representatives the right questions.

In practice

RP: How will the course work in practice? I am assuming it will all take place online, and will be highly interactive?

MF: Yes, the course is all done online, but it would certainly be possible to run special editions of the course in person.

Anyway, each unit starts with a short list of things to read and/or a short online slideshow to introduce the topic (for example, “What is e-waste, how bad exactly is it, and how is it dependent on personal software choices?”). Then there are multiple choice tests — not to give scores, but to allow everybody to personally verify how much they understood of the reading material.

At that point, the interesting (and, for the students, the most valuable) part starts: Discussing together, with my coordination and assistance, the case studies provided, with students trying to explain them in their own words, providing specific examples from their own experience and locales, and asking and getting advice from each other about what to do in practice to solve related problems.

It is this last part that makes the difference. A short time spent in this way, stimulating and helping each other, working on one thing at a time, can make people understand the issues much more clearly, and much more quickly, than a month spent reading random stuff found online on the same topics, or discussing it in some forum where (even ignoring trolls, flame wars and the like) there will surely be much less focus.

RP: I believe you plan to charge seven Euros per student. How many students do you envisage recruiting? Presumably you will want to get sufficient numbers to ensure that those Euros add up to a viable revenue stream? 

MF: I am doing my best to not envisage any number of students. If I think too much about that, I may just give up.

Somebody asked me "Why don't you just get a sponsor to pay you to prepare the material and then give the course for free?" The answer is very simple: it wouldn't be sustainable. This is not a "fire and forget" project like writing a book and letting it loose.

Collecting and organizing the material is something I would do anyway to write articles, and give talks etc. The real value of a course is direct interaction with the tutor and other students, not getting stuff to read. I have already tutored online courses, so I do have an idea of where the value is and what kind of work is involved.

At least in the medium/long term, in a good course the amount of work/effort/time spent by the tutor is directly proportional to the number of students, not to how many slides or papers the tutor had to prepare. Therefore, the compensation must be proportional to the number of students. Otherwise I can't afford it; it's as simple as that.

My other courses are much more expensive than 7 Euros per person, because they last longer, go into much more detail, and have much smaller potential audiences. I want to offer this course to the greatest possible number of people, even in developing countries, because it's the basic stuff that everybody should know. So it must be a per-person fee, but that has to be the lowest possible one. Time will tell if 7 Euros is sustainable. For now, it's a good start.

RP: What are the prerequisites to participate?

MF: The main prerequisite is time: since the course works as I described, it will do you very little good if you can't find the time to participate as much as possible in the discussions during the whole course. But the great advantage of an online course run in a non-real-time way is precisely that it is immensely easier for everybody to find that time. Apart from that, if you can use any browser and have any computer or smartphone connected to the Internet, you already have all the skills and equipment it takes to participate. This is an introductory course about the real impact of computers on our lives, not a class for some engineering degree.

RP: Thanks for your time, and good luck with the course!


Further details of Fioretti’s course can be found here.

Monday 14 March 2011

The Demise of the Big Deal?

Claudio Aspesi — an analyst based at the sell-side research firm Sanford Bernstein — predicts a difficult future for Reed Elsevier, particularly for its scholarly journal business. He also predicts the demise of the Big Deal, the business model in which scholarly publishers sell access to multiple journals by means of a single electronic subscription.

In a report published last year Aspesi warned that a combination of the global financial crisis and the rise of the Open Access (OA) movement would impact negatively on the revenues of scholarly publishers. Yet, he said, Reed Elsevier appeared to be "in denial on the magnitude of the issue potentially affecting scientific publishing".

A year later Aspesi appears even more gloomy. In his most recent report he has downgraded Reed Elsevier to “underperform”, and warns that the widely-used “Big Deal” arrangement is becoming “unsustainable in the current funding environment.”

While the Big Deal may have worked well as a solution for over a decade, he says, we can expect to see research libraries start cancelling their contracts — a development that will “lead to revenue and earnings decline”.

Speaking to me last week Aspesi repeated his belief that Reed Elsevier is in denial. “[I]f management has a Plan B, they have certainly kept it under wraps, and everything they have said supports my current view that they are in denial”, he told me.

Of course Reed Elsevier is not the only publisher threatened by the current climate, and clinging in desperation to the Big Deal. When I spoke to Springer CEO Derk Haank last year, for instance, he told me that it was “in the interests of everyone — publishers and librarians — to keep the Big Deal going.”

Unlike Elsevier, however, Springer embraced OA seven years ago.

Clearly these are difficult times. And the nub of the matter is money: with the number of research papers produced annually constantly rising, publishers expect to increase their prices each year to reflect that rise. And the Big Deal, they believe, is the best way of ensuring they can do this, while providing research institutions with access to the greatest number of journals.

Librarians, however, are adamant that prices must fall. To that end library organisations like Research Libraries UK (RLUK) — which represents the libraries of Russell Group universities — have begun organising public campaigns designed to pressure big publishers to end up-front payments, to allow them to pay in sterling, and to reduce their subscription fees by 15%.

The RLUK campaign is being led by Deborah Shorley, the director of Imperial College London Library. “[T]he fact is”, she said recently, “we don’t have money in the sector and we can’t afford to go on spending as we have.”

Meanwhile OA is both a threat and an opportunity for scholarly publishers, and Reed Elsevier should have embraced it by now, says Aspesi.

OA advocates want to see all papers arising from publicly-funded research made freely available on the Web. Green OA, or self-archiving, would see researchers doing this themselves, and so pose a significant threat to publishers’ revenues.

For that reason, suggests Aspesi, Elsevier should have pre-emptively experimented with Gold OA — or OA publishing — as Springer did. By levying a fee for publishing papers, rather than a subscription to read them, publishers can hope to reduce costs, and perhaps maintain their current profit levels as a result.

But there are no certainties in scholarly publishing today. And while many in the research community also believe that OA publishing offers the best long-term solution, it is far from evident that it will resolve the affordability problem. It could also lead to a decline in the quality of published research.

Either way, prices look set to fall over time. This would inevitably mean a decline in the revenues of scholarly publishers.

In short, both scholarly publishers and the research community are caught between a rock and a hard place. For researchers, suggests Aspesi, it may mean having to accept that they will be able to publish fewer papers in the future.

Aspesi explains why in the email interview below, conducted at the end of last week.


Claudio Aspesi

RP: Bloomberg reported in February that Elsevier’s most recent profits are up more than 60%, to $1.03bn. In a report published last week, however, you downgraded Elsevier to “underperform”. Why?

CA: There are several reasons, but it ultimately boils down to two. The stock market is a powerful discounting mechanism for future events: a company can be performing very well but have a share price that overvalues the future (or perform very badly but have a share price that undervalues the future).

In the case of Reed Elsevier, I believe that market expectations about the future of Elsevier are too optimistic. In addition, I have concerns about the future performance of other parts of the portfolio (like their US legal business) that also lead me to think the stock is overvalued.

RP: In your report you say that the days of The Big Deal are coming to an end. The death of the Big Deal has been long predicted. Why do you think we have finally reached the “crunch point” as you call it?

CA: I think there are three trends overlapping: a long-term unsustainable trend, a cyclical funding crisis, and a more tough-minded and analytical community of librarians.

Revenues for STM publishers have been rising faster than library budgets for many years, and librarians have had to cope with this discrepancy by cutting back their spending in other areas.

We can all speculate whether this could have continued indefinitely or not, but it does not really matter: the financial crisis has led to widespread cuts in library budgets, forcing research libraries to take a harder look at what they spend on serials.

Overlay onto the funding crisis the realisation that Big Deals forced librarians to take journals that nobody (or almost nobody) really accessed, and you have a perfect storm.

RP: The consensus is that Elsevier will see 4% growth in 2013. You predict only 2%, possibly lower. Is that correct?

CA: Yes. Please bear in mind that Elsevier also sells books and databases and electronic retrieval services, and I expect these businesses to prove more resilient over time. It is the Big Deal itself that looks increasingly under pressure.

The funding issue

RP: When I spoke to Springer CEO Derk Haank at the end of last year he described The Big Deal as the “best thing since sliced bread”. He added: “The truth is that it is in the interests of everyone — publishers and librarians — to keep the Big Deal going.” What is he not seeing that you see?

CA: The funding issue. Of course — from a publisher’s perspective — to be able to gain some revenues, any revenues, from journals which were not really read is terrific. We have seen the usage data of some universities and found that as many as two thirds of the titles in some Big Deals were accessed once a month or less.

I suspect that many librarians, if they run the numbers, will find that one third or less of the titles they acquire through Big Deals really matter to their user community.

But it was nice for librarians to be able to offer “everything” to their users. They did not consider that the Big Deals were depriving libraries of the funding needed for other activities.

In December 2010 Professor Darnton of Harvard University published an article in the New York Review of Books (The Library: Three Jeremiads) which captures perfectly the impact of publishers’ revenue growth on academic libraries and on the academic press.

RP: Haank pointed out to me that the number of research papers is growing at 6% to 7% per year. “Librarians need to accept that if they want access to a continually growing database, then costs will need to go up a little bit,” he said. “We try to accommodate our customers, but at a certain point, we will hit a wall.”

This seems to be the nub of the problem: Publishers are processing more and more papers each year, and so they expect to raise their prices. But the research community insists that it cannot afford to pay any more. Nevertheless, Haank assumes that eventually more money will be found. As he put it, “[O]ver time, I expect people will realise that scientists have to have sufficient funding to keep abreast of new developments.”

You, however, believe that publishers will simply have to accept that their revenues are going to fall, because there really is no more money?

CA: I have no doubt that — over time — adjustments would be made. But it remains to be seen if they need all the 2,200 to 2,400 journals that each of the largest publishers maintains today.

You know, my job is not to pass judgement on how people run their business or to decry capitalism, only to advise investors whether they should buy or sell stocks.

I can observe, however, that there is something unhealthy about an industry which has managed to alienate its customers to the point that their membership associations increasingly focus time and attention on how to overturn the industry structure. It is not a good thing to have your customers spend their time trying to put you out of business.

No Plan B?

RP: In a report you published last June you said that Elsevier was in denial about the situation. In this report you argue that even though the company was aware of the current problem as early as 2005, it ignored it. Is that correct? Why do you think they have no Plan B — as you put it?

CA: I dug out a presentation on Elsevier that Reed Elsevier organised for investors back in 2005. At the bottom of a slide titled “Attractive Growth Markets”, and dedicated to showing how global university and R&D funding were growing between 5 and 8% a year, was the sentence “Although library budgets have not kept pace: 1-3%”. If they thought they needed to address this issue, it did not show in the following years.

To be fair, the financial community also ignored that line. In my recent report, I quoted one of my own reports from December 2007 in which I wrote “Elsevier has still enjoyed a healthy underlying growth rate of 5%...We expect these levels of revenue growth to be sustained going forward”. I am just as guilty of ignoring it as everyone else.

RP: If you too were wrong in 2007 it is presumably possible that your assumptions are wrong today?

CA: Of course. I wish I could always be right.

But if management has a Plan B, they have certainly kept it under wraps, and everything they have said supports my current view that they are in denial. My report opens by quoting David Prosser, the Executive Director of Research Libraries UK, who told the Wall Street Journal back in November 2010 “We do not wish to cancel big deals, but we shall have no alternative unless the largest publishers substantially reduce their prices”.

Somehow, this issue has to be addressed; Elsevier may be in a better position than me to answer why they are not.

RP: I wonder if Elsevier might be stuck on the horns of a dilemma: They have to keep delivering profits for their shareholders, and this may be restricting their ability to do the right thing for their long-term future.

CA: Again, I think it would be interesting to know what the top management of Elsevier thinks is the right thing for their long-term future. More of the same? Hope that funding returns to libraries by the end of 2011 so that the storm blows over? Go back to the 1-3% library funding growth that was already putting publisher revenue increases on a collision course with library budgets?

In Haank’s own words “We will hit a wall”. Does Elsevier think that walls stand only on the path of other publishers?

Open Access

RP: In September last year Elsevier launched its first Gold OA journal, The International Journal of Surgery Case Reports. Was that a good move on their part? Should they be doing more of that?

CA: Excellent companies experiment and try new concepts all the time.

RP: Does Open Access publishing offer a solution to the current difficulties confronting scholarly publishers? Some argue, for instance, that OA could provide them with the same revenues as they currently get from the Big Deal. Moreover, since some/much of the costs would be met by research funders, rather than libraries, new money would flow into the system?

CA: I have asked myself that question several times. Publishers talk about tapping the larger revenue streams that come from the funding of science, and Open Access would seem a logical way to do that.

I think what holds them back is the fear that the transition will be long and messy, with many researchers unable to secure the funding to pay for publication fees. Moreover, some journals with very high impact factors probably reject so many submissions that the publication fees would have to be absurdly high.

RP: In your earlier report you said that the real threat comes from self-archiving. Has that threat grown or eased in the last year would you say?

CA: It is remarkable how slowly the world of academic publishing changes from year to year. Just to show examples from a business we are all familiar with, at least to some extent, 12 months ago the iPad did not exist, Nokia was a well-liked mobile handset manufacturer, and so on; a year later, everyone talks about tablets and apps and Android.

Somehow, the academic community moves at a glacial pace. It is no wonder that investors decided that Open Access will never happen: they are used to seeing most activities and businesses change constantly.

RP: Can I push you on whether you see self-archiving as a greater or lesser threat than a year ago. Essentially we are talking about the growth of self-archiving mandates.

CA: Even on mandates we have seen little progress. There have been several new mandates from both public and private institutions, but there is such a patchwork of mandates (and little transparency on the compliance) that I doubt there is a single research library that has changed its buying patterns to reflect the new or the cumulated mandates.

An aggressive US Federal Government mandate could change that, both because it would affect a significant amount of research and because the rest of the world would likely feel pressured to follow suit. Even so, if the embargo of copyrighted material was long enough, the impact would be negligible.

RP: What are the implications of the current situation for the research community?

CA: If the Big Deal goes away altogether, fewer journals will be sustainable, which means that less research will be published. This headline sounds threatening for the research community, until you ask yourself how much of the research which is being published today is actually read. My guess is that if fewer subscription journals are published, something else will take their place, probably a combination of Open Access journals and self-archiving repositories.

If, on the other hand, only a meaningful portion of the Big Deals are discontinued, publishers will be faced with the dilemma of whether or not to cull marginal journals, since those journals support the pricing of the Big Deals for the portion of the market that would still buy them. My guess is that truly marginal titles would be discontinued, but it would be difficult to cut most.

RP: As you say, most articles are not read, and so librarians are currently paying for products that their patrons don’t use. One could perhaps argue that in an OA environment this would not matter since the customer becomes the author, not the reader, and authors have to publish to get promotion and tenure. As such, reading is not as important as publishing, and given the publish-or-perish pressures they are under one might assume that researchers would inevitably find the money to publish, even if it meant taking it from their own pockets. If correct, this suggests that publishers could preserve their revenues by embracing Gold OA?

CA: I am always puzzled by the hostility of the leading publishers towards Open Access. As you point out, there is no reason why revenues, over time, could not be roughly equal for the two models. I suspect the hostility derives from the need to operate in a period of transition that is difficult to manage. The fog of change can be daunting.

RP: Some in the research community warn of the danger of a catastrophic collapse of the scholarly communication system. Might they have a point?

CA: Yes and no. It is clear that if 15 to 30% of the academic publishing industry revenues were to evaporate over a three to five year period, the industry would have to slim down substantially.

Lower revenues would lead to fewer articles being published, and probably lower profits. Lower profits and dimmer prospects for profit growth may in turn lead to less investment in innovation as investors would demand that less capital is allocated to an industry with less future profit growth. Does this mean that there would be no innovation? Not necessarily.

Again, I suspect that other models could emerge to fill the space: Open Access journals, repositories, and perhaps some we cannot even imagine today.

A bubble

RP: So how would you characterise the current situation with regard to scholarly journal publishing?

CA: I will use an analogy that may be imperfect, but which contains some lessons. For a very long time, the music industry sold to consumers more music than consumers really wanted. It did so by forcing consumers to buy albums even when they may have just wanted one or two songs; in fact, it killed the single because it was not profitable enough, in spite of the fact it had always been very popular with consumers because it was cheap and it contained the music they wanted.

Then consumers figured out they could rip music from CDs, put it on a hard drive, and burn it on a blank CD. If their friends did not have the tracks they wanted, they could even go to peer-to-peer sites and get all the music they wanted, and — even better — it was free.

Physical piracy and peer-to-peer distribution of music did lead to the catastrophic collapse of the music industry in many countries because the revenue base shrank to the point that it became impossible to sign up artists and produce albums. But the music industry is still in existence, albeit on a much smaller scale, and an endless stream of innovators is trying to find ways to revive it with all kinds of new consumer propositions.

Perhaps there is too much research being published today, and the value of the long tail of research is probably modest. This industry benefitted — just like many others — from a bubble, and that bubble has now burst. But the basic demand for dissemination and retrieval of important scientific and medical research is still there, and I am confident it will continue to be fulfilled.

RP: Do you still think that Reed Elsevier should go for a “progressive divestiture”, as you suggested last year?

CA: It all depends on the value of the sum of its parts at any given time. Last time we calculated it, back in January, we decided there was a 15/20% upside to the share price — meaningful but not staggering. I can think of companies in my coverage where the upside would be as much as 50 to 60%. In any case, Reed Elsevier management seems to show no inclination to pursue this path.

RP: Thank you for your time.

Claudio Aspesi’s latest report can be accessed here.

Tuesday 8 March 2011

PLoS ONE, Open Access, and the Future of Scholarly Publishing: Response from PLoS

Last year a researcher drew my attention to a row that had erupted over an article published in the Open Access journal PLoS ONE. Believing that the row raised some broader questions about PLoS ONE, and about Open Access, I sent a list of questions to PLoS. The publisher eventually declined to answer them, sending over a statement instead. I nevertheless decided to write something, but sent a copy of what I had written to PLoS before publishing it. And I invited PLoS to comment on what I had written. Below is the publisher’s response:

We were disappointed to see Richard Poynder’s current article about PLoS, but grateful that Richard sent us a draft, so that we could prepare a brief response.

Richard is absolutely correct about one thing – we at PLoS are really committed to open access, and we are doing our absolute best to inspire a broader transformation in scientific communication. We make no apology for that. We expect to be watched and scrutinized and indeed have been the subject of some criticism over the years, but not at the length or with the amount of negativity that we see in Richard’s essay.

Much of the article focuses on PLoS ONE and Richard uses some selected examples (about 5 of them) from the more than 17,000 peer-reviewed articles that we’ve published in PLoS ONE to draw much broader conclusions about the quality of its content. Although we would be the first to agree that PLoS ONE isn’t perfect, neither is any journal, as Richard points out – although not until around 30 pages into the article. But, just to quote one statistic, is it not more striking that of the 4400 articles published in PLoS ONE in 2009 around 55% of them have been cited 3 or more times (Scopus data)? The evidence points to the fact that PLoS ONE is attracting a vast amount of high-quality content.

PLoS ONE is attempting to challenge the conventional model of a journal. The peer review criteria for PLoS ONE are focused on rigour, ethical conduct and proper reporting. Reviewers and editors are not asked to judge work on the basis of its potential impact. Our argument is that judgments about impact and relevance can be left (and might be best left) until after publication, and this argument is clearly resonating with the tens of thousands of researchers who work with and support PLoS ONE as authors, reviewers or editors. It’s now also resonating with many other non-profit and for-profit publishers who are exploring the same model.

We do not argue that the PLoS ONE approach is the only way to publish research, and indeed we view PLoS ONE as just one aspect of a much more fundamental transformation of scholarly communication.

Another aspect of that transformation is in the assessment and organization of research findings, which is currently done using conventional journals. That’s why we have launched article-level metrics and PLoS Hubs as new and alternative approaches to post-publication evaluation. There will be much more to come from PLoS and many other innovators.

At several points, Richard’s article uses quotes from staff, press releases and so on that are now several years old and misses the point that much has changed even in the short few years since PLoS ONE launched. We are learning all the time from PLoS ONE. His frequent quotes from PLoS staff also show that we’ve answered many of his questions (including some less than friendly ones) over the years.

Nevertheless, he places great emphasis on the fact that we declined to answer a set of more than 20 detailed and complex questions about general aspects of PLoS ONE, as a follow up to a series of exchanges about the peer review process on a particular PLoS ONE article about which there was some disagreement. Indeed we posted a comment to try and clarify the issues in light of Richard’s questions, and comments from researchers. We were surprised by the number and wide-ranging nature of Richard’s subsequent questions about PLoS ONE, and chose not to answer them because we felt that the issues surrounding the PLoS ONE article were closed. If Richard had signaled his intention to write a lengthy article about the history and status of PLoS at the outset of the exchange, our response might have been rather different.

But the more significant point is that PLoS ONE has evolved since its launch. We did originally place a lot of emphasis on ‘commenting’ and ‘rating’ as tools for post-publication assessment, but we rapidly realized that much commentary and other activity happens elsewhere. If we could capture this activity and add it to the PLoS articles (in all our journals), that could be a powerful approach to post-publication assessment, and could also be used to filter and organize content. Thus, the article-level metrics project was born. The PLoS ONE editorial and publishing processes are also under constant review and revision as the journal’s size and complexity has grown, and we post some updated general information about these processes on the PLoS ONE web site.

Another theme in Richard’s article is whether PLoS ONE represents value for money. PLoS Journals are an ecosystem and they all contribute to the whole, both financially and to PLoS’ reputation and brand. Looked at in isolation, PLoS ONE, as well as the Community Journals (PLoS Genetics, PLoS Computational Biology, PLoS Pathogens and PLoS Neglected Tropical Diseases), all make a positive financial contribution to PLoS. They help to support PLoS Biology and PLoS Medicine, as well as the development of the journal websites and other important initiatives such as article-level metrics, PLoS Hubs, PLoS Currents and our work on advocacy. From the PLoS authors’ perspective, wherever they publish in PLoS their work will reach anyone with an interest in it, and their work will be stamped with a brand that is associated with social change, innovation and quality. There’s much more to value than the direct costs of publishing a single article.

And a final point on value. Publishing in the conventional system is estimated to cost the academy around $4500 per article. What PLoS (and for that matter BioMed Central, Hindawi, Co-Action, Copernicus and other successful open-access publishers) is showing is that high-quality publishing can be supported by publication fees that are substantially less than the costs of the conventional system.

There is a revolution in the making, and despite the wealth of support that we are seeing, it won’t be comfortable for everyone. We need constructive criticism, but also some optimism and creativity to make it work. There are grounds for hope that we’ve moved beyond the antagonism that has characterized many discussions around open access. There really is a lot to celebrate.

The article can be accessed here. 

**** UPDATE FEBRUARY 2012: AN INTERVIEW WITH PUBLIC LIBRARY OF SCIENCE CO-FOUNDER MICHAEL EISEN IS NOW AVAILABLE HERE ****

Monday 7 March 2011

PLoS ONE, Open Access, and the Future of Scholarly Publishing

Open Access (OA) advocates argue that PLoS ONE is now the largest scholarly journal in the world. Its parent organisation Public Library of Science (PLoS) was co-founded in 2001 by Nobel Laureate Harold Varmus. What does the history of PLoS tell us about the development of PLoS ONE? What does the success of PLoS ONE tell us about OA? And what does the current rush by other publishers to clone PLoS ONE tell us about the future of scholarly communication?

Our story begins in 1998, in a coffee shop located on the corner of Cole and Parnassus in San Francisco. It was here, Harold Varmus reports, that the seeds of PLoS were sown, during a seminal conversation he had with colleague Patrick Brown. Only at that point did Varmus realise what a mess scholarly communication was in. Until then, he says, he had been “an innocent person who went along with the system as it existed”.

Enlightenment began when Brown pointed out to Varmus that when scientists publish their papers they routinely (and without payment) assign ownership in them to the publisher. Publishers then lock the papers behind a paywall and charge other researchers a toll (subscription) to read them, thereby restricting the number of potential readers.

Since scientists crave readers (and the consequent “impact”) above all else, Brown reminded Varmus, the current system is illogical, counterproductive, and unfair to the research community. While it may have been necessary to enter into this Faustian bargain with publishers in a print environment (since it was the only way to get published, and print inevitably restricts readership), Brown added, it is no longer necessary in an online world — where the only barriers to the free-flow of information are artificial ones.

Physicists, Brown said, have overcome this “access” problem by posting preprints of all their papers on a web-based server called arXiv. Created by Paul Ginsparg in 1991, arXiv allows physical scientists to ensure that their work is freely available to all. “Should not the biomedical sciences be doing something similar?” Brown asked Varmus.

It was doubtless no accident that Brown — who had previously worked with the Nobel Laureate — chose Varmus as his audience for a lecture on scholarly publishing: at the time Varmus was director of the National Institutes of Health (NIH) — the largest source of funding for medical research in the world. He was, therefore, ideally placed to spearhead the revolution that Brown believed was necessary.

Fortunately for the open access movement (as it later became known) Varmus immediately grasped the nature of the problem — aided perhaps by some residual Zen wisdom emanating from the walls of the coffee shop they were sitting in, which had once been the Tassajara Bakery. Varmus emerged from the café persuaded that it would be a good thing if publicly-funded research could be freed from the publishers’ digital padlocks. And he went straight back to the NIH to consult with colleagues to that end.

Again fortuitously, one of the first people Varmus broached the topic with was David Lipman — director of the NIH-based National Center for Biotechnology Information (NCBI). NCBI was home to the OA sequence database GenBank, and Lipman was an enthusiastic supporter of the notion that research should be freely available on the Web. By now Varmus’ conversion was complete.

This conversion was to see Varmus embark on a journey that would lead to the founding of a new publisher called Public Library of Science, the launch of two prestigious OA journals (PLoS Biology and PLoS Medicine), and subsequently to the creation of what OA advocates maintain is now the largest scholarly journal in the world — PLoS ONE.

As we shall see, Varmus’ journey was to prove no walk in the park, and some believe his project lost its bearings on the way. Rather than providing a solution, they argue, PLoS may have become part of the problem.

Certainly PLoS ONE has proved controversial. This became evident to me last year, when a researcher drew my attention to a row that had erupted over a paper the journal had published on “wind setdown”.

Even some of the journal’s own academic editors appeared to be of the view that the paper should not have been published (in its current form at least). As the row appeared to raise questions about PLoS ONE’s review process — and about PLoS ONE more broadly — I contacted PLoS ONE executive editor Damian Pattinson.

The response I got served only to pique my interest: While Pattinson invited me to send over a list of questions, I subsequently received an email from PLoS ONE publisher Peter Binfield informing me that it had been decided not to answer my questions after all. 

To read on please click here (for a long PDF file). 

The PDF includes a response from PLoS. I will also be publishing the response as a separate post. (Now available here).
 
This article was cited in a 2011 report on peer review in scientific publications by the British House of Commons Science and Technology Committee.


**** UPDATE FEBRUARY 2012: AN INTERVIEW WITH PUBLIC LIBRARY OF SCIENCE CO-FOUNDER MICHAEL EISEN IS NOW AVAILABLE HERE ****