3

Sustaining the ‘Great Conversation’: the future of scholarly and scientific journals

Jean-Claude Guédon

Abstract:

This chapter sets the journals business in the larger context of scientific practice. Its starting point is the history of scientific journals up until the arrival of the Internet. The chapter discusses the work of Eugene Garfield and his ‘law of concentration’, identifying what he considered to be the ‘core’ of scientific research through citation analysis. The chapter goes on to explore the potential for a paradigm change in journal publishing, both in its business models and its modes of analysis of the significance of scholarly work. Central to these transformations has been the rise of a range of models of open access publishing.

Key words

scientific knowledge

history of scientific journals

Eugene Garfield

commercial journals publishing

open access

Introduction: the ‘Great Conversation’ of science

The quest for knowledge is less concerned with truth than with the ways in which understanding of reality is expressed. Religions may claim definitively to know the truth; more modestly, knowledge builds various approximations of reality. It does so gradually, endlessly, relentlessly. The quest for discovery is a journey documented by a collective travel log of observations and experimental results that form what is called the ‘scientific record’. Despite Koyré’s opinion that the history of science reflects a long and difficult ‘itinerarium mentis ad veritatem’,1 ‘truth’ can be no more than the tantalizing motive for doing science; it is not its essence. It would be better to say ‘itinerarium mentis ad realitatem’.

If scientists and scholars are not expected to tell the truth of the world, they are expected to act truthfully, i.e., honestly. Scientists simply do not know the truth, and will never know it. Instead, they weave ever more refined and complex forms of discourse, even inventing special languages such as various branches of mathematics, to conceptualize reality. And they do so in a very special way, by proposing their interpretation of reality as a challenge to their peers. If the proposed description stands the test of peer criticism it lives on, endlessly enriched by new levels of work seeking finer details about reality, until it finally bumps into some sort of irreconcilable divorce between the kind of story woven, sometimes for centuries, and some new and startling experimental result or some new and unexpected observation that simply does not fit. If science periodically undergoes deep cognitive revolutions, it is precisely because it does not express the truth. Once in a while, the accepted ways to account for reality need to be abandoned, and new ways of thinking have to be adopted. This is what happened with Newtonian theories, first of optics and later of gravity: his theory of light as particles was swept away by the wave theory of light that Augustin-Jean Fresnel inaugurated at the beginning of the nineteenth century, a little more than a century after Newton’s Opticks first appeared;2 universal attraction and classical mechanics persisted for more than two centuries, but were eventually replaced by Einstein’s theory of relativity.

That scientific knowledge should evolve in a discontinuous manner is an issue that Thomas Kuhn did much to document in The Structure of Scientific Revolutions.3 His thesis startled many when it first appeared, yet it is not particularly surprising if we accept the point of view that to come closer to reality, science sometimes has to change its discursive and conceptual strategies. At the same time, these cognitive and discursive discontinuities rest on a deep stability found in the behaviour of scientists: they always try to follow the scientific method as best they can; they try to be as logical as possible; finally, to increase the robustness of the process, they submit the results of their work to the world, offering them to the criticism of the best minds humanity can produce. Across the centuries and the continents, science has striven to evolve, maintain and grow a ‘Great Conversation’ structured by strict methods so as to produce descriptions and interpretations of reality that are taken to be valid, until refuted. Once raised to that level, duly recorded scientific statements can be used in further scientific work (the usual metaphor here, following Newton, is ‘standing on the shoulders of giants’). This is how the ‘Great Conversation’ of science works across space and time, somewhat like a planetary seminar room where voices are heard from everywhere, and from any past moment. At least this is the ideal and utopian vision.

In the eighteenth century, most of modern science was concentrated in academies and learned societies, with some activities, generally related to medicine, taking place in universities. Together, these institutions supported the ‘Great Conversation’ and great care went into recording and preserving the results of this virtual debate for the benefit of generations to come. Scientific knowledge is knowledge haunted by a kind of jurisprudence: new elements of knowledge must position themselves with regard to the past, either by building on it or by refuting some part of it while proposing another way to reflect what is understood of reality.

The tasks associated with recording, validating, preserving and retrieving scientific results have varied widely with time, but the intent behind these efforts has remained remarkably steady since at least the seventeenth century in Europe: it is to make the ‘Great Conversation’ as rich and efficient as possible. Occasionally, technological changes such as the printing press have profoundly transformed the context within which the ‘Great Conversation’ takes place, but they have not modified the relentless effort to approach reality according to the rules and methods that set scientific and scholarly knowledge apart. Nowadays, the transition to a digital universe of information and communication is deeply affecting the parameters of scientific communication, but we can safely predict that, like its predecessors, this transition will reaffirm the broad tenets of the ‘Great Conversation’ rather than challenge them. Through these tenets, and for centuries now, scholars and scientists have developed ways to create knowledge in a distributed way. In fact, science is the greatest monument to distributed intelligence, and the ‘Great Conversation’ of science is probably the crucial and strategic element that makes scientific knowledge possible.

A bit of history

Where journals came from

Natural philosophers, as scientists were first known, began to network in a significant way when the transmission of messages from one town to another became reliable. The advent of postal services in Europe led to scholars corresponding with each other. However, as the number of scholars grew, the epistolary model grew inefficient and cumbersome. Sending a letter to just one individual was hardly a good way to publish, i.e., make an idea ‘public’. Therefore, checking what had been done before was very difficult, and so was demonstrating one’s paternity claim to a discovery. In short, the peculiar intellectual agon that characterizes science was still poorly organized, and even poorly defined, in the early seventeenth century, limiting it to small circles driven by local luminaries.

Moving this epistolary intercourse to print proved crucial for a number of reasons: it allowed the centralization of the ‘letters’ inside a few periodic publications, and it made it possible to reach a much larger number of people. But print achieved more, albeit unwittingly: the names of the printing presses could act as signposts pointing to the quality and value of scientific work, and, as a result, they contributed to the reputation of authors. At the same time, the existence of relatively large numbers of copies helped archive and preserve past works. Book collections and libraries grew to form a tangible and material memory that could be exploited. The ‘jurisprudence’ of science alluded to earlier had simply grown more effective.

How journals evolved before the Internet

The first scientific journals were sponsored by academies or were commercial, a situation that remained stable until scientific societies began to multiply in the nineteenth century. At that time, societies began to set up their own journals alongside the older academic and commercial journals. They often started as a way of providing a publishing outlet to scientists who had difficulties being accepted or even noticed by older, more established publications. The ‘Great Conversation’, as this phase of development shows, grows through a kind of sedimentation process whereby each layer corresponds to a new type of publishing format: first came books, followed by periodicals, then bibliographies, and finally bibliographies of bibliographies.

Commercial publishing continued alongside society and academic periodicals, but it did so in a somewhat discreet and muted manner. Indeed, most commercial publishers did not see much profit in publishing scientific journals, but doing so conferred a certain amount of prestige on their business; it also helped identify potentially interesting authors who could then be invited to write books. Indeed, in the nineteenth century, many scientists valued publishing books more than articles: Darwin offers a good example of this attitude. Nowadays, many scholars in the humanities and social sciences continue to favour monographs over journal articles, essentially because they carry on the traditional ways of the ‘Great Conversation’, where a scientist, by virtue of his or her ability to present a synthetic work of some scale, reaffirms his or her claim as a scientific author, and not only as the faithful observer or the subtle interpreter of a modest slice of reality.

The twentieth century witnessed a number of important changes. In a number of disciplines such as physics or chemistry, articles definitely became the preferred publishing format. As a result, the strategic importance of journals grew. Also, after the Second World War, the pace of scientific and technical research accelerated, in part because of the Cold War. In parallel, research became more interdisciplinary. These two factors led to a bibliographical crisis that induced a number of transformations: for example, Chemical Abstracts, the bibliographic service of the American Chemical Society (now known as the Chemical Abstracts Service), had to completely overhaul its procedures to keep up with the flood of new publications. As interdisciplinary research was not well served by discipline-based bibliographies, new tools had to be invented. Eugene Garfield, with the active encouragement of Joshua Lederberg, developed a way to trace citations from article to article, and this effort ultimately led to the design of the Science Citation Index (SCI).4 This evolutionary step in bibliographical practice, nowadays known as the Web of Science, simply expresses the fact that the ‘Great Conversation’ of science does not always conform to the Procrustean constraints of disciplines.

To manage the enormous number of citations commonly found in the scientific literature, and to do so with the limited computer power available in the 1960s, Garfield needed to radically truncate the scientific documentary base – and needed to justify doing so. His solution took the form of a ‘concentration law’ which underpinned his claim that one could identify a set of core journals that together delimited a knowledge zone baptized ‘core science’. An unexpected consequence of this strategy was that science, always characterized by a continuous quality gradient, came to adopt a two-tier structure: core science on the one hand, and the rest on the other. It was as if the ‘Great Conversation’ could now be limited to a subset of its practitioners, and the rest essentially ignored. A limited number of journals (initially around 1000 titles) defined ‘core’ science, and ‘core’ science came to behave in two different ways: on the one hand it could be interpreted as elite science; on the other, it acted as some kind of ‘active principle’ – to use an old pharmaceutical concept – for the progress of science. Furthermore, the prestige associated with a particular set of journals had turned into an operational tool that could help identify and even define ‘core’ science, and vice versa since ‘core’ science, being elite science, justified treating its publication sites (i.e., the journals in which it appeared) as elite journals. Causality was thus made to work both ways, which helped bootstrap the entire operation into some degree of credibility, particularly among librarians.
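The concentration effect behind such a law can be illustrated with a toy calculation (all figures invented, not Garfield's actual data): when citation counts across journals follow a highly skewed, Zipf-like distribution, a small ‘core’ of journals captures most of the citations.

```python
# Toy illustration of a 'concentration law' (all numbers invented):
# when citation counts are highly skewed, a small 'core' of journals
# accounts for the bulk of all citations in a field.
def core_share(citations, core_fraction=0.1):
    """Share of total citations captured by the top core_fraction of journals."""
    ranked = sorted(citations, reverse=True)
    core_size = max(1, int(len(ranked) * core_fraction))
    return sum(ranked[:core_size]) / sum(ranked)

# Hypothetical field of 1000 journals with a Zipf-like distribution:
# the journal of rank r receives 10000 // r citations.
journals = [10000 // rank for rank in range(1, 1001)]
print(f"Top 10% of journals capture {core_share(journals):.0%} of citations")
```

Under these assumed numbers, roughly two-thirds to three-quarters of all citations land in the top tenth of journals, which is the kind of skew that made a truncated ‘core’ defensible with 1960s computing resources; the sketch says nothing, of course, about where the cut should fall on a continuous gradient.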

Indeed, librarians first used Garfield’s Law of Concentration to rationalize and guide the process of building a collection of periodicals. In so doing, however, librarians unwittingly gave credence to a set of journals that rapidly began to be looked upon as the set of ‘must buy’ journals. Unwittingly, the way in which librarians began to treat these journals helped form an inelastic market. Given that inelastic markets are markets where demand is little affected by rising prices, some publishers, and Robert Maxwell illustrates this point nicely, noticed the turn of events and sought to profit from it. Maxwell quickly understood that the best way to promote his own Pergamon Press journals was to push them into SCI’s core set. Most of his efforts came after he regained possession of Pergamon in 1974.5 Revamping peer review to make it look more professional, objective and, to some extent, more transparent was part of his strategy to convince SCI that many Pergamon titles should be included in the core set. Later, he even tried to take control of SCI – a move that would have allowed him to play the role of judge and party simultaneously. Garfield, luckily, resisted.6 In 1991, Pergamon Press was purchased by Elsevier. In 2004, interestingly, this big publishing house began developing Scopus, a project fundamentally similar to SCI; in doing so, Elsevier gave all the signs that it had fully integrated Maxwell’s strategic vision. Inelastic markets are a businessman’s dream, because they allow increasing profits almost indefinitely. This is precisely what is observed in the evolution of subscription prices in the last third of the twentieth century, when the so-called ‘serial pricing crisis’ emerged. Scientific publishing became one of the more lucrative areas of publishing in general. Librarians’ acquisition budgets were the first to suffer, and they began to denounce this situation in the late 1970s.
Talk of resistance surfaced in the late 1980s in response to the new context. It is still with us.

As revised by the inelastic market situation, the ‘Great Conversation’ found itself strictly limited to those who had access to libraries that could afford very expensive journals. No longer was it sufficient to be a good scientist in order to do research; one also had to be part of an institution that could afford to buy the record of the ‘Great Conversation’, i.e., to subscribe to the set of journals defined by SCI. And if one wanted to join the ‘Great Conversation’, simply publishing in a journal recognized as scientific was no longer enough; it had to be a journal included in the SCI set of journals. All the other journals simply disappeared from most radar screens, particularly when they could not be ranked according to a new device based on citation counts: the impact factor (IF).7

The last element needed to reinforce a powerful two-tier system of science came when university administrators and grant-funding agencies began to use the same tools, i.e., the IF, to evaluate the quality of individuals and their articles, and to manage various forms of institutional competition. Rules of competition work through various systems of rewards. For individual researchers, they are called tenure, promotion, research grants, etc. For institutions, they relate to governmental grants. The Research Excellence Framework (REF) in the UK is a good example of the latter. Ultimately, all these competitive rules refer to journal rankings based on the IF. Even official caveats against such practices do little more than bring the tyranny of IFs back to mind.8

Rules of competition based on IFs are then extended, often with little or no justification, to institutions or individuals. Deans, tenure and promotion committees, juries distributing grants, ministries of education and research, all swear by this parameter, and all researchers dance to this tune. Its quantitative form provides an air of objectivity that appears to put it beyond the reach of controversies: a number is a number. Yet, these numbers are not without flaws or limitations.9 For example, who has ever justified the use of three or even four decimals in IFs? But no matter: publishers and editors do all they can to have their journals ranked as high as possible by massaging the IFs as well as they can. The IF entertains some relationship to quality, of course, and that is how its use is justified, particularly for marketing purposes, but the relationship is neither direct nor reliable: when IFs are increased because more review articles are published, or because authors are encouraged to cite articles from the journal that will publish them,10 the quest for quality is obviously not the only factor at work. Worrying about IFs is the editor’s default position, and the urge to manipulate the results in order to improve the quantitative results is well-nigh irresistible, much as the use of performance-enhancing drugs is for athletes involved in high-level competition.

By accepting the IF, administrators of research have reinforced the credibility of the ‘core’ set of journals. As a result, the very device underpinning the serial pricing crisis has been made stronger by the very administrators who keep wondering why their libraries’ acquisition budgets are in such financial difficulty; they do not seem to understand that they are part of the problem, not of the solution.

Journals are ‘core’ journals if they are sufficiently cited by other ‘core’ journals. Reaching the status of a ‘core publication’ and being cited by other core publications is what counts. Being cited in non-core journals does not count because, it is argued, the citation remains invisible. Citing papers from non-core journals is obviously common, but such papers may be hard to retrieve, given common library acquisition practices. To be outside the core set is to be invisible. Not being in the core set and being cited by similarly-fated publications is the ultimate degree of irrelevance for the kind of competition that rules the world of scholarly journals. In short, the ‘core’ set works a lot like a club with complex, often opaque, inclusion/exclusion rules where co-optation plays a significant role. This means that large segments of the ‘Great Conversation’ are excluded simply because they do not make it into the set of journals justified by Garfield’s sleight of hand, grandly called ‘Garfield’s Law of Concentration’.11

Had citation tracking and measurement been carried out only as a way to sample the ‘Great Conversation’, and thus understand better how it works, criticism would be unwarranted. However, the citation measurements were quickly adapted to define new rules of journal competition, and, by extension, individual and institutional evaluations. The very definition of the IF involves limiting citation counts to two years, independently of the way in which the life cycles of articles vary across disciplines; it simply reflects the urge to produce ranking results as soon as possible. Journal editors unhappy with the performance of their journal in the Journal Citation Reports (JCR) can thus move quickly to redress the situation, i.e., improve the IF of their journal. Publishers proclaim IF progress to market their journals.12 Whether higher IFs correspond to better quality is questionable, but their vulnerability to manipulation does not seem to affect their acceptance. Rather than evaluating journals by measuring some of their characteristics against established benchmarks (quality), we have come to rely on a global competitive scheme that, at best, is suited to identifying a small number of top champions. Proceeding in this fashion is not a good way to raise the general quality of the whole system, if unaccompanied by other measures. In effect, the ‘champions’ are defined more by the very terms of the competition than by quality benchmarks; meanwhile, those who have little or no chance to enter the competition are simply ignored. The global result actually may be a lowering of average quality. Moreover, excessive competitive pressure can generate negative forms of behaviour such as cheating.
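The two-year window at the heart of the IF reduces to a simple ratio: citations received in year Y to items the journal published in years Y-1 and Y-2, divided by the number of citable items it published in those two years. A minimal sketch, with invented figures:

```python
# Sketch of the two-year impact factor calculation (illustrative
# figures only, not data for any real journal).
def impact_factor(citations_in_year, citable_items):
    """citations_in_year: citations received in year Y to items published
    in years Y-1 and Y-2; citable_items: items published in those two years."""
    return citations_in_year / citable_items

# Hypothetical journal: 420 citations in 2023 to articles it published
# in 2021-22, during which it published 150 citable items.
print(round(impact_factor(420, 150), 3))  # → 2.8
```

The calculation also makes the earlier complaint concrete: reporting such a ratio to three decimals conveys a precision that a two-year citation count of a few hundred items cannot possibly support.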

Going digital, and its commercial consequences

The early 1980s witnessed the rise of the so-called ‘personal computer’, which essentially familiarized a first wave of users with a new and somewhat daunting technology. The late 1980s and early 1990s saw the Internet emerge into the public light; this was also the period during which commercial publishers began to explore the possibilities of electronic publishing. Elsevier’s TULIP project, which started in 1991, is a good example of this trend. But a few more years would be needed before electronic journals would become the default publishing format.

The reasons why commercial electronic journals took so long to appear were not so much technical as economic: simply stated, electronic journals could not be sold in the same way as printed journals because copying a digital document and transmitting it through the Internet essentially costs nothing. Digital journals, so the publishers felt, needed to be transacted differently: rather than selling paper codices that corresponded to issues of journals, publishers began experimenting with licensing access to electronic files. As a result, a library was no longer the owner and organizer of what it paid for; instead, it became the conduit through which access was procured.

The advantages for the publishers were many: as they did not sell anything, they never lost any rights over the materials they owned, as they had, for example, under the ‘first sale doctrine’; because they were dealing with licences and not sales, they could rely on contract law to frame access conditions while resting on copyright to assert ownership. In particular, as libraries no longer owned their journals the conditions under which inter-library loan could be undertaken had to be negotiated anew, and the results were not pretty for the ‘Great Conversation’: in some cases, for example, digital files had to be printed and then faxed to comply with the publishers’ conditions – direct digital transmission was simply too threatening to publishers’ interests. Preservation became the publishers’ responsibility rather than the libraries’. As for legal deposit, the digital environment forced many national libraries to rethink their position in a fundamental way.

Digitization also changed the rules libraries were following to procure the right sets of documents for their constituencies. When publishers noticed the near-zero marginal cost of making an extra title accessible to a library, they also realized that they could partially decouple revenue streams from the number of titles subscribed to. Libraries, let us remember, were building collections; publishers were increasing revenue streams and profit margins.

Before digitization, publishers and libraries bargained hard, title by title – publishers to ensure a basic revenue stream, and libraries to build coherent collections and respond to local needs. In the digital world, bargaining title by title became the first phase of a more complex negotiation process: once a publisher reached the desired level of revenue, it could move to the second phase by dumping the rest of its titles for a relatively small sum in what came to be known as the ‘Big Deal’.

The ‘Big Deal’ was a devilishly clever invention. The Association of Research Libraries had used cost-per-title figures to demonstrate the reality of a serial pricing crisis and put a quantitative fix on it. For the first time, publishers could show that this trend was being reversed while rejoicing over the increased revenue stream. Moreover, they could argue that this result would look very good in any library annual report. Of course, many of the titles were not very useful to the targeted constituency, and the ‘Big Deal’ locked in a greater proportion of the acquisition budget with a few large publishers, thus crowding out other, smaller, independent publishers such as society publishers. As a result, the latter saw their revenue stream shrink. Gradually, their business plan became unsustainable and they often had no other recourse but to sell themselves to big publishers. The movement of concentration among publishers is also a part of the serial pricing crisis, and it is not very difficult to understand some of its basic causes.

As the ‘Big Deal’ demonstrates, the most obvious innovations in electronic journal publishing were financial. On the technical side, publishers had recreated the equivalent of traditional journals and of articles in a digital format. Quite often, they treated the new digital format merely as a new packaging system added to the paper version. Little else changed. For example, the PDF format that has remained so popular to this day is a format meant to facilitate printing. Doing a full-text search through a few dozen PDF documents is painfully slow, if possible at all (and PDFs based on page images do not even allow for full-text searching). Yet, the HTML or PDF formats that are accessible to the reader are often produced on the fly from a back-end resting on a much better format, such as some flavour of XML. But the XML back-end remains inaccessible as publishers have been releasing their electronic journals in ways designed strictly for eye reading, while reserving for themselves the possibilities opened by machine reading. Offering citation links within publisher collections – thus steering the reader to a biased fragment of the ‘Great Conversation’ – is an example of what can be done with richer file formats. This strategy is all the more effective in that it cleverly plays on the attention economy of working scientists or scholars. In an attention economy, disposable time becomes a major selection factor. If a publisher manages to steer a reader preferentially to articles in its own collection, rather than to the best articles of the ‘Great Conversation’, it essentially manages to manipulate the ‘Great Conversation’ to its own advantage as this tactic will tend to generate more citations; in an IF-dominated world, creating attention attractors that generate more impact is very much of the essence.

How the ‘Great Conversation’ of science could benefit from the new digital order has obviously not been the most burning concern of commercial publishers; neither has it been that of the society publishers who followed in their footsteps. Improving profitability has constantly trumped improving the communication system of researchers. The global effect has been to make the ‘Great Conversation’ serve the interests of the publishing industry rather than the reverse.

Caught in a kind of prisoners’ dilemma, many libraries, and even consortia, chose to privilege their individual ‘relationship with vendors’, to use a familiar stock phrase of the profession, rather than collectively press for a strategic redistribution of roles that would not leave them in such a vulnerable position. For example, some consortia, such as Canada’s, have consistently refused to reveal the nature of their ‘deal’ with a publisher.13 Meanwhile, access to the scientific literature did not significantly improve. In rich countries, rich libraries could access more titles from large publishers, but with reduced concern for collection building and with difficulties maintaining access to publications from smaller publishers. In poor countries or poor institutions, researchers, teachers and students were barred from access even more radically than before; for example, passing on older issues of journals to developing-nation libraries was no longer an option for rich libraries since they no longer owned their material.

In short, the two-tier system of science inaugurated by SCI had largely excluded publications from developing nations; with digitization, it was made worse by the so-called digital divide and by licensing practices that severely limited the possibilities of inter-library loans. Further, some mitigating strategies later put in place – such as HINARI14 – were questionable at best. In effect, some of the important underpinnings of the ‘Great Conversation’ of science have been put under the control of international publishers and this has led to an unprecedented situation for science journals.

Going digital, and its consequences: the rise of non-commercial electronic journals

The possibility of creating new journals had long been limited to well-heeled companies or societies because of high entry costs: commercial publishers regularly estimated that it took a journal about seven to ten years to recoup basic investments and generate a profit. This situation changed radically in the last years of the 1980s, thanks to computers and networks. The reason is simple: electronic publishing is characterized by the nearly-total disappearance of ‘variable costs’:15 making an electronic copy costs nothing, or close to it, and disseminating this copy through a network such as the Internet is also essentially free. The entry barrier to publishing is thus limited to the cost of putting the first copy together, i.e., ‘fixed costs’. In a university setting, a large part of this work can be taken up as a labour of love by academics; researchers do peer review anyway, and the skills needed for editorial tasks are found readily on campuses. Digitization meant that little actual money was needed to start a new journal. The modest amounts of cash needed could be scrounged from remnants of research budgets, friendly deans and forward-looking civil servants or foundations. Starting in the late 1980s, such journals began to multiply.16 These first steps, however, soon proved incomplete. They largely neglected the more subtle elements of scientific publishing that relate to the creation of hierarchies of journals, with prestige and with authority. Quality alone is not sufficient to promote a journal; the means to provide visibility and accessibility are also essential. Visibility was very much linked to large, international bibliographies and, since the 1970s, with Garfield’s SCI; in general, it could not be reached by young and untested journals, whatever their inherent quality. Accessibility was another matter: the networks made journals available everywhere the Internet went.
With access costs brought down to zero, non-commercial journals could hope for a significant competitive advantage simply because they responded exactly to the communication requirements of researchers. This has remained true of all open access publications.

Open access journals

Early open access journals were concerned with mastering, tweaking and perfecting the technical side of their enterprise, and with issues of quality control, but they had few means to enter the kind of competition that produces the usual identification of excellence. In effect, they were being excluded from the official IF competition in ways that paid little attention to their intrinsic quality, exactly as was the case with many journals from developing countries.

Early electronic scholarly and scientific journals turned out to be in open access for pragmatic reasons that incorporated few of the later arguments that would arise in support of open access. Seeking to improve the ‘Great Conversation’ played a role in the early stages of the digital age, but few of the people involved saw beyond the immediate and more obvious benefits. Small journals were multiplying by the dozens, but their impact was limited due to near invisibility within an unfamiliar medium. The sense of marginality that accompanied them was reinforced by their fragility: electronic journals cost much less than print journals, but they do cost something. The result was a scattering of disconnected and uncoordinated efforts that testified to the need for new solutions, but with few means to bring them forward in a convincing way.

In parallel, in 2001, a growing sense of frustration among researchers led to an ill-designed boycott of journals that refused to release their content for free after a certain period: the initiative was called the Public Library of Science (PLOS). Publishers did not budge, and the petition failed, at least at first, for PLOS is still with us, though sporting a quite different strategy. The petition phase of PLOS also turned out to be useful because it triggered the 2001 Budapest meeting convened by the Open Society Institute (OSI, now Open Society Foundations [OSF]), which led to the Budapest Open Access Initiative (BOAI), published in February 2002.17 It also led to the financing of open access activities by the same foundation. Open access activities had begun in the early 1990s, but 2002 marks the year when open access became a true movement and began to enjoy a degree of public visibility. For its part, PLOS received very significant financial support from the Gordon and Betty Moore Foundation, and in 2003 it launched its first open access journals: PLOS Biology, followed by PLOS Medicine. These were its first flag bearers, and they stunned the publishing world by quickly reaching very high IFs.

In Budapest, open access journals were initially seen as the most obvious objective for the open access movement. In fact, had it not been for Stevan Harnad's relentless defence of self-archiving, the open access movement might have neglected what became known as the 'green road', and would thereby have been weaker. Journals remain an important element in a global open access strategy because researchers need suitable outlets for their work, and would probably rather take one step than two. Publishing in a traditional journal, and then taking the extra step of depositing some version close or identical to the published version in a suitable repository, while paying attention to confusing copyright issues, is probably not the kind of effort our typical researcher relishes. In the absence of a mandate to deposit, repositories capture less than 20 per cent of the literature. The question then becomes: is creating highly visible open access journals more difficult than structuring and maintaining repositories, and obtaining mandates in each research institution? A similar level of difficulty appears to be the most probable answer, and this means that both approaches need to be pursued in parallel with a view to making them converge later.

Creating open access journals requires solving financial issues. A first answer pioneered by BioMed Central (now owned by Springer) shifts the cost of publishing from the reader, via libraries, to the author, presumably via some funder’s programme, be it a research institution (often through its library), a research charity (such as the Wellcome Trust in the UK) or a public research agency (such as the National Institutes of Health in the United States). But for many authors this business plan also meant that they had to contend with a new hurdle: how to finance the cost of publishing an accepted paper. Of course, if publishing costs are viewed as part of research costs and are wrapped into research grants the issue may disappear, but this is by no means a universal practice. Alternatively, the same result can be achieved if publishing can be wholly subsidized by public or foundation money, just as research is, and as a complement to it. But reaching this goal is complex, even though open access journals are clearly attractive to researchers because they understand that their papers will enjoy a greater impact, as also happens to articles placed in repositories and offered in open access to everyone.

For a typical researcher, a ‘suitable outlet’ means a journal that corresponds to his or her speciality and enjoys some degree of prestige. As, alas, prestige is still measured by IF, open access journals have had to abide by its rules. BioMed Central18 began to demonstrate that open access journals could compete in the IF world as early as 1999. PLOS followed BioMed Central’s business scheme, and furthermore proved that it could compete at the very highest levels of IFs. Governmental and international organizations have also helped to create large baskets of open access journals while working on improving their quality, and trying to insert them in the international competition for excellence (still based on IFs). For example, the Scientific Electronic Library Online (SciELO) has grown a large collection of journals (1145 titles in December 2013) that spans most of Latin America, involves some of the Mediterranean countries in Europe, and now extends to South Africa.19 The fundamental goal of SciELO has sometimes been expressed as a fight against the phenomenon of ‘lost science’,20 i.e., the process that makes most observers focus on ‘core’ science while neglecting the rest. SciELO also emulates the attitude of BioMed Central and other commercial publishers by seeking to place as many of its journals as possible in the IF rankings; on the other hand, it also produces its own rankings through metrics that are not limited to the perimeter of the Web of Science. However, unlike PLOS or BioMed Central, SciELO does not require publishing fees from authors.

Commercial publishers that operate traditional subscription models have also explored ways to incorporate open access into their business plans and, as a result, various strategies and a confusing array of terms have emerged. These include the notions of 'hybrid journals' and 'open choice'. Some have also developed a full open access line of publishing, either by acquiring an open access publisher (Springer with BioMed Central, as mentioned earlier) or by creating open access journals of their own (like the Nature Publishing Group). They have universally resorted to an 'author-pay' scheme similar to those of BioMed Central and PLOS. Unsurprisingly, commercial publishers often require higher publication fees than do non-profit publishers such as PLOS.21

More to the point of this analysis, how do such journals serve the ‘Great Conversation’? The answer is clear: for readers, everywhere, the articles can be accessed gratis and, from this perspective, the ‘Great Conversation’ is well served. This is particularly true if the publisher also accepts access licences that allow for re-use and redistribution, etc.22 For authors, the situation is more complicated. Waivers exist for authors without resources, or from ‘poor countries’, however defined, but the system accommodates researchers with financial resources so much more easily that it could be called a bias. While open access ‘author-pay’ journals serve the ‘Great Conversation’ better than journals based on subscriptions, the optimal solution for the ‘Great Conversation’ really is a system that requires no money from either authors or readers. This obviously requires governmental support in the majority of cases, but those who might object would do well to reflect that scientific research itself would not be viable without government support. What was the business plan behind the quest for the Higgs boson?

Peering into the future

The slow mutation of journals

At this stage of the argument, the present state of development of scientific journals can be broadly sketched out as follows:

1. In scientific, technical and medical (STM) journals, the transition to a first phase of digitization is complete, certainly in the 'core' set, but even beyond it. The gradual disappearance of printed issues is at least scheduled where it is not already complete.

2. In the social sciences and humanities (SSH), the persistence of paper and print has been stronger. Most journals, particularly subscription-based ones, are still being produced in both paper and electronic formats. The continued importance of monographs in many of these disciplines has contributed to the staying power of paper and print.

3. The transition to electronic journals has demonstrated the crucial importance of common platforms for accessing them: they permit economies of scale in production; a bundle of titles, especially if it covers some kind of defining category, may help market those titles to libraries; a standardized approach to navigation, if well designed, may be helpful to readers; and proper statistics make it possible to monitor download numbers and single out the download champions across various disciplines and/or time periods, whether in terms of titles or of authors. In short, the transition to electronic publishing has revealed that electronic documents tend to relate to one another in a richer and denser manner than was the case in the print world. At the same time, the same transition brings an important detail to light: journal titles, however important they may be individually as logos for authors, tend to find themselves integrated inside much larger sets of titles that have been assembled in a wide variety of ways (regional, multidisciplinary, by a company, etc.).23

4. Commercial journals offer a wide variety of possibilities. While these may be the effect of a transition to new modes of publishing, they also result in a confusing publishing landscape for researchers who try to find the best outlet and the best modalities for their most recent submission:

– The traditional, subscription-based journal that may or may not ask for page charges.

– The hybrid journal that behaves as a traditional, subscription-based journal, and also as an open access journal with publishing charges for accepted papers. In short, if an author is aware of the open access advantage, has an article accepted in a hybrid journal, and has a not insignificant amount of loose change in his or her research grant (several thousand pounds as a rule), he or she may choose to pay to have the article placed in open access within the commercial platform that harbours the journal. For the publisher this scheme provides a new source of revenue, and even double-dipping: libraries have been complaining that hybrid journals keep their subscription prices steady even as open access articles increase in number.24

– The author-pay open access journal is the best known, yet most misunderstood, type of open access journal. It is not limited to commercial publishers, and it should not be equated with the ‘gold road’: such journals form only a subset of open access journals. The confusion stems from the fact that it is the business model most commonly mentioned when discussing the financing of open access journals, and commercial publishers have made extensive use of it. After inventing the hybrid journal, Springer, as mentioned earlier, also embraced the author-pay model when it bought up the line of journals developed by BioMed Central. Many other publishers have also gone in that direction, given that a number of institutions, including libraries and various research funders, provide some funds to finance the publishing of such articles.25

5. Society journals and other, university-based, journals have tended to follow some of the strategies of commercial publishers, but they also differ from them in significant ways.

– Some societies, such as the American Chemical Society, have been even more aggressively opposed to open access than some commercial publishers; this demonstrates, alas, that modes of behaviour do not coincide neatly with categories of publishers.26

– Some societies, most notably physics societies, have sought to rationalize the production of, and access to, their journals by simply working out a kind of collective subscription agreement with a sufficient number of stakeholders to ensure the viability of the publishing enterprise, with the idea of releasing the content to everyone. This trend appears somewhat specific to the physics community, which is inhabited by a particularly strong ethos of sharing and teamwork, and is taking shape in a project called SCOAP3.27 Its Achilles' heel is that it opens the door to free riders, but it appears that those who would have to subscribe anyway are sufficiently numerous to support the whole publishing programme at a cost no greater than that of traditional subscriptions. However, it may be a tough act to follow for other disciplinary communities that do not exhibit the same degree of cohesiveness and solidarity as the physics community.

– Society publishers have sometimes offered a kind of diluted access in the form of the confusingly named 'deferred open access'. These embargo periods, also called 'moving walls', rarely last fewer than two years and can extend to five years and beyond. Publishing platforms of small societies also provide this kind of delayed access, partly because it attracts traffic to the site, and partly because few articles find buyers after a certain time. This strategy amounts to offering access without cost only to publications that no longer produce revenue, and to letting the publishing business model trump the needs of scientific communication.

– In a number of cases, the supporting institutions are sufficiently subsidized to offer their journals for free to both authors and readers. This represents the optimal form of open access journal publishing and it is the recipe that fits best with the requirement of the ‘Great Conversation’, at least in its present historical form.

– Journals with limited financial resources can share tools and know-how with other journals on a common platform. Communities can develop in this fashion, as the Open Journal Systems (OJS), brilliantly conceived by John Willinsky and his team, has demonstrated in a number of regions and countries.28

Experimenting with electronic journals

While journals have been diversifying according to forces that are largely dictated by the financial framework within which they operate, they are also beginning to test new possibilities that the advent of computers and networks have made possible. Below are some recent developments that are worthy of note.

Super-journals

PLOS has certainly demonstrated a talent for innovation in the way in which it created and promoted a journal that reached an extremely high IF almost as soon as it became eligible for one. Fuelled by foundation money, fervour and very high quality, it achieved in about three years what many other journals manage in a decade or two, if ever: visibility, accessibility and prestige. However, the author-pay model that it favoured provided fewer resources than anticipated, with the result that PLOS went through difficult times. Necessity being the mother of invention, PLOS leaders went back to the drawing board and began exploring a new variation on the author-pay business model: PLOS ONE.

PLOS ONE began in 2006. It incorporates a number of ideas that had never been linked together before. Like Nature and Science, it opted for multidisciplinary coverage of science, but it also sought to increase the scale of publication so as to transform the new journal into a publishing platform. Starting with a modest 138 articles in 2006, PLOS ONE grew extremely quickly, reaching 23,468 articles in 2013. PLOS ONE also differs in the way it handles peer review: the evaluation bears on quality, and can involve external readers; the editorial board numbers in the thousands and the pool of external readers in the tens of thousands. An article cannot be refused on account of topic, importance (important to whom?) or relevance to the journal's orientation. What counts is whether experiments and data are treated rigorously. In short, PLOS ONE, just like the 'Great Conversation', relies strictly on the rules of the scientific method, leaving to its own global dynamics the task of selecting and foregrounding different elements of its archival memories at different times of its history. Nothing of quality should be rejected, just in case … In this fashion, PLOS ONE submits itself fully to the shape and goals of the 'Great Conversation' and helps to foster it by opening up opportunities for further discussion once an article is published. As a result, an article has a chance of entering the 'Great Conversation' even before being cited. The authorial phase of publication can be immediately complemented by a collective effort that may include criticisms and suggestions for improvements. These variations on a familiar theme contain important hints for the future of scientific publishing.

The success of PLOS ONE also rests on at least one ironic result: it enjoys a more than decent IF of 3.730 (to maintain the mythical three decimal places that Thomson Reuters promulgates). But because citation practices vary significantly from one discipline to another,29 and because the disciplinary mix of papers published in PLOS ONE may change from one year to the next, a gross average over the whole output bears little meaning. Yet the result, and this too is ironic, is probably needed if authors are to submit to the journal: their tenure and promotion committees will probably ask for PLOS ONE's IF, and, once provided, they will be uncritically satisfied by it. PLOS ONE itself, however, does not appear to be driven by IF results as so many journals are. The reason may be the one proposed in an article in Nature, somewhat inaccurately titled 'Open-access journal will publish first, judge later':30 'A radical project from the Public Library of Science (PLOS), the most prominent publisher in the open-access movement, is setting out to challenge academia's obsession with journal status and impact factors.' Actually, PLOS ONE does judge first, like any other scholarly journal, but it does so with an omnivorous taste for any methodologically sound study rather than for articles that might support the journal's prestige. Doing the research at Harvard or in an obscure laboratory in a developing nation does not change the requirements for evaluation, and if authors from such a country cannot pay, a waiver is automatically available. In effect, PLOS ONE purifies the selection of articles of all considerations that may interfere with the building of the best 'Great Conversation' possible, the idea being to avoid losing science, to borrow W.W. Gibbs' expression.31 The selection is not made to help the journal's competitive status, but to act as a fair instrument of communication among all scientists.
The journal exists to help the ‘Great Conversation’, not the reverse. It does not try to manipulate it either.

The new contours of scientific publishing explored by PLOS ONE have not left the publishing industry indifferent. At the beginning of 2011, the Nature Publishing Group itself demonstrated that the best form of flattery is imitation. It announced the creation of a PLOS ONE-like journal, Scientific Reports,32 which, in practically all of its facets, imitates PLOS ONE. It is also rumoured that SciELO is exploring the possibility of launching a similar interdisciplinary journal.33 Super-journals are here to stay and they will deeply affect how scientific communication will evolve in the next few decades.

Linking published research to data

Published articles summarize and synthesize varying amounts of data which, in most cases, remain locked up in the laboratories. The proprietary feeling researchers may entertain with regard to data is quite understandable, as data are the most direct product of their daily activities, observations or experiments. Data form a kind of treasure trove out of which a certain number of articles may flow, and one never quite knows whether the full potential of a data set has really been exploited. This last point probably accounts for much of the reluctance of researchers to part with their precious lore. As a result, they often insist on keeping their data for a few years, if only to protect a head start.

However, other forces are also encouraging a full and early release of the data underpinning publications, and there are a number of reasons for this: it provides a much better way to evaluate the submitted papers; it allows for a faster exploitation of all the potential of a data set; it allows scientists deprived of the right kinds of instruments to try their hand at the crucial interpretation of real data; it allows a better archiving of science and it provides for a periodic revisiting of older hypotheses recast in the light of more recent results and developments. It also allows for a better detection of fraud and the various forms of cheating that, alas, are growing in number even as the competition for publishing grows more intense.

As a result of this tug of war, two kinds of movement are now afoot. On the one hand, how can data be released as fully and as early as possible? On the other, what kind of governance should preside over the release of data, given that formats, curation and other issues have to be attended to if the 'interoperability' of data is to be facilitated across disciplines, as well as across space and time? Finally, a familiar problem of the digital world rears its ugly head: preservation. All these issues are being discussed at the highest levels, and meetings between the National Science Foundation in the US and the European Commission in Europe have already taken place. Other countries such as Australia are also closely monitoring developments that are being shaped in new organizations such as the Research Data Alliance (RDA).

Other signs demonstrate that publishers are also paying attention to this issue. In some domains, particularly in biomedical fields, some journals require that data be added to the papers they publish.34 A commercial publisher like Elsevier revealed its interest in the matter by offering a bounty to the team that would provide the best design and some degree of implementation for ‘executable papers’.35 The objective, in this case, is to have a paper which, once online, allows for the direct manipulation of the data to check various outcomes or calculations thanks to algorithms embedded in the published text. In passing, these developments cast a different light on the puzzling fact that commercial publishers provide little more than electronic paper (or PDFs) to researchers. They may be exploring ways to commercialize functions that only digital documents can incorporate.36

There is a deeper lesson to draw from the concern for open data: exposing the data underpinning published articles reveals all the work that goes from the raw results obtained in the laboratory or through observations to their inclusion in a neat, synthetic interpretation offered as the best possible way to approach reality to date. It is as if the complete assembly line of knowledge was finally exposed for all to see, learn from and discuss. Obviously there is no better way to nourish the ‘Great Conversation’ sustaining the creation of knowledge.

Making use of the computational dimension of digital documents

In an article that has not been discussed sufficiently, Clifford Lynch has argued insightfully that open access does not mean much without open computation.37 He was absolutely right to do so because open access documents exist only in a digital format, and machine reading of such documents is becoming ever more important. For example, people involved in the Google Books project have argued that making books available for human reading (and only reading, as page images in PDF do not allow for anything else) is a minor and secondary objective of mass digitization.38 Most probably, Google intends to limit the use of the computational dimension of the documents it digitizes.

The computational dimensions of digital documents may promise new profitable activities for commercial publishers. Open access, on the other hand, offers a different kind of promise: if all the files corresponding to the peer-reviewed results of scientific research are truly open, inventing computational tools around these treasure troves of documents would be accessible to all, and a very healthy competition could then develop between private interests and the commons.

Open access journals are not always in the best position to achieve this computational dream. For example, the excellent Open Journal Systems has focused more on editorial tasks than on suitable formats for digital documents. Most journals, commercial or open access, continue to offer PDF files and little else. One regrettable prediction that is all too easy to make is that, in a decade or so, we will be frantically trying to pull ourselves out of the PDF trap. However, other factors are already working against exclusive reliance on the PDF format. For example, the obsession with IFs requires retrieving and organizing the citations associated with each article, and the PDF file format is inadequate in this regard. Taking advantage of the computational dimensions of digital texts is an issue now, and it will grow enormously in the next ten years. Many commercial publishers already rely on an XML back-end while producing various formats on the fly, some of which are meant for human eyes only. A better use of the computational potential of digital documents will require some degree of standardization across publishers.
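The contrast between an XML back-end and a PDF is easy to make concrete with the very task mentioned above: retrieving the citations attached to an article. The sketch below walks a JATS-style reference list in a few lines; the element names follow the JATS (Journal Article Tag Suite) convention used by many publishers, but the record itself is invented for illustration. In a PDF, the same citations are mere typeset glyphs that must be recovered by fragile text mining.

```python
# Minimal sketch: extracting citations from a JATS-style XML back-end.
# The XML fragment is invented; element names follow the JATS convention.
import xml.etree.ElementTree as ET

article_xml = """
<article>
  <back>
    <ref-list>
      <ref id="r1"><mixed-citation>Garfield E. (1955) Citation indexes for science. Science 122: 108-111.</mixed-citation></ref>
      <ref id="r2"><mixed-citation>Lynch C. (2006) Open computation. CTWatch Quarterly 2(3).</mixed-citation></ref>
    </ref-list>
  </back>
</article>
"""

root = ET.fromstring(article_xml)
# Each <ref> carries one structured citation; no heuristics needed.
citations = [ref.findtext("mixed-citation") for ref in root.iter("ref")]
for c in citations:
    print(c)
```

Because the reference list is explicit structure rather than layout, the same few lines work for any article produced from the same back-end, which is precisely what citation indexing requires at scale.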

In the open access world, one may wonder who should tackle this issue. One possibility would be to devolve this whole area to a consortium of repositories such as COAR (the Confederation of Open Access Repositories). Repositories have already shown that they can work together: OAI-PMH is one of the great success stories of growing interoperability. The repository community, now more than 2000 strong, could certainly work on various problems related to file formats, interoperability and preservation.
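Part of the reason OAI-PMH succeeded is its simplicity: a harvester issues an HTTP GET with a `verb` parameter and parses a small XML vocabulary in return. The sketch below parses a canned `ListIdentifiers` response rather than contacting a live repository; the base URL in the comment and the record identifiers are invented for illustration.

```python
# Minimal sketch of the harvesting side of OAI-PMH: parse a (canned)
# ListIdentifiers response. In real use the XML would come from a GET
# such as http://example.org/oai?verb=ListIdentifiers&metadataPrefix=oai_dc
# (base URL and identifiers here are invented).
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"  # protocol namespace

response = """
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListIdentifiers>
    <header><identifier>oai:example.org:1234</identifier><datestamp>2013-06-01</datestamp></header>
    <header><identifier>oai:example.org:1235</identifier><datestamp>2013-06-02</datestamp></header>
  </ListIdentifiers>
</OAI-PMH>
"""

root = ET.fromstring(response)
# Every record header carries an identifier and a datestamp, which is
# all a harvester needs to decide what to fetch incrementally.
ids = [h.findtext(OAI + "identifier") for h in root.iter(OAI + "header")]
print(ids)
```

That any repository can be harvested with code this simple is what allowed thousands of independently run repositories to behave, for search and aggregation purposes, like one distributed collection.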

Ultimately, repositories and open access journals will converge to form a new and powerful instrument of communication: in a sense, super-journals like PLOS ONE are repositories with a few added functions, most importantly peer review. How these functions can also be integrated into a repository – or, better, a network of repositories – is an important question.

Essentially, the computational dimension of digital documents will help bring about a co-ordinated and strategic alliance between the so-called ‘green road’ of repositories and the ‘gold road’ of open access journals. The trick will be to structure the ‘Great Conversation’ in such a way that researchers-as-authors will be amply rewarded and gratified without relying on a reward system, such as the IF, that distorts the ‘Great Conversation’.

Conclusion

Science seeks to describe reality; to do so it uses a method, plus language structured by a limited web of concepts. Humanity evolved a distributed process of scientific and scholarly research that is best represented by the metaphor of an unceasing conversation that propels humanity’s indefinitely extended journey towards reality. Sustaining this conversation is the only way to overcome the inherent limits of individual talents.

The conditions under which the ‘Great Conversation’ takes place have never been, and probably never will be, optimal. The functions of writing have evolved significantly from simple externalized memory to analytical tools in the last 5000 years, and nothing proves that this process is complete. But the ‘Great Conversation’ can be negatively affected by the circumstances in which it operates, and many of the factors affecting the life and functions of journals can be viewed as perturbations in that process. This is what the preceding pages began to explore.

With print, the communication of research results came to be located outside research sites, and a complex, sometimes uneasy, symbiosis emerged between researchers and publishers. It became even more complex when librarians interposed themselves in the process. Print, unlike manuscript production, was also deeply capitalistic in nature, and this dimension began to interfere with the inherent objectives of the 'Great Conversation'.39 Publishers gradually developed strategic positions whose power reached entirely new summits in the last third of the twentieth century, when they managed to locate a number of scientific journals within an inelastic market.

This trend has not been healthy for the ‘Great Conversation’: the position of publishers has become so dominant as to interfere with some of its objectives, such as a need for universal inclusiveness. Publishing functions are sometimes described as a ‘genuine contribution’ to research but, at best, they are a set of services that could be reconfigured differently, in particular within research institutions.40 Let us remember that the cost of publishing, including profits by commercial publishers, is a very small fraction of the costs of research, and that the know-how of digital publishing is well within the reach of academic skills.

Publishers, however, retain a strong trump card: the logo-function of a journal title, its ‘branding’ power. By making journals compete on the basis of IFs, and by unduly extending the use of IFs to the evaluation of individual scientists, the research world has evaluated scientific research not by reading individual articles but by looking at where they have been published. One consequence of this unhappy state of affairs is that the quest for excellence is often confused with the quest for quality.41 While competition is an essential part of some segments of scientific work, it should not pose as the exclusive tool to manage a science policy.

From a different perspective, looking at what works against the ‘Great Conversation’ may help to imagine where the future lies. Innovations such as PLOS ONE appear to be very important for the future of journals. They may signal the fact that traditional journals are about to recede in importance, in comparison with platforms. This also means that the present evaluation system based on the prestige of journal titles will be weakened. Evaluation will have to rely on the very quality of each article. As search engines such as Google Scholar can bring exposure to any article, independently of the journal in which it appears, research results from relatively unknown authors, and from unusual places, will have a better chance of reaching high levels of visibility, and even prominence.

It is possible to say a little more about the fate of journals within a healthy ‘Great Conversation’: super-journals, or rather platforms, will probably multiply to some moderate extent in the near future. As platforms, they will act more as filters than as logos. In fact, with the support of repositories, platforms should facilitate the development of better evaluation techniques and should also work against dividing the world into first- and second-tier science. A platform for Africa, for example, could help involve African scientists in the ‘Great Conversation’. It would immediately take its place next to PLOS ONE and Scientific Reports, and also next to SciELO and Redalyc.

We may also imagine the ultimate demise of journals from the perspective offered by Wikipedia. Wikipedia, as a device to build consensus, is not part of the ‘Great Conversation’. The ‘Great Conversation’ of science, by contrast, works by regulated and authorial dissensus to bring out the best critical reactions in peers. It is the result of complex mixtures of competition and co-operation.

However different in their goals, the 'Great Conversation' of science and Wikipedia may end up resembling each other, at least in a superficial way. Because both are distributed enterprises, and because both rely on a kind of conversation, neither sits well with a conversational structure full of syncopated hiccups such as the one generated by print. Digital technologies, by contrast, favour a continuous conversation, and the possibility for readers to interact with authors is a symptom of this trend. In the end, living documents should emerge – constantly evolving, constantly growing, constantly reflecting the best of the research fronts. They will be fuelled by the sound and fury of individuals seeking some degree of distinction through controversies,42 and they will besiege ever finer details of reality ever more closely, at an ever-accelerating pace. But getting there may take some time, and much more than technology…


1.Koyré, A. (1973) Perspectives sur l’histoire des sciences. In Études d’histoire de la pensée scientifique (pp. 390–9). Paris: Gallimard.

2.And it partially came back in the twentieth century when Einstein introduced photons to explain the photoelectric effect.

3.Kuhn, T.S. (1962) The Structure of Scientific Revolutions. Foundations of the Unity of Science, Vol. 2, No. 2. Chicago, IL: The University of Chicago Press.

4.Wouters, P. (1999) The Citation Culture. Amsterdam: University of Amsterdam.

5.See http://ketupa.net/maxwell.htm.

6.Eugene Garfield interviewed by Robert V. Williams, July 1997. See http://garfield.library.upenn.edu/papers/oralhistorybywilliams.pdf.

7.Wikipedia gives an excellent and clear example of the impact factor based on a hypothetical journal in the year 2008: A = the number of times articles published in 2006 and 2007 were cited by indexed journals during 2008; B = the total number of ‘citable items’ published by that journal in 2006 and 2007. (‘Citable items’ are usually articles, reviews, proceedings, or notes; not editorials or letters to the editor.) 2008 impact factor = A/B.
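The two-year calculation described in note 7 can be sketched as follows; the function name and the figures for the hypothetical journal are illustrative only:

```python
def impact_factor(citations_to_prev_two_years: int, citable_items_prev_two_years: int) -> float:
    """Two-year impact factor: A / B, where
    A = citations received this year to items published in the previous two years,
    B = 'citable items' published in those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 480 citations in 2008 to 200 citable items from 2006-07.
print(impact_factor(480, 200))  # 2.4
```

The division is all there is to it, which is part of the critique rehearsed in notes 8-10: a single journal-level ratio stands in for the value of every article the journal contains.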

8.The following passage reveals this point clearly: ‘we received evidence to suggest that the measures used in the RAE distorted authors’ choice of where to publish. Although RAE panels are supposed to assess the quality of the content of each journal article submitted for assessment, we reported in 2002 that “there is still the suspicion that place of publication was given greater weight than the papers’ content”. This is certainly how the RAE was perceived to operate by the panel of academics we saw on 21 April. Professor Williams told us that he chose to publish in journals with high impact factors because “that is how I am measured every three years or every five years; RAE or a review, it is the quality of the journals on that list”.’ See Select Committee on Science and Technology Tenth Report: http://www.publications.parliament.uk/pa/cm200304/cmselect/cmsctech/399/39912.htm.

9.See Stuart Shieber’s blog post (14 June 2012): More reasons to outlaw impact factors from personnel discussions. Available from: http://law.harvard.edu/pamphlet/2012/06/14/more-reason-to-outlaw-impact-factors-from-personnel-discussions/. It provides a number of links to the most common and fundamental criticisms of the IF. His conclusion and advice are simple: ‘Given all this, promotion and tenure committees should proscribe consideration of journal-level metrics – including Impact Factor – in their deliberations. Instead, if they must use metrics, they should use article-level metrics only, or better yet, read the articles themselves.’

10.For a revealing discussion of this issue, see Garfield, E. (1997) RESPONSE by Eugene Garfield to Richard Smith’s article ‘Journal accused of manipulating impact factor’, British Medical Journal 314. Available from: http://garfield.library.upenn.edu/papers/bmj14june1997.pdf. Once more, the desire of editors and publishers to promote their journals is shown not to coincide entirely with the objectives of the ‘Great Conversation’.

11.Garfield, E. (1971) The mystery of the transposed journal lists – wherein Bradford’s Law of Scattering is generalized according to Garfield’s Law of Concentration, Current Contents 17 (4 August). Available from: http://www.garfield.library.upenn.edu/essays/V1p222y1962-73.pdf.

12.For a recent example of this practice, see http://www.elsevier.com/wps/find/authored_newsitem.cws_home/companynews05_02385. Most publishers follow this practice.

13.On 1 October 2004, the International Coalition of Library Consortia (ICOLC) passed a resolution which included language against non-disclosure agreements. See http://icolc.net/statement/statement-current-perspective-and-preferred-practices-selection-and-purchase-electronic. The Canadian Consortium, CRKN, is not a signatory to it, although one of its members, the Quebec-based CREPUQ, did sign it. CARL, the Canadian Association of Research Libraries, adopted a policy against non-disclosure agreements only on 28 January 2010. See http://carl-abrc.ca/en/about-carl/principles.html.

14.See http://www.who.int/hinari/en/. Access is modulated by the level of GDP/inhabitant, but even this rule leaves some glaring exceptions such as India, China, Pakistan and Indonesia, because these countries, although poor, manage to form a viable market for commercial publishers. For a critique of this situation, see Sarin, R. (2007) Denying open access to published healthcare research: WHO has the password? Journal of Cancer Research and Therapeutics 3(3): 133. Available from: http://www.cancerjournal.net/article.asp?issn=0973-1482;year=2007;volume=3;issue=3;spage=133;epage=134;aulast=Sarin.

15.In the print world, production costs are commonly divided between fixed costs and variable costs. Essentially, fixed costs correspond to producing the first copy of a book or journal issue, while variable costs are associated with producing more and more copies and disseminating them.

16.See, for example, Strangelove, M. and Kovacs, D. (1992) Directory of Electronic Journals, Newsletters, and Academic Discussion Lists (ed. A. Okerson), 2nd edition. Washington, DC: Association of Research Libraries, Office of Scientific & Academic Publishing.

17.Available at http://www.soros.org/openaccess/read.

18.See http://www.biomedcentral.com/about.

19.See http://www.scielo.org/php/index.php.

20.The expression ‘lost science’ comes from the following article: Gibbs, W.W. (1995) Lost science in the Third World, Scientific American 273(2): 76–83.

21.For a comparison of publishing fees, see, for example, the figures provided by BioMed Central at http://www.biomedcentral.com/about/apccomparison/.

22.For most open access advocates, the gold standard is ‘CC-BY’ – a Creative Commons licence that only requires attribution.

23.This is a general phenomenon. In North America, Project MUSE (http://muse.jhu.edu) is a good example of this strategy, as are JSTOR and Persée for retrospective collections (http://www.jstor.org/ and http://www.persee.fr). We have already mentioned SciELO in Latin America and beyond, but Redalyc is a second excellent example of such platforms (http://redalyc.org). In France, revues.org pools a mixture of journals with subscription, with moving walls, and in open access (http://www.revues.org). French commercial publishers have joined forces to create the CAIRN platform (http://www.cairn.info/). In Canada, Érudit and Synergies pursue similar plans (http://www.erudit.org/ and http://www.synergiescanada.org) as do all major commercial publishers. And this represents but a small sampling of offerings structured in this manner.

24.Wikipedia offers a good survey of such journals, including a long list of examples. See http://en.wikipedia.org/wiki/Hybrid_open-access_journal.

25.See http://en.wikipedia.org/wiki/Gold_open_access for an informative tour of open access journals.

26.Some details are provided at http://www.sourcewatch.org/index.php?title=American_Chemical_Society#ACS_activities_against_open_access.

27.See http://scoap3.org/.

28.See http://pkp.sfu.ca/about. There were about 11,500 journals using OJS as of December 2011. For a partial list, see http://pkp.sfu.ca/ojs-journals. It must be remembered that not all OJS journals are open access journals.

29.See, for example, Van Nierop, E. (2009) Why do statistics journals have low impact factors? Statistica Neerlandica 63(1): 52–62.

30.Giles, J. (2007) Open-access journal will publish first, judge later, Nature 445(7123): 9.

31.See note 20 above.

32.See http://www.nature.com/srep/index.html.

33.Personal communication.

34.See for example http://www.biomedcentral.com/about/supportingdata.

35.See http://www.executablepapers.com/.

36.This point was already noted in 2007 by Donald J. Waters from the Andrew Mellon Foundation. See his position paper, Doing much more than we have so far attempted. Available from: http://www.sis.pitt.edu/~repwkshop/papers/waters.html.

37.Lynch, C.A. (2006) Open computation: beyond human-reader-centric views of scholarly literatures. In N. Jacobs (ed.) Open Access: Key Strategic, Technical and Economic Aspects (pp. 185–93). Oxford: Chandos Publishing. Available from: http://old.cni.org/staff/cliffpubs/opencomputation.htm.

38.Nicholas Carr quotes a Google engineer as saying: ‘We are not scanning all those books to be read by people … We are scanning them to be read by [our] AI.’ In Carr, N. (2008) The Big Switch: Rewiring the World From Edison To Google. New York: W.W. Norton & Co. See also: http://www.openbookalliance.org/what-experts-are-saying-about-the-settlement/.

39.The interference of a particular economic imperative in intellectual and cultural matters is demonstrated in exemplary fashion in Pettegree, A. (2010) The Book in the Renaissance. New Haven, CT: Yale University Press.

40.See, for example, Elsevier’s mission statement at http://www.elsevier.com/wps/find/intro.cws_home/mission. Springer, for its part, speaks of working ‘… with the world’s best academics and authors in long-standing loyal partnerships based on mutual trust …’ (http://www.springer.com/about+springer/company+information?SGWID=0-174704-0-0-0).

41.On the distinction that ought to be maintained between quality and excellence, see Guédon, J.-C. (2009) Between excellence and quality. The European research area in search of itself. In Rethinking the University after Bologna. New Concepts and Practices Beyond Tradition and the Market (pp. 55–79). Antwerp: Universitair Centrum Sint-Ignatius Antwerpen. Available from: http://eprints.rclis.org/12791/.

42.On networked individualism, see Guédon, J.-C. (2008) Network power and ‘phonemic’ individualism, Policy Futures in Education 6(2): 159–64. Symposium around Yochai Benkler’s The Wealth of Networks (New Haven, CT: Yale University Press, 2006).
