2

Changing knowledge ecologies and the transformation of the scholarly journal

Bill Cope and Mary Kalantzis

Abstract:

This chapter is an overview of the current state of scholarly journals, not (just) as an activity to be described in terms of its changing business processes but more fundamentally as the pivot point in a broader knowledge system that is itself in a process of transformation. After locating journals in what we characterize as a process of knowledge design, the chapter goes on to discuss some of the deeply disruptive aspects of the contemporary moment. These portend potential transformations not only in the form of the journal, but possibly also in the knowledge systems that the journal in its heritage form has supported. These disruptive forces are represented by changing technological, economic, distributional, geographic, interdisciplinary and social relations to knowledge.

The chapter goes on to examine three specific breaking points. The first breaking point is in business models – the unsustainable costs and inefficiencies of traditional commercial publishing, the rise of open access and the challenge of developing sustainable publishing models. The second potential breaking point is the credibility of the peer-review system: its accountability, its textual practices, the validity of its measures and its exclusionary network effects. The third breaking point is post-publication evaluation, centred primarily on citation analysis as a proxy for impact. We argue that the prevailing system of impact analysis is deeply flawed. Its validity as a measure of knowledge is questionable, as is the reliability of the data used as evidence.

The chapter ends with suggestions intended to contribute to discussion about the transformation of the academic journal and the creation of new knowledge systems: sustainable publishing models, frameworks for guardianship of intellectual property, criterion-referenced peer review, greater reflexivity in the review process, incremental knowledge refinement, more widely distributed sites of knowledge production and inclusive knowledge cultures, new types of scholarly text, and more reliable impact metrics.

Key words

academic journals

knowledge ecologies

publishing technologies

journal publishing business models

open access publishing

peer review

knowledge evaluation

citation analyses

impact metrics

impact factor

The knowledge business

Here are some quantifiable dimensions of the academic and scholarly knowledge business. An analysis of Ulrich’s periodicals list shows that the number of scholarly journals increased from 39,565 in 2003 to 122,273 in 2011; of these, the number of refereed journals rose from 17,649 in 2002 to 57,736 in 2011. The number of articles per journal rose from 72 per annum in 1972 to 123 in 1995, and the average length of an article increased from 7.41 pages in 1975 to 14.28 pages in 2011 (Tenopir and King, 2014: Chapter 6, this volume). Each year, as many as 1.9 million new articles are published (Phillips, 2014: Chapter 5, this volume). Worldwide, approximately 5.7 million people work in research and development, publishing on average one article per year and reading 97 articles per year (Mabe and Amin, 2002). The total value of the scholarly journals market is estimated to be US$6 billion per annum for STM (scientific, technical and medical) publishing alone. Universities spend between 0.5 per cent and 1 per cent of their budgets on journal subscriptions (Phillips, 2014: Chapter 5, this volume).

And here are some of the qualitative dimensions of the business of academic and scientific knowledge-making: the process of publication is an integral aspect of the business of knowledge-making. Far from being a neutral conduit for knowledge, the publication system defines the social processes through which knowledge is made, and gives tangible form to knowledge.

This chapter takes the academic journal as its reference point because changes in the journals system are symptoms of, and catalysts for, transformations that are underway in contemporary knowledge ecologies. In it, we examine changes occurring in the form of the academic journal in a moment of enormously uncertain, unsettling and perhaps also exciting times. We look at seismic stresses in the workings of the academic journal, and analyse these for signs of a deeper epistemic disruption.

But, first, to define ‘knowledge’. What do we mean by specifically scientific, academic or scholarly knowledge? After all, people have a wide range of ways of ‘knowing’ in everyday life which do not have the credibility of peculiarly academic knowledge. What is out of the ordinary about academic or scholarly ways of knowing? Academic knowledge has an intensity of focus and a concentration of intellectual energies that is different from ordinary, everyday, common-sense or lay knowing. It relies on the ritualistic rigour and accumulated wisdoms of disciplinary communities and their practices. It entails, in short, a kind of systematicity that does not exist in casual experience. Husserl draws the distinction between the ‘lifeworld’ of everyday, lived experience and what is ‘transcendental’ about ‘science’ (Cope and Kalantzis, 2000; Husserl, 1954 [1970]). The transcendental of academic and scholarly knowledge stands in contradistinction to the common-sense knowing of the lifeworld, which by comparison is relatively unconscious and unreflexive. Academic and scholarly knowledge sets out to comprehend and create meanings in the world which extend more broadly and deeply than the everyday, amorphous pragmatics of the lifeworld. Such knowledge is systematic, premeditated, reflective, purposeful, disciplined and open to scrutiny by a community of experts. Science is more focused, and harder work, than the knowing in and of the lifeworld (Kalantzis and Cope, 2012b).

The knowledge representation process is integral to the making of this peculiarly academic, scientific and scholarly knowledge. It is central to what we want to call epistemic design – a process that, we want to argue, has three representational moments.

Available designs of knowledge

The first aspect of epistemic design is what we would call ‘available designs’ (Cope and Kalantzis, 2000; Kress, 2000). The body of scholarly literature – the two million or so scholarly articles published each year and the hundreds of thousands of books – is the tangible starting point of all knowledge work. These representational designs work at a number of levels – at one level they are the tangible products of textual practices in which scholars describe, report, clarify concepts and argue to rhetorical effect. These designs also operate intertextually. No text stands alone because it draws upon and references other texts by way of conceptual distinction, or accretion of facts, or agreement on principle. In these and other ways, every text is integrally interconnected with other texts within evolving bodies of knowledge. These representational designs are the fundamental basis of all academic and scholarly knowledge work. They give tangible form to fields of interest. They are found objects that precede all new intellectual work and new knowledge representation.

Designing knowledge

The second aspect is the process of ‘designing’, or new knowledge representation. Immediately, designing uses the intellectual resource that is to be found in available knowledge designs. The knowledge worker starts with the textual and intertextual morphology of these works, or the genres of academic knowledge representation. In addition, these works communicate substantive knowledge in a field. In these ways, the knowledge designer draws upon available designs as raw materials. They use already-represented knowledge or found knowledge objects as the basis for their new work. However, more than reproduction or replication of these available designs, the act of designing is the stuff of resynthesis. These practices involve certain kinds of knowledge representation – modes of argumentation, forms of reporting, descriptions of methods and data, ways of supplementing extant data, linking and distinguishing concepts, and critically reflecting on old and new ideas and facts. There is no knowledge-making of scholarly relevance without the representation of that knowledge. And that representation happens in a community of practice – with collaborators who co-author or comment upon drafts, with journal editors or book publishers who review manuscripts and send them out to referees, with referees who evaluate and comment, followed by the intricacies of textual revision, checking, copy-editing and publication. Knowledge contents and the social processes of knowledge representation are inseparable.

The designed: new knowledge becomes integrated into a body of knowledge

Then there is a third aspect of the process – ‘the (re)designed’ – when a knowledge artefact joins the body of knowledge. Private rights to ownership are established through publication. These do not inhere in the knowledge itself, but in the text which represents that knowledge (copyright) or through the invention that the representation describes (patents). Moral rights to attribution are established even when default private intellectual property rights are foregone by attaching a ‘commons’ licence. Meanwhile, copyright licences mostly allow quoting and paraphrasing in the public domain for the purposes of discussion, review and verification, as matters of ‘fair use’. This guarantees that a text – privately owned at the point of its creation by default – can be incorporated into a body of public knowledge and credited via citation. This is the point at which the process of designing metamorphoses into the universal library of knowledge, the repository of publicly declared knowledge, deeply interlinked by the practices of citation. At this point, the knowledge design becomes an ‘available design’, absorbed into the body of knowledge as raw materials for others in their design processes.

Of course, scholarly knowledge-making is by no means the only secular system of systematically validated knowing in modern societies. Media, literature and law all have their own design and review protocols. In this chapter, however, we want to focus specifically on the knowledge systems of science and academe as found in the physical sciences, the applied sciences and the professions, the social sciences, the liberal arts and the humanities. We are interested in the means of production of this form of knowledge, where the textual and social processes of representation give modern knowledge its peculiar shape and form (Gherab Martín and González Quirós, 2014: Chapter 4, this volume).

Forces of epistemic disruption

Our schematic outline of the knowledge representation processes – available designs/designing/the designed – could be taken to be an unexceptional truism but for the extraordinary social and epistemic instability of this moment. This chapter takes journals as a touchstone as it explores the dimensions of epistemic change – some well underway, others merely signs of things to come. What follows are some of the roots of epistemic shift.

Disruption 1: publishing technologies

The most visible force of epistemic disruption is technological. An information revolution has accompanied the digitization of text, image and sound and the sudden emergence of the Internet as a universal conduit for digital content. However, this information revolution does not in itself bring about change of social or epistemic significance. In the case of academic publishing, for instance, the Internet-accessible PDF file makes journal articles widely and cheaply accessible. But this form simply replicates the production processes and social relations of the print journal: a one-way production process which ends in the creation of a static, stable, page-bound object restricted to text and static image. This change is not enough to warrant the descriptor ‘disruptive’. This technological shift does not in itself produce a qualitative change in the social processes and relations of knowledge production.

There is no deterministic relationship, in other words, between technology and social change. New technologies can be used to do old things. In fact, in their initial phases, new technologies more often than not are simply put to work to do old things – albeit, perhaps, somewhat more efficiently. However, technological change can also create new openings for essentially social affordances. Frequently, this happens in ways not even anticipated by the designers of those technologies.

So what is the range of affordances in digital technologies that open new possibilities for knowledge-making? We can see glimpses of possible new and more dynamic knowledge systems, not yet captured in the mainstream academic journal. For instance, in contrast to texts that replicate print and that are ordered using typographic mark-up, we can envisage more readily searchable and data-mineable texts structured with semantic mark-up (Phillips, 2014: Chapter 5, this volume). In contrast to knowledge production processes which force us to represent knowledge on the page, restricting us to text and static image, we can envision a broader, multimodal body of publishable knowledge with material objects of knowledge that could not have been captured in print or its digital analogue: datasets, video, dynamic models, multimedia displays. Things that were formerly represented as the external raw materials of knowledge can now be represented and incorporated within the knowledge text. And in contrast to linear, lock-step modes of dissemination of knowledge (Word to InDesign to frozen PDF), we can see the potential for scholarly knowledge in the more collaborative, dialogical and recursive forms of knowledge-making already found in less formal digital media spaces such as wikis, blogs and other readily-accessible self-managed website-based content systems. Most journals are still making PDFs, still bound to the world of print-lookalike knowledge representation, but a reading of technological affordances tells us that we don’t have to replicate traditional processes of knowledge representation – digital technologies allow us to do more than that. Some publishers are beginning to experiment with new forms of article production (Zudilova-Seinstra et al., 2014: Chapter 15, this volume). Others among us see huge and as yet unrealized potential for a new generation of ‘semantic publishing’ technologies (Cope et al., 2011).
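
To make the contrast between typographic and semantic mark-up concrete, here is a minimal, purely illustrative sketch; the element names are invented for the example rather than drawn from any particular publishing schema.

```python
# Illustrative only: the same sentence marked up typographically and
# semantically. Typographic mark-up records appearance; semantic mark-up
# names what each element is, which is what makes a text data-mineable.
import xml.etree.ElementTree as ET

# Typographic: a machine cannot tell what the italicised phrase denotes.
typographic = "<p>We cultured <i>Escherichia coli</i> at <b>37</b> degrees.</p>"

# Semantic: invented element names standing in for a publishing schema.
semantic = """<finding>
  <organism>Escherichia coli</organism>
  <temperature unit="celsius">37</temperature>
</finding>"""

root = ET.fromstring(semantic)
print(root.findtext("organism"))             # Escherichia coli
print(root.find("temperature").get("unit"))  # celsius
```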

Disruption 2: the economics of publishing

The second item on our list of potential disruptions is the economics of production. With the rise of the Internet, we have become accustomed to getting a wealth of electronic information for free. Of course, it is not really free because it takes human effort to create the content and physical infrastructure to manufacture, transmit and render the content – computers and storage devices and transmission networks. In reality, we have got used to a system of cross-subsidy, a kind of information socialism within a market economy. Wikipedia content is free because its authors donate their time and so must have other sources of income. Searching through Google is free because the company has copied other people’s content without permission and without payment, and has then made a business out of this by juxtaposing targeted advertising – as little as 13 per cent of a Google search page comprises non-commercial search results (Harris, 2013). Open access academic journal content is free because academics have taken on publishing as an additional task and universities pay academics’ salaries. This represents a profound shift in our expectations about knowledge markets, where printed content has traditionally been sold and bought. Today, however, reaching a journal article on the Internet for which we do not have subscription access, and which costs US$30 or US$50 to view, breaks the norm of information socialism to which the Internet has recently accustomed us.

The rise of open access journals is but one symptom of a broader transition. It is estimated that approximately 20 per cent of peer-reviewed articles are published in open access formats (Willinsky and Moorhead, 2014: Chapter 8, this volume). These journals rely on the unpaid labour of scholars assuming the role of amateur publisher. Another symptom is the increasingly prevalent practice of posting pre-prints to discipline repositories. Informal pre-publication is eroding the significance of the post-publication text as both authors and readers find the immediacy of open discipline-based repositories more powerful and relevant than eventual publication. The ArXiv repository in high-energy physics is a case in point (Ginsparg, 2007). In some areas, conference proceedings are becoming more important than journal articles for their immediacy – computer science is a good example of this. In other areas, such as economics, where macroeconomic realities can change rapidly, reports are becoming more important than journals. And, in almost every discipline, academic authors and, increasingly, the institutions for which they work are insisting upon the right to post their published articles to institutional repositories or personal websites, either in typeset or original manuscript form (Shreeves, 2014: Chapter 12, this volume). More and more, scholars are taking it upon themselves to do this, legally or illegally, with or without reference to the publishing agreements they have signed. This trend accelerates as sites such as Academia.edu (http://www.academia.edu/) and ResearchGate (http://www.researchgate.net/) offer new opportunities for self-archiving. Bergstrom and Lavaty report using an Internet search to turn up freely available versions of 90 per cent of articles in the top 15 economics journals (Bergstrom and Lavaty, 2007). Similarly, Ginsparg (2007) reports that over one-third of a sample of articles from prominent biomedical journals was to be found at non-journal websites.

Disruption 3: the politics of knowledge

Then there is a new and vigorous politics of knowledge. For some time, the open access movement has argued that work that has been created as a by-product of massive public investment, or investment on the part of foundations, should as a matter of principle be made publicly accessible (Jackson and Richardson, 2014: Chapter 9, this volume). This case has now become a frequent policy refrain of the political class.

In the United States, the White House Office of Science and Technology Policy announced a new policy in February 2013, designed to increase access to the results of federally funded scientific research (White House Office of Science and Technology Policy, 2013). This was prompted in part by a We the People petition, signed by 65,000 people, asking for expanded public access to the results of taxpayer-funded research. In a public statement, the White House said that:

the Obama Administration is committed to the proposition that citizens deserve easy access to the results of scientific research their tax dollars have paid for. That’s why, in a policy memorandum released today, OSTP Director John Holdren has directed Federal agencies with more than $100 M in R&D expenditures to develop plans to make the published results of federally funded research freely available to the public within one year of publication.

(Ibid.)

Legislation was also introduced to Congress in the form of the Fair Access to Science and Technology Research Act (FASTR) (Harvard Open Access Project, 2013).

In the United Kingdom, the report of a committee chaired by Janet Finch (the Finch Report) recognized that we are in a ‘period of transition to open access publishing worldwide’. In order to accelerate this process, the committee recommended ‘a clear commitment to support the costs of an innovative and sustainable research communications system, with a clear preference for publication in open access or hybrid journals’. In lieu of traditional subscription, the new resourcing model would involve central article-processing charges (APCs), funded by universities through campus-based open access funds or by research funders who choose to allow or mandate a budget line item for publication fees. The Finch Committee estimated that this would require an additional £50–60 million a year in expenditure in the UK higher education sector (Finch, 2012).

Disruption 4: more distributed knowledge-making

Fourth in our list of disruptions is the broadening range of sites of knowledge-making. Universities and conventional research institutes today face significant challenges to their historical role as producers of socially privileged knowledge. More knowledge is being produced by corporations than was the case in the past. More knowledge is being produced in hospitals, in schools, in lawyers’ offices, in business consultancies, in local government, and in amateur associations whose members are tied together by common interest. More knowledge is being produced in the networked interstices of the social web, where knowing amateurs mix with academic professionals, in many cases without distinction of rank. In these places, the logic and logistics of knowledge production are disruptive of the traditional values of scholarly work – the for-profit, protected knowledge of the corporation; the multimodal knowledge of audio-visual media; and the ‘wisdom of the crowd’ which ranks knowledge and makes it discoverable through the Internet according to its popularity. If one wishes to view these developments normatively, one could perhaps describe them as amounting to a democratization of knowledge. Or we could simply make this empirical observation: knowledge is being made in more widely dispersed institutional sites.

Disruption 5: the globalization of knowledge and unsustainable geographic inequities

Next in the list of disruptions is a geography of knowledge-making which unconscionably and unsustainably favours rich countries over poor, anglophone countries over predominantly non-English-speaking countries, intellectual centres over peripheries. The situation does not yet show significant signs of changing, but surely it must. For instance, despite the substantial growth in open access journals in Latin America, these journals have not fared well when it comes to visibility in mainstream, international bibliographical databases and citation analyses (Delgado-Troncoso and Fischman, 2014: Chapter 16, this volume). The position of academic publishing in Africa is bleak, and the representation of articles by Africa-based authors in mainstream international journals fell between 1995 and 2005 (Smart and Murray, 2014: Chapter 17, this volume). The impact of academic journals in China – even though they are going through a phase of burgeoning growth – has yet to reach the wider world of ideas (Wu and DongFa, 2014: Chapter 18, this volume).

Disruption 6: interdisciplinarity

Sixth is the disruptive force of interdisciplinarity. Journals have traditionally been definers of disciplines or subdisciplines, delineating the centre and edges of an area of inquiry in terms of its methodological modes and subject matter. The epistemic modes that gave shape to the heritage academic journal are being broken apart today as we address the large challenges and opportunities of our times – sustainability, globalization, diversity and learning, to name just a few expansive items on the contemporary intellectual agenda. Interdisciplinary approaches often need to be applied for reasons of principle, to disrupt the habitual narrowness of outlook of within-discipline knowledge work, and to challenge the ingrained, discipline-bound ways of thinking that may produce occlusion as well as insight. Interdisciplinary approaches also thrive in the interface of disciplinary and lay understandings. They are needed for the practical application of disciplined understandings to the actually-existing world. Robust applied knowledge demands an interdisciplinary holism, the broad epistemological engagement that is required simply to be able to deal with the complex contingencies of a really-integrated universe. However, conventional discipline-defining journals are, in their essential boundary-drawing logic, not well suited to this challenge.

Disruption 7: knowledge-producing, participatory cultures

There is one final disruptive force, potentially affecting the social processes of knowledge-making themselves. If trends can be read into the broader shifts in the new, digital media, they stand to undermine the characteristic epistemic mode of authoritativeness associated with the heritage scholarly journal. The historical dichotomy of author and reader, creator and consumer is everywhere being blurred. Authors blog, readers talk back, bloggers respond. Wiki users read, but also intervene to change the text if and when they feel they should. Game players become participants in narratives. iPod users create their own playlists. Digital television viewers create their own viewing sequences. Data presentations are not static, but are manipulable by users. These are aspects of a general and symptomatic shift in the balance of agency, in which a flat world of users replaces a hierarchical world of culture and knowledge where a few producers create content to transmit to a mass of receivers (Kalantzis and Cope, 2012a). What will academic journals be like when they escape their heritage constraints? There will be more knowledge collaborations between knowledge creators and knowledge users, in which perhaps user commentary can become part of the knowledge itself. Knowledge-making will escape its linear, lock-step, beginning-to-end process. The end point will not be a singular version of record – it will be something that can be re-versioned as much as needed. Knowledge-making will be more recursive, responsive and dynamic. Above all, it will be more collaborative and social than it was in an earlier modernity which paid obeisance to the voice of the heroically individual author.

These represent some of the potentially profound shifts that may occur in our contemporary knowledge regime, as reflected in the representational processes of today’s academic journal. These shifts could portend nothing less than a revolution in the shape and form of academic knowledge ecologies. But for such change to occur, first something may have to break. Using our knowledge design paradigm, we will look at some specific fissures at three points of potential break in today’s academic knowledge systems: in the availability of designs of knowledge, in the design process, and in the ways in which we evaluate the significance of already-designed knowledge. At each of these knowledge-making moments we will examine points at which fault lines are already visible, signs perhaps of imminent breaking points. We will examine open access versus commercial publishing (available designs), the peer-review system (designing) and citation counts as a measure of scholarly value (the (re)designed).

Breaking point 1: how knowledge is made available

Academic knowledge today – manifest in the textual resources that frame scholarly work – is made available in three principal resourcing modes (with several intermediate hybrids): at a price paid by the purchaser of content; for free; and, in a rapidly emerging new model, at a price paid by the author.

Resourcing mode 1: knowledge for sale by content purchase

Historically, scholarly journals have been resourced by subscriptions, mostly paid by libraries, but also to some degree by individual subscriptions or subscriptions associated with membership of a scholarly society. Most scholarly journal publishing still happens in this mode – approximately 80 per cent if one reverses Willinsky and Moorhead’s estimate of 20 per cent open access (Willinsky and Moorhead, 2014: Chapter 8, this volume). Some of the players in the pay-to-access-content mode are small publishers or associations which operate on an essentially self-sustaining model. However, the large journal publishers make up the bulk of the journals market. Holding a monopoly position on the titles of journals, they are able to charge what are often considered to be excessive prices to university libraries for subscriptions, enjoying unusually high profit margins in the otherwise highly competitive media communications sector (Morgan Stanley, 2002). The resulting profits are a consequence in part of artificial scarcity created around the prestige and authoritativeness of well-established and well-positioned journals. Exploiting this position is particularly problematic when journal companies rely on the unpaid authoring and refereeing labour of academics – this is what gives a journal its quality, not the mechanics of production and distribution.

Here are the results of this system. The Economist reported that, in 2012, Elsevier, a Dutch firm and the world’s biggest journal publisher, had a margin of 38 per cent on revenues of US$3.2 billion. Springer, a German firm that is the second biggest journal publisher, made 36 per cent on sales of US$1.1 billion in 2011, the most recent year for which figures are available (as at 4 May 2013). As if there had been no global financial crisis, the revenues of the three largest scholarly publishers, Elsevier, Springer and Wiley, grew by 11.7 per cent between 2008 and 2011, from US$4.7 billion to US$5.3 billion, and their profits grew by 17 per cent, from US$1.6 billion to US$1.9 billion (Kakaes, 2012; Price, 2012). The last decade has also been a time of consolidation via mergers and acquisitions – Elsevier controls 2,211 journals, Springer 1,574, Blackwell 863, and John Wiley 776 (McCabe et al., 2006); Blackwell and Wiley subsequently merged in 2007. These big three now publish 42 per cent of journal articles (Monbiot, 2011). ‘The current enterprise,’ concludes The Economist, ‘selling the results of other people’s work, submitted free of charge and vetted for nothing by third parties in a process called peer review, has been immensely profitable’ (4 May 2013).

Key to these profits has been to charge libraries monopoly prices for subscriptions. The average annual subscription price of a chemistry journal in 2007 was US$3490, of a physics journal, US$3103, of an engineering journal, US$1919 and of a geography journal, US$1086 (Orsdel and Born, 2008). In January 2006, the editor of the Journal of Economic Studies resigned in protest at his journal’s US$9859 per annum subscription rate (Orsdel and Born, 2006). Elsevier’s Biochimica et Biophysica Acta costs US$20,930 per year (Monbiot, 2011). The prices of journals have risen rapidly over two decades. Between 1984 and 2001, during which time the consumer price index increased only by 70 per cent, the subscription rates of economics journals, for instance, rose by 393 per cent, physics journals by 479 per cent and chemistry journals by 615 per cent (Edlin and Rubinfeld, 2004). Journal prices increased by 8 per cent in 2006 and by more than 9 per cent in 2007. Although learned societies as a general rule charge lower subscription prices, since 1989 prices for US society journals have increased by 7.3 per cent on average annually, well above inflation, and price increases have continued in recent years – 7.5 per cent in 2011 and 5.8 per cent in 2012 – even as library budgets have shrunk (Tillery, 2012).

‘Academic publishers make Murdoch look like a socialist’, says George Monbiot in the headline of an article in the UK’s Guardian newspaper. ‘You might resent Murdoch’s paywall policy, in which he charges £1 for 24 hours of access to The Times and The Sunday Times. But at least in that period you can read and download as many articles as you like. Reading a single article published by one of Elsevier’s journals will cost you $31.50. Springer charges €34.95, Wiley-Blackwell, $42. Read 10 and you pay 10 times. And the journals retain perpetual copyright’ (Monbiot, 2011).

Large publishing conglomerates have increased their subscription rates faster than small academic publishers, learned societies and non-profits. On average in 2005, commercial publishers charged university libraries several times as much per page as non-profit publishers (Bergstrom and Bergstrom, 2006). In an analysis of approximately 5000 journals, Bergstrom and McAfee created a value-for-money ranking system (http://www.journalprices.com), coming to the conclusion that the six largest STM publishers mostly fall into the bad value category (74 per cent on average), while an extremely low percentage of titles from the non-profits are rated as bad value (14 per cent) (Orsdel and Born, 2006). McCabe et al. (2006) found the average ratio of 1990–2000 prices for non-profits and for-profits to be 3.77 and 2.03 respectively.

The consequence of this situation has been to create what is often referred to as the ‘journals crisis’ (Creaser, 2014: Chapter 13, this volume). Libraries are simply unable to afford these price hikes. The average total library budget grew at only 4.3 per cent per annum between 1991 and 2002, or 58 per cent in total, while journal prices grew several times faster (Edlin and Rubinfeld, 2004). This has left less money for monograph purchases, journals from smaller publishers and new journal titles. The protests from libraries have been loud. In October 2007, the Max Planck Institute, a leading European research institute, cancelled its subscription to 1200 Springer journals, not negotiating a new agreement until February 2008 (Orsdel and Born, 2008). According to the Association of Research Libraries, between 1986 and 2000, libraries cut the number of monographs they purchased by 17 per cent, but cut the number of journal titles by only 7 per cent (Edlin and Rubinfeld, 2004).

Alongside price hikes for subscriptions, ‘bundling’ of multiple titles into larger packages has also had a negative effect, tending to squeeze small and non-commercial publishers out of library purchases. Southern Illinois University decided to opt out of its bundling deals as a consequence of their increasing cost, which consumed 24 per cent of its library’s collection budget in 2004 and rose to 33 per cent by 2008 (Tillery, 2012).

It might have been expected that the move to electronic subscriptions would have opened up cheaper access options. However, a case study of ecology journals showed no reduction in prices for online-only journals (Bergstrom and Bergstrom, 2006). Discounts for online-only subscriptions average only 5 per cent, and some of the largest publishers offer no discount at all (Dewatripont et al., 2006). Publishers, in other words, are still basing their charges on the economics of traditional print publishing. Not only are their profits high, their cost structures are also high, reflecting perhaps a complacency which comes with their monopoly over prestige titles. The cost of producing an article is estimated to be between US$3000 and US$4000 for commercial journal publishers (Clarke, 2007; Phillips, 2014: Chapter 5, this volume). This is inexcusably high when the primary work of quality assessment and content development is done by unpaid academic authors and peer reviewers. And for this high price, the publication process often remains painfully slow (compared, for instance, with the speed of new media spaces), and the final product is not particularly visible to Internet searching because it is hidden behind subscription walls.

This situation has prompted a widespread revolt in recent years. In 2012, British mathematician Tim Gowers issued a manifesto, ‘The Cost of Knowledge’, enjoining colleagues to sign up to a pledge that they ‘won’t publish, won’t referee, won’t do editorial work’ for Elsevier-published journals (Gowers, 2012). A year later, some 13,000 academics had signed on (http://thecostofknowledge.com/). Meanwhile, the Association of American Publishers was fighting a battle against the open access movement, supporting the Research Works Act, a bill introduced in the US Congress in December 2011 by Representatives Carolyn Maloney (Democrat, New York) and Darrell Issa (Republican, California). Had it become law, the Act would have prohibited the government from mandating open access. Elsevier, it was revealed, had made campaign donations to Maloney, Issa and 29 other members of Congress. Maloney and Issa subsequently withdrew their support for the Research Works Act (Kakaes, 2012), partly as a consequence of widespread protest and the We the People petition that prompted the White House announcement in support of open access.

Also powerfully in the news has been the story of computer programmer and activist Aaron Swartz, who was arrested in 2011 after downloading millions of academic articles from the JSTOR digital library, using his MIT library account. He was subsequently charged with thirteen counts of computer fraud, which could have resulted in a prison term of up to 35 years. He committed suicide in January 2013. In the days after Swartz’s death, the entire board of the Journal of Library Administration resigned, citing ‘a crisis of conscience about publishing in a journal that was not open access’. Then, in March 2013, the American Library Association posthumously awarded Swartz the ‘James Madison Freedom of Information Award’, citing his work as ‘an outspoken advocate for public participation in government and unrestricted access to peer-reviewed scholarly articles’. Demands have since been made under freedom of information laws that Secret Service files related to the charges against Swartz be released. In response to a judge’s ruling that they should be released, MIT intervened, citing fears for the safety of employees who may have provided information to federal investigators in the lead-up to laying charges against Swartz.

Resourcing mode 2: knowledge for free

The open access rejoinder to the commercial journal publishers has been strident and eloquent. ‘An old tradition and a new technology have converged to make possible an unprecedented public good’ (The Budapest Open Access Initiative, 2002). ‘The Internet has fundamentally changed the practical and economic realities of distributing scientific knowledge and cultural heritage’ (Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities, 2003). The open access claim that academic knowledge should be made freely available through the Internet has been backed by cogent and at times impassioned argument (Bergman, 2006; Bethesda Statement on Open Access Publishing, 2003; Kapitzke and Peters, 2007; Peters and Britez, 2008; Willinsky, 2006a; Willinsky, 2006b).

John Willinsky speaks of the ‘access principle’ (Willinsky and Moorhead, 2014: Chapter 8, this volume). This represents ‘a commitment that the value and quality of research carries with it a responsibility to extend the circulation of such work as far as possible and ideally to all who are interested in and who might profit by it’ (Willinsky, 2006a: xii). And in the words of Stevan Harnad:

some think the most radical feature of post-Gutenberg journals will be the fact that they are digital and online, but that would be a much more modest development if their contents were to continue to be kept behind financial firewalls, with access denied to all who cannot or will not pay the tolls … [T]he optimal and inevitable outcome – for scientific and scholarly research, researchers, their institutions and funders, the vast research and development industry, and the society whose taxes support science and scholarship and for whose benefits the research is conducted – will be that all published research articles will be openly accessible online, free for all would-be users webwide.

(Harnad, 2014: Chapter 7, this volume)

These arguments have been supported by practical initiatives to build open access infrastructure. Prominent among these are the Open Journal Systems (OJS) software created by the US–Canadian Public Knowledge Project (http://pkp.sfu.ca/) and the DSpace open access repository software led by MIT (http://www.dspace.org/). The online Directory of Open Access Journals (http://www.doaj.org/) indexes many thousands of open access journals, and Open J-Gate (http://www.openj-gate.org) lists open access articles across more than 3000 journals. The Open Archives Initiative (http://www.openarchives.org) develops and promotes metadata standards to facilitate the accessibility of open access content.
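
By way of illustration, the following is a minimal sketch of what harvesting metadata through the Open Archives Initiative’s OAI-PMH protocol looks like in practice; the repository endpoint is a hypothetical placeholder, although the protocol parameters (the ListRecords verb and the oai_dc Dublin Core metadata format) are standard.

```python
# Minimal sketch of an OAI-PMH metadata harvest. The endpoint is a
# placeholder; a real repository publishes its own OAI-PMH base URL.
import urllib.request
import xml.etree.ElementTree as ET

BASE_URL = "https://repository.example.org/oai"  # hypothetical endpoint
request_url = BASE_URL + "?verb=ListRecords&metadataPrefix=oai_dc"

with urllib.request.urlopen(request_url) as response:
    tree = ET.parse(response)

# Dublin Core titles are returned as <dc:title> elements; the namespace
# URI below is fixed by the Dublin Core element set specification.
DC = "{http://purl.org/dc/elements/1.1/}"
for title in tree.iter(DC + "title"):
    print(title.text)
```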

Open access comes in several forms. In addition to ‘core open access journals’ (Clarke, 2007) offering an unqualified form of access now classified as ‘gold open access’, there are many somewhat qualified varieties of access, including delayed open access, in which articles are made freely available after a period of time, and hybrid open access journals, in which some authors or the sponsors of their research may choose to pay an additional fee to have their article available for free. There are also forms of access classified as ‘green’, where publishers allow institutional archiving, or archiving in a central repository such as PubMed Central, either of the publisher’s typeset version or of the author’s final manuscript. The SHERPA RoMEO initiative maintains a database of publishers and journals categorized by the kind of access provided (http://www.sherpa.ac.uk/romeo/).

Meanwhile, a succession of institutional mandates now supports one variety of open access or another. In December 2007, the US National Institutes of Health (NIH), which dispense some US$29 billion in grants resulting in some 80,000 articles annually, required grantees to provide open access to peer-reviewed articles within one year of publication. In January 2008, the European Research Council announced that grant recipients must post articles and data within six months of publication. There has also been action at the university level. In 2007, Harvard University’s Faculty of Arts and Sciences voted unanimously to require faculty to retain rights to post copies of published articles in the university’s institutional repository, a proposal which was adopted as university policy in 2008. Cornell, Dartmouth, MIT and the University of California, Berkeley, followed in 2008. In the same year, 791 universities in 46 European countries voted unanimously to demand open access to the results of publicly-funded research (Orsdel and Born, 2008). University libraries have also been organizing more broadly in support of open access alternatives, with 56 universities signing up to the Coalition of Open Access Policy Institutions (COAPI) in the first year after its founding in July 2012. The COAPI charter is for universities to work together to widen the scope of open access. However, in practice, scholars apply for waivers or simply ignore these rulings when publishers do not allow institutional archiving.

In this context, repositories of various sorts are growing rapidly, both at an institutional level and by discipline, now totalling an estimated 2200 (Shreeves, 2014: Chapter 12, this volume). By mid-2013, 2.8 million articles were archived in PubMed Central, developed by the US National Library of Medicine (http://www.pubmedcentral.nih.gov/). The arXiv repository in physics, mathematics, computer science, quantitative biology and statistics (http://arxiv.org/) contained 850,000 articles. Research Papers in Economics contained 1.4 million items (http://repec.org/). To a significant degree, the development of these repositories involves the migration of content, legally and sometimes illegally, which has already been published or which is subsequently published in commercial journals (Bergstrom and Lavaty, 2007).

The shift to open access scholarly journals is paralleled in many areas of cultural production and intellectual work in the era of new digital media. Yochai Benkler speaks of a burgeoning domain of ‘social production’ or ‘commons-based peer production’ in which ‘cooperative and coordinate action carried out through radically distributed, nonmarket mechanisms … does not depend on proprietary strategies’ (Benkler, 2006: 18–19). Computers and network access have become cheap and ubiquitous, placing ‘the material means of information and cultural production in the hands of a significant fraction of the world’s population’ (ibid.). Benkler considers this to be no less than ‘a new mode of production emerging in the middle of the most advanced economies in the world’, in which ‘the primary raw materials in the information economy, unlike the industrial economy, are public goods – existing information, knowledge and culture’ (ibid.). Benkler claims that:

[the] emergence of a substantial component of nonmarket production at the very core of our economic engine – the production and exchange of information … suggests a genuine limit on the extent of the market … [and] a genuine shift in direction for what appeared to be the ever-increasing global reach of the market economy and society in the past half century.

(Ibid.)

Wikipedia is a paradigmatic case of social production. Print encyclopaedias were big business. For many households in the era of print literacy, this paper monster was their largest knowledge investment. Encyclopaedia entries were written by invited, professional experts. Wikipedia, by contrast, is free. It is written by anyone, knowledge professional or amateur, without pay and without distinction of rank. Academic knowledge does not fit the Wikipedia paradigm of social production and mass collaboration in a number of respects, including the non-attribution of authorship and the idea that any aspiring knowledge contributor can write, regardless of formal credentials. What it shares with the majority of open access journals is the unpaid, non-market mode of production.

Culture and information are taken out of the market economy in the paradigm of social production by theoretical fiat of their unique status as non-rivalrous goods, or goods where there is no marginal cost of providing them to another person. Lawrence Lessig quotes Thomas Jefferson:

He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature.

(Lessig, 2008: 290)

In a similar manner, John Willinsky quotes economist Fritz Machlup: ‘If a public or social good is defined as one that can be used by additional persons without causing any additional cost, then knowledge is such a good of the purest type’ (Willinsky, 2006a: 9). Non-rivalrous goods are like the lighthouse, providing guidance to all ships equally, whether few or many ships happen to pass (Willinsky, 2006b). Michael Peters quotes Joseph Stiglitz: ‘Knowledge is a public good because it is non-rivalrous, that is, knowledge once discovered and made public, operates expansively to defy the normal law of scarcity that governs most commodity markets’ (Peters and Britez, 2008: 15). Lessig concludes:

The system of control we erect for rivalrous resources (land, cars, computers) is not necessarily appropriate for nonrivalrous resources (ideas, music, expression) … Thus a legal system, or a society generally, must be careful to tailor the kind of control to the kind of resource … The digital world is closer to the world of ideas than the world of things.

(Lessig, 2001: 95, 116)

The peculiar features thus ascribed to knowledge, culture and ideas become the basis for a new and burgeoning ‘gift economy’ outside of the market (Raymond, 2001). Bauwens describes the consequent development of a ‘political economy of peer production’ as the ‘widespread participation by equipotential participants’, a ‘third mode of production’ different from for-profit or public production by state-owned enterprises. ‘Its product is not exchange-value for a market, but use-value for a community of users … [who] make use-value freely accessible on a universal basis, through new common property regimes’ (Bauwens, 2005). Again, the sites of academic knowledge production are not like this in some important respects, for they are primarily not-for-profit or state-owned spaces, and they do not, by and large, use or need to use the new common property regimes to which Bauwens refers.

However, one thing does carry over into academic knowledge from the political economy of peer-to-peer production – the idea that knowledge should be free. With this comes a series of common assumptions about the nature of non-market motivations. In the domain of social production, social motivations displace monetary motivations (Benkler, 2006: 93–4). Or, in Opderbeck’s words, ‘Traditional proprietary rights are supposed to incentivize innovation through the prospect of monopoly rents. The incentive to innovate in a purely open source community, in contrast, is based on “reputational” or “psychosocial” rewards’ (Opderbeck, 2007: 126). Translated into academe, Willinsky argues that ‘the recognition of one’s peers is the principal measure of one’s contribution to a field of inquiry’. Less charitably, he calls this an ‘ego economy’ driven by ‘the necessary vanity of academic life’ (Willinsky, 2006a: 20–2).

There are, however, some serious theoretical as well as practical difficulties with these ideas of social production and the creation of non-rivalrous goods. We will consider these before returning to the question of the alternative ways in which scholarly journals can be made, and made available. On the question of ‘social production’, this new economy is also a kind of anti-economy. With its every inroad, it removes the economic basis for knowledge and culture-making as a form of employment. Tens of thousands of people used to work for encyclopaedia publishers, even if some of the jobs, such as that of the proverbial door-to-door salesperson, were less than ideal. Everybody who writes for Wikipedia needs to have another source of income. What would happen to the global scholarly publishing industry, reported in 2004 to support 250,000 employees worldwide with a US$65 billion turnover (Peters, 2007), if academics assumed collective and universal responsibility for self-publishing? What would happen to the scholarly associations and research institutes that have historically gained revenue from the sale of periodicals and books? An ironic consequence of a move to social production in the much-trumpeted era of the knowledge or creative economy is to value knowledge-making and creativity at zero, while coal and corn still cost whatever they do per tonne. How do knowledge workers eat and pay for a place to live? Without doing away with the market entirely, we are consigning a good deal of knowledge work to involuntary volunteerism, unaccounted cross-subsidy, charity or just penury. We know from experience the fate of workers in other domains of unpaid labour, such as the unpaid domestic work of women and carers. Making some kinds of labour free means that they are exploited. In the case of the knowledge economy, the exploiters are the likes of the content hosts, aggregators and search companies who take the unpaid work of social producers and make a fortune from it.

And on the distinction between rivalrous and non-rivalrous goods, the key theoretical problem is that the case rests on the circumstantial aspects of knowledge distribution rather than on the practical logistics of knowledge production. Rivalrous and non-rivalrous goods are equally things that must be made. They cost their makers labour time, time which otherwise could be spent making buildings or food. Ostensibly non-rivalrous goods also need physical spaces, as well as tools, storage devices and distribution networks, all of which have to be made by people who for their practical sustenance need buildings and food. In these fundamental respects, knowledge and cultural goods are no different from any other goods. In fact, knowledge and material domains are never so neatly separable. Buildings and food have design in them (and when we go to architects and restaurants we are in part purchasing intellectual property). Equally, all cultural products have to be made, delivered and rendered in an irreducibly material world, of workspaces and devices and network infrastructures.

Taking this perspective, in the era of digital media we might be witnessing no more than one of the old marvels of industrial capitalism – a technology that improves productivity. In the case of knowledge-making, the efficiencies are great – print encyclopaedias versus Wikipedia, celluloid movies versus digital movies posted to YouTube, print journals versus PDF journal articles – so great, at times, that we can get the impression that costs have reduced to nothing. But they have not. They have only been lowered. So low are these costs at times that we can even afford to make these cultural products in our spare time, and not worry too much about giving away the fruits of our labours to companies who have found ways to exploit them in newly-emerging, parasitical information markets.

Knowledge is a product of human labour and it needs human labour to make it available. There can never be zero cost of production and distribution of knowledge and culture, either in theory or in practice. At most, there are productivity improvements. Far from ushering in a new mode of production, the driving force is more of the same engine that over the past few centuries has made capitalism what it is.

So how do we move forward? In the most general of terms, there are two options. The first is socialism in all sectors. If knowledge and culture are to be free, so too must be coal and corn or buildings and food. Everything has to be free if we are not to advantage the industries of the old economy over those of the new, if we are not to consign knowledge and culture work selectively to the readily exploitable gift economy. The second option is to build an economics of self-sustainable, autonomous cultural production, where there is space for small stallholders (publishers, musicians, writers, knowledge workers). Alternatively, the cross-subsidies need to be made transparent and explicit – including the economics of academic socialism in a mixed economy.

Returning now to the particularities of scholarly journals, no doubt the excessive cost of commercial journal content represents both profiteering on the part of the big publishers and lagging inefficiencies where they have not retooled their fundamental business processes for the digital era. Clarke (2007) estimates that the production cost of a commercial journal article is US$3400, compared with US$730 for an open access article. Van Noorden shows that these costs remained much the same in 2013 (Van Noorden, 2013).

However, even if its cost structures are lower, open access publishing is still bedevilled by problems of resourcing. Where does the US$730 come from to produce the open access article? Without some kind of fee structure, open access publishing has to rely on academic volunteerism or cross-subsidy by taxpayers or fee-paying students who support the university system. John Willinsky (2006a: 191) speaks lyrically of a return to the days when authors worked beside printers to produce their books. However, academics do not have the requisite skills or resources to be publishers. Having to be an amateur publisher adds another burden to an already challenging job. Nor is playing amateur publisher necessarily the best use of time that could otherwise be devoted to research, writing and teaching. Publishing takes a lot of work, specialized work. Someone has to provide the labour time. That time always comes at a direct or indirect cost. The problem with the ethereal ‘reputational’ economy is not that it is without costs, but that it shifts its costs, often silently and unaccountably, to places that are not well prepared to bear them. And it may not be an effective and efficient use of resources – indeed, it could be more costly to do things this way. In other words, there are key questions about the sustainability, equity and, in fact, the openness of open access business models.

Resourcing mode 3: knowledge at a price (again), but this time the author pays

A newer and rapidly growing resourcing model is the ‘article processing fee’, where the author pays for the cost of open access publishing. In a report to the Scholarly Publishing and Academic Resources Coalition (SPARC), Raym Crow perhaps euphemistically calls this a ‘supply-side’ pricing model, as opposed to the demand-side logic of conventional markets (Crow, 2009). In this model the author pays, or the author’s sponsor in the form of a granting agency, or the author’s host institution (Tananbaum, 2010).

The earliest, most successful and now largest of these supply-side journal operations are BioMed Central and Public Library of Science (Willinsky and Moorhead, 2014: Chapter 8, this volume). BioMed Central was founded in the UK as an open access publisher in 2000, introducing author fees in 2002. In 2008, Springer purchased BioMed Central, which by 2013 included 250 journals and had published 150,000 articles. Article processing charges range from US$1300 to US$2300, depending on the journal. Public Library of Science is a non-profit organization launched in 2003, initially funded by US$13 m in foundation grants. By 2013, it consisted of seven journals, publishing over 26,000 articles in 2012. Gross revenue grew 49 per cent in 2011, to US$24.7 million, with expenses in that year of US$20.8 million. Article processing charges in 2013 were US$1350 for PLOS ONE, which offers a lower bar to publication, and between US$2250 and US$2900 for the other six journals. Authors from the very poorest countries could publish at no charge, and those from a second tier of lower-income countries for a US$500 fee. In 2011, the value of partial or full waivers amounted to US$2.2 million. Publication fee discounts were offered to ‘institutional members’.

This resourcing model is yet to take off in the social sciences. Archives of Scientific Psychology was launched by the American Psychological Association in 2013, with a submission fee of US$350 and, if accepted, a publication fee of US$1950. The American Educational Research Association (AERA) launched AERA Open in conjunction with the California publisher Sage, starting in 2014. Article processing charges are set at US$700 for non-members and US$400 for members. AERA Executive Director, Felice Levine, has taken a leading role in conversations about new resourcing models for scholarly societies, historically dependent for their sources of income on journal subscriptions and memberships-with-subscriptions (Levine, 2012). (The lead author of this chapter was Chair of AERA’s Journal Publications Committee when the decision was made to establish AERA Open.)

Other variations on this business model are also emerging. A former PLOS ONE editor founded PeerJ in 2012, publishing its first papers in 2013. PeerJ offers individual ‘memberships’ from US$99 (one publication per year) to US$299 (unlimited publication), or institutional memberships, where the university pays for access to PeerJ services on behalf of its faculty. Individual members must review at least one paper per year (Van Noorden, 2013).

In addition, there is a half-way position between subscription-based and open access journals, often called ‘hybrid open access’. Crow calls this the ‘author discretionary model’ (Crow, 2009). In this case, regular subscription-funded journals make individual articles available through open access if the author pays an open access fee: the Cambridge University Press ‘Open Option’ costs US$2700, ‘Oxford Open’ costs US$3000, ‘Springer Open Choice’ costs US$3000, and over 1000 Elsevier journals offer open access for fees of US$500 to US$5000 per article. In their analysis, Jackson and Richardson argue that this approach has not been particularly successful, and suggest that it is perhaps in decline (Jackson and Richardson, 2014: Chapter 9, this volume).

Despite the apparent success of the author-pays approach in recent years, it is by no means a foregone conclusion that it will become the dominant approach in the future, supplanting content sale and open access alternatives. For a start, there is something immediately counter-intuitive about the author having to do all the work, unpaid, then being required to pay more on top of that to publish, or having to seek out funds so that their institution or funder can pay on their behalf. There is also a question about the nature and depth of the review process; in the case of PLOS ONE, the approach is ‘publish first, judge later’, in the hope that post-publication ratings will perform a quality filtering function to compensate for the lack of rigorous pre-publication review. This initial ‘review lite’ approach supports publication based on the measure of demonstrated ‘competence’, rather than originality or significance. Publishing 23,000 articles in 2012, PLOS ONE is now called a ‘mega journal’, an unfortunate epithet perhaps, conjuring up images of ‘big box’ megastores.

Notwithstanding the beneficence offered to a handful of researchers in very poor countries, one could argue that this variant of open access is another form of socialism for the affluent – if you work as a professor in a big, well-resourced research university and are a recipient of generous research funding, you can more readily arrange the publication of your article. However, if you work in a mostly teaching institution, in the humanities and social sciences, in a country that is not very poor, or in an institution struggling with its budgets, this system may not work so well for you. Already, universities which have set up funds to cover author fees have faced challenges in terms of priorities, selection processes and selection criteria. And many research grant schemes still do not regard publication fees as a legitimate line item in budgets.

Towards sustainable scholarly publishing

How might we develop an economics of sustainability for academic knowledge systems? This is a time of enormously disruptive change in the businesses of knowledge and culture. For scholarly journal publishing, there is no doubt that new models, and new balances between models, need to be developed. There is a case for the development of all three resourcing models, and the recalibration of the balance between models.

In the area of content-for-sale, lightweight, self-sustaining publication funding models can possibly be created. Given today’s digital infrastructure costs, there is no reason why subscription fees or per-article purchase prices should be so high. How many academics would pay (for example) US$10 per year for journal access and publication alerts? How many students would as willingly pay US$1 for an article as they do for a song in the iTunes store? The key to today’s journals impasse may be to develop low-cost digital infrastructures and self-sustaining business models that reduce the costs of inefficient and sometimes profiteering middle people.

There is also room for the exploration of open access models which do not require author fees but which are based on other forms of institutional investment. Here it is important to draw some distinctions between scholarly work and other sources of free content on the Internet. Universities are not like other content creation spaces in the new media in some important respects. They are not like Wikipedia or YouTube insofar as universities are systems of public resourcing and elaborate cross-subsidy whose purpose is to fund the idea-generation process. They are not like public peer-to-peer production insofar as university-based knowledge workers are funded by the public or not-for-profit private institutions that pay their salaries. To this extent, author and institutional involvement in the publication process is justifiable. It is a small step to build funding for specific publication media and services into the infrastructure of universities. This, in fact, may be a new role for university libraries and rejuvenated university presses.

Finally, this last decade has demonstrated that there can be a place for the author-pays resourcing model. The key is to build institutional supports which are equitable for all aspiring authors, no matter their discipline, their institutional base or their geographical location. However, given that the main part of the work – peer review – remains unremunerated, and given the potential efficiencies inherent in cloud-based workflows, surely there is no reason why this should cost any more than US$100 or US$200?

Breaking point 2: designing knowledge credibly

The system of peer review is a pivotal point in the knowledge design process: the moment at which textual representations of knowledge are independently evaluated. Up to this point, knowledge work is of no formal significance beyond the private activities of a researcher or intellectual. Peer review is a critical step towards knowledge becoming socially validated, confirmed as knowledge-of-record and made more widely available.

A key point in our argument about modern knowledge systems is that representations of knowledge are being evaluated, not an object that might itself be called knowledge. Knowledge is not simply made of the stuff that happened in the laboratory, or what was found in the archive, or what transpired in social observation, or what is figured theoretically. Rather, it is what a scholar tells us has happened or was found or transpired. And, adding a further layer of abstraction of representation away from the referent, the person and context of the scholar is removed at the point of evaluation through anonymous review. The text is examined simply as a representation, and the reviewer interpolates hypothetical connections between the representations and possible referents. The reviewer does not know the identity of the author, and thus the location of their work, nor their interests nor motivations. All a reviewer has as s/he evaluates a knowledge representation is what the text itself reveals.

Here are some of the characteristic features of the peer-review system. A journal editor receives a manuscript. They examine the text in order to decide on referees whose expertise, as demonstrated by what they have already published, may be relevant to the content of the article to be reviewed. Reviewers are selected because they are qualified to review – in fact, often more qualified than the editor – and this judgement is based on the fact that the reviewer publishes into a proximal zone of discourse. The key question is not whether they have relevant substantive knowledge, rather whether they will be able to understand the text. Reviewing also spreads the work around, creating a more distributed knowledge system than one that is publisher- or editor-centric. Typically, the identity of the author is removed and the text sent to more than one reviewer. Reviewers are asked to declare conflicts of interest of which the journal editor may be unaware – if they happen to be able to identify the author, or if they cannot give a work a sympathetic hearing because their understandings are diametrically opposed, for instance. The key motif of good peer reviewing, one of its intertextual tropes in fact, is independence and impartiality – a sense that the reviewer will read a text for its intellectual merit alone, without prejudice to opposed paradigms or politics or personal views. The reviewer promises not to disclose the paper’s contents before publication, nor to disclose their identity. After reading the text, they might recommend publication without qualification, or rewriting based on suggestions, or rejection of the paper. Whatever their judgement, reviewers are expected to support their recommendations with a cogent rationale and, if the recommendation is to revise, with specific advice. Further, multiple reviewers of a particular work do not know of each other’s identity, and so they cannot conspire to agree on the worth of a text. Multiple reviewers are sought in order to corroborate recommendations, in case, for instance, one reviewer’s judgement transpires to be unsound. When there are conflicting opinions among the reviewers, the editor may weigh the assessments of the reviewers’ reports or, if uncertain, send the text out to additional reviewers.

Prototypes of these textual practices pre-date the rise of the modern academic journal. In the domain of Islamic science, Ishaq bin Ali al-Rahwi (854–931) in his book, Ethics of the Physician, discussed a procedure whereby a physician’s notes were examined by a council of physicians to judge whether a patient had been treated according to appropriate standards (Meyers, 2004; Spier, 2002). The scientific method of Francis Bacon in his The New Organon of 1620 included a process akin to peer review in which a reader of scientific speculations patiently reconstructs the scientist’s thoughts so he can come to the same judgement as to the veracity of the scientist’s claim (Bacon, 1620). These are conceptual precursors to peer review.

Pre-publication peer review in a form more recognizable today began to evolve as a method of scientific knowledge validation from the seventeenth century, starting with Oldenburg’s editorship of the Philosophical Transactions of the Royal Society (Biagioli, 2002; Guédon, 2001; Peters, 2007; Willinsky, 2006a). However, institutionalization of peer-review processes did not become widespread until the twentieth century, either as a consequence of having to handle the increasing numbers of articles or in order to find appropriately qualified experts as areas of knowledge became more specialized (Burnham, 1990). A more dispersed peer-review process in which reviewers had a degree of independence from the journal editor was not widely applied until after the photocopier became readily accessible from the late 1950s (Spier, 2002).

There is some evidence, however, that the present day may be a moment of decline in peer review, in part for the most practical of reasons. In the forms in which it has been practised in conventional publishing processes, peer review is slow. This is one of the principal reasons why repositories have been growing rapidly – as a means of faster publication of scholarly content. It is estimated that only 13 per cent of material in institutional repositories has been peer reviewed. In the physics community, for instance, arXiv does not arrange or require peer review, and pre-prints published there may or may not be subsequently submitted for peer review. To be able to post content at arXiv, all you need is the endorsement of a current contributor, a process of some concern insofar as it creates a kind of private club in which the substantive scholarly criteria for membership are not explicitly spelt out. The repository’s founder, Paul Ginsparg, also speaks of ‘heuristic screening mechanisms’ which include the worryingly vague admonition, ‘of refereeable quality’ (Ginsparg, 2007). The processes and criteria by which ‘moderators’ determine content to be unacceptable are not spelt out. Meanwhile, the open access journal PLOS ONE uses a pre-publication review process which we termed ‘review lite’ earlier in this chapter, relying increasingly on post-publication ratings as a supplementary quality filter.

Speed of publication in the digital era is one factor that is reducing the significance of peer review in today’s knowledge systems. However, there are four, more fundamental, concerns which need to be raised about the process, each one of which is less defensible in the era of digital communications: the discursive features of the heritage peer-review process; the textual forms being assessed; the validity of its measures; and inequitable network effects.

Review concern 1: accountability in pre-publication processes

First, to take the discursive features of the peer-review process, these track the linearity and post-publication fixity of text manufacturing processes in the era of print. Peer review is at the pre-publication phase of the social process of text production, drawing a clear distinction of pre- and post-publication at the moment of commitment to print. Pre-publication processes are hidden in confidential spaces, leading to publication of a text in which readers are unable to uncover the intertextuality, and thus dialogue, that went into this aspect of the process of knowledge design. The happenings in this space remain invisible to public scrutiny and thus are unaccountable. For the most part, this is for practical reasons – until recently, it would have been cumbersome and expensive to make these processes public. In the digital era, however, the incidental recording of communicative interchanges of all sorts is pervasive and cheap. Reviews could be made part of the public record, or at least could be made available for independent audit in a confidential record.

Then, in the post-publication phase there is very little chance for dialogue that can have an impact upon the statement of record – the printed article – beyond subsequent publication of errata. Reviews, citations and follow-on articles may reinforce, revise or repudiate the content of the publication of record, but these are all new publications, equally the products of a linear textual workflow. Moving to PDF as a digital analogue of print does little to change this mode of textual and knowledge production.

Key flaws in this knowledge system are the lack of transparency in pre-publication processes, the lack of meta-moderation or audit of peer-review reports or editor–referee deliberations, and the relative closure of a one-step, one-way publication process. If we posit that greater reflexivity and dialogue will make for more powerful, effective and responsive knowledge processes, then we have to say that we have as yet barely exploited the affordances of digital media. Sosteric discusses Habermas’s ideal speech situation in which both interlocutors have equal opportunity to initiate speech; there is mutual understanding, there is space for clarification, interlocutors can use any speech act, and there is equal power over the exchange (Sosteric, 1996). In each of these respects, the peer-review process is less than ideal as a discursive framework. Instead, we find a space of interaction where power asymmetries are in play, identities are not revealed, dialogue between referee and author is prevented, the arbiter-editor is unaccountable, consensus is not necessarily reached, and these processes are not open to scrutiny in the public record.

We can see some of what might be possible in the ways in which some of the new media integrally incorporate continuous review in their ranking and sorting mechanisms – from the simple ranking and viewing metrics of YouTube to more sophisticated moderation and meta-moderation methods at web publishing sites such as the web-based IT news publication, Slashdot (http://slashdot.org/moderation.shtml). Social evaluations of text that were practically impossible for print are now easy to do using digital media. Is it just habits of knowledge-making practice that prevent us moving in these directions? What about setting up a more dialogical relationship between authors and reviewers? Let the author speak to reviewer and editor, with or without identities revealed: How useful did you find this review? If you found it useful, perhaps you might acknowledge a specific debt? Or do you think the reviewer’s judgement might have been clouded by ideological or paradigmatic antipathy? Much of the time, such dialogues are foreclosed by the current peer-review system. At best, the author takes on board some of the reviewer’s suggestions in the rewriting process, usually unacknowledged.

Tentative experiments in open peer review, not too unlike post-publication review in a traditional publishing workflow, have been mooted (Cassella and Calvi, 2010; Whitworth and Friedman, 2009a, 2009b). These are intended to grant greater recognition to the role of reviewers and, in order to create greater transparency, discourage abusive reviews and reduce the chances of ideas being stolen by anonymous reviewers before they can be published (Rowland, 2002). Why should reviewers be less honest in their assessments when their identities are revealed? They may be just as honest. In fact, the cloak of anonymity has its own discursive dangers including non-disclosure of interests, unfairly motivated criticisms and theft of ideas. Moreover, there is some question today as to whether anonymity is even possible. It doesn’t take a lot of detective work to uncover the identity of an author these days. A web search will likely turn up a key phrase or even the title of a work which the author may have posted as a PowerPoint on a website, or used for a conference presentation, or blogged about. Even more powerful are the plagiarism checkers that are available nowadays to many university teachers. It’s not hard to look up a self-citation by title, or use fuzzy search to find a previously floated idea, or a turn of phrase, or forms of words that have been legitimately reused or self-cited. And one thing worse than the cloak of anonymity is feigned anonymity, where the reviewer knows the identity of the author but does not disclose it. Professional ethics would demand that a reviewer withdraw at this point, having encountered a conflict of interest. But systems cannot rely on ethics alone, particularly when there is no way of knowing that the reviewer is operating unethically.

Alternative evaluation modes are emerging in the new media, and these innovations may offer useful lessons for scholarly journals. In the new media, reviewers can be ranked by people whose work has been reviewed, and their reviews in turn ranked and weighted for their credibility in subsequent reviews. This is roughly how trusted super-authors/reviewers emerge in Wikipedia. In a revamped journal system, there could also be multiple points of review, blurring the pre- and post-publication distinction. Initial texts could be published sooner, and re-versioning could occur indefinitely. In this way, published texts need not ossify, and the lines of their development can be traced because changes are disclosed in a publicly accessible record of versions. These are some of the discursive possibilities that digital media allow, all of which may make for more open, dynamic and responsive knowledge dialogue, where the speed of the dialogue is not slowed down by the media in which it is carried.
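As a purely hypothetical sketch of the kind of credibility-weighted review aggregation just described, the logic might look something like the following; the ratings, credibility scores and simple weighted average are invented for illustration and do not describe any existing journal or platform.

```python
# Hypothetical sketch of credibility-weighted review aggregation.
# Each review carries a rating of the text plus the reviewer's standing, which
# would itself be built up from how useful authors and readers have found that
# reviewer's past reviews.

from dataclasses import dataclass

@dataclass
class Review:
    rating: float        # the reviewer's assessment of the text, e.g. on a 0-10 scale
    credibility: float   # the reviewer's accumulated standing, e.g. between 0 and 1

def weighted_assessment(reviews):
    """Aggregate ratings so that more credible reviewers count for more."""
    total_weight = sum(r.credibility for r in reviews)
    if total_weight == 0:
        return 0.0
    return sum(r.rating * r.credibility for r in reviews) / total_weight

# Three invented reviews of a single text: the most credible reviewer dominates.
print(weighted_assessment([Review(8.0, 0.9), Review(4.0, 0.2), Review(6.0, 0.5)]))  # about 6.9
```

The sketch makes one design choice visible: a review is never just a score, it is a score qualified by the track record of the person giving it, which is precisely what the heritage peer-review process leaves unrecorded.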

Review concern 2: textual practices

The second major flaw in the traditional peer-review process, and a flaw that need not exist in the world of digital media, is in the textual form of the article itself. Here is a central contradiction in its mode of textuality: the canonical scholarly article speaks in a voice of empirical transparency, paradigmatic definitiveness and rhetorical neutrality – this last oxymoron capturing precisely a core contradiction, epistemic hypocrisy even. The textual form of the article abstracts knowledge away from its reference points. At best, the article only contains a selective sampling of the data. The article is not the knowledge, nor even the direct representation of knowledge – it is a rhetorical re-presentation of knowledge. For this most practical of reasons, this has to be the case for print and print lookalikes.

However, in the digital world there is very little cost in presenting full datasets along with their interpretation, a complete record of the observations in point alongside replicable steps-in-observation, the archive itself alongside its exegesis. In other words, reviewers in the era of digital recording are not limited to examining the knowledge representation. They could come a good deal closer to the world to which those representations point in the form of immediate recordings of that world. This can occur multimodally through the amalgamation of manipulable datasets, static image, moving image, and sound with text – captions, tags and narrative glosses. Journal of Visualized Experiments (www.jove.com) is an interesting case in point, publishing peer-reviewed videos – totalling more than 2500 by 2013. Much of what is in this journal, such as surgery, was never so readily representable in conventional journal article formats.

Ideally, it should be possible to embed video, audio and manipulable datasets inline within articles; however, the tools we use today to make articles do not easily allow this. There need be no page constraints (shape and textual form) or page limits (size and extent) in the digital record. This changes the reviewers’ relationship with the knowledge itself, making them more able to judge the relations between the purported knowledge and its textual forms, and for this reason also more able to make a contribution to its shaping as represented knowledge. This would also allow a greater degree of transparency in the dialectics of the empirical record and its interpretation. It may also lead to a more honest separation of represented data from the interpretative voice of the author, thus creating a more open and plausible environment for empirical work. In a provocative and widely cited article, John Ioannidis argues that ‘most published research findings are false’. Exposing data would invite critical reinterpretation of represented results and reduce the rates and margins of error in the published knowledge record (Ioannidis, 2005).

Review concern 3: peer-review measures

A third major flaw in the heritage peer-review process is its validity. What does the peer-review system purport to measure? Ostensibly it evaluates the quality of a contribution to knowledge (Jefferson et al., 2002; Wager and Jefferson, 2001). But precisely what are the rubrics of knowledge? In today’s review system these are buried in the under-articulated depths of implicit knowledge acquired during the privileged processes of initiation into a peer community. Mostly, reviewing is just a three-point scale – accept, accept with revisions, reject – accompanied by an open-ended narrative rationale. In the review narrative, the tropes of objectivity can hide – although none too effectively at times – a multitude of ideological, paradigmatic and personal agendas. These are exacerbated by the fact that reviewers operate under a cloak of anonymity. There are times, moreover, when the last person who you want to review your work, the last person who is likely to be ‘objective’, is someone in a proximal discourse zone (Judson, 1994). For these reasons, the texts of peer review, and the judgements that are made, are often by no means valid. One possible solution to this problem is to develop explicit, general knowledge rubrics at a number of subdisciplinary, disciplinary and metadisciplinary levels, and to require that referees defend the validity of their judgements against the criteria spelt out in the rubrics. This would also have the incidental benefit of making the rules of the epistemic game explicit, and in so doing making them more accessible to network outsiders … which brings us to the fourth major flaw in the peer-review system: its network effects.

Review concern 4: network effects

Peer-review pools generally work like this. A paper is sent to a journal editor. The editor is the initial ‘gatekeeper’, making a peremptory judgement of relevance to the area of knowledge and the quality of the work. At this point, many papers suffer ‘desk rejection’: perhaps half the submitted papers, often more, are sent a form rejection saying something like ‘out of scope’. A hasty judgement is made by a gatekeeper who can see that the author is just a graduate student, or maybe just another person from a non-English speaking country writing poorly in English, or just from an institution without a big research reputation, or just someone they’ve never heard of.

If the author passes this hurdle, the editor chooses suitable reviewers for the work. This choice can reflect content or methodological expertise. But it can also be a choice of friends and enemies of ideas, positions and paradigms – another point of potential closure in the knowledge process. Given that reviewers are not paid, those who accept the task will tend to be people who owe something to the patronage of the editor, or who are friendly with the editor and stand in some kind of relationship of reciprocal obligation. If the reviews returned to the author are ones they consider unfair or plain wrong, they have no one to appeal to other than the editor of the journal who selected the referees in the first place – there are no independent adjudication processes and, more broadly, no processes for auditing the reliability of the journal as a knowledge validation system (Lee and Bero, 2006).

The overall logic of such a system is to create self-replicating network effects in which a distributed system in fact becomes a system of informal, unstated, unaccountable power (Galloway and Thacker, 2007). Journals come to operate like insider networks more than as places where knowledge subsists on its merits. Or at least that’s often the way it feels to outsiders. Their tendency, then, is to maintain consensus, control the field, suppress dissent, reinforce the disciplinary ramparts and maintain institutional and intellectual inertia (Horrobin, 1990). The practical effect is to exclude thinkers who, regardless of their merit, may be from a non-English speaking country, or who teach in a liberal arts college, or who do not work in a university, or who are young or an early career researcher, or who speak to an innovative paradigm, or who have unusual source data (Stanley, 2007). The network effect, in other words, is to exclude a lot of potentially valuable knowledge work conducted in rich knowledge spaces.

Open access publishing does not necessarily reduce these points of closure in scholarly knowledge-making. The question of the cultural and epistemic openness of a knowledge system is a completely different question from the economics of its production. In fact, as we have seen, open access may be accompanied by greater closure, when for instance the heritage peer-review system, whatever its defects, is eroded and replaced by fewer, more powerful and even less accountable gatekeepers.

Reputational economies can be more viciously closed than commercial ones because they are driven by purely ideological interests. Ironically, cultural systems grounded in material sustainability often operate in practice with less ideological prejudice. It is important, in other words, not to mix discussions of business models and the epistemic conditions of openness – the latter does not necessarily follow from the former. New resourcing models can be as closed as old ones from an epistemic point of view.

Breaking point 3: evaluating knowledge, once designed

As the Second World War came to an end, the Director of the US Office of Scientific Research and Development, Vannevar Bush, published an article for The Atlantic magazine foreshadowing a new role for science once the war had concluded. He had co-ordinated scientific efforts to support a mighty engine of destruction that was to culminate weeks later in the explosion of the first atom bomb. In his article, ‘As we may think’, he said that the time had now come for science to return to peaceful pursuits, and one of its central challenges would be to find better ways to manage the masses of rapidly accumulating human knowledge. His ‘memex’ proposal – a box of microfilm sitting on one’s desk – seems as quaintly anachronistic today as it is prescient about the general mechanisms of knowledge interconnection that would become the World Wide Web (Bush, 1945).

Bush proposed the mechanization of knowledge in a fashion that has indeed been realized in the Internet. He lamented how slow it still was in 1945 to connect the components of knowledge. Physical libraries may contain ‘millions of fine thoughts’ but each book is filed in one place in the library and the processes of finding these thoughts are cumbersome and slow. ‘The human mind does not work that way’, he said; it operates by association, across an ‘intricate web of trails’ and at awe-inspiring speeds (ibid.).

The conventional physical library is, by comparison with the Internet, a cumbersome information technology. However, since the end of the fifteenth century, the technologies of recorded text have not worked without their own intricate, multifaceted associative links. Perhaps the most pervasive and effective of these is the citation, linking one thought to an antecedent thought, one author’s claim to a previous author’s authority, or a fact mentioned in one text to the site of its original documentation. In the physical library, the technologies of association are cataloguing, indexing, tables of content, and that most revolutionary of all hypertextual technologies, the page number, allowing as it does the possibility of pointing from one precise point in the record of knowledge to another (Cope and Kalantzis, 2010; Grafton, 1997).

The Internet is no more intricate in principle than the physical library. It does no more than what citation does, which is to link one point in the human record with another. Vannevar Bush promised the mechanization of knowledge, and we can be grateful to the World Wide Web for allowing us to follow associative links faster than we did in the case of the physical library. However, despite the promises of the ‘semantic web’ (Cope et al., 2011), the broader possibilities raised by Bush for thinking machines which perform logical operations as they do their associative work have not yet materialized. So far, we have only managed to mechanize a form of citation – the hyperlink.

If the associative lateral links of citation – now in the form of hyperlinking – are the key mechanism binding together the web of knowledge, then surely those nodal points to which more links are made will be significant locations in the web of knowledge. This was the underlying idea behind Eugene Garfield’s 1955 proposal to create a citation index. His idea sprang in part from the practices of legal case citation, an integral aspect of common law precedent, and specifically the publication in the US since 1873 of Shepard’s Citations. Important cases are cited more often, and knowing which are these cases makes them more important still. For science, Garfield proposed a similar index which would count the citations to an article and use this as a measure of the article’s influence, its ‘impact factor’ (IF) (Cope and Kalantzis, 2010; Grafton, 1997). By 1960, Garfield had founded a company to do just this, the Institute for Scientific Information (ISI). ISI grew to become the dominant collector and counter of citations, its IF data providing the primary quantitative measure of the worth of the work of a scholar or their institution, and the prestige of a journal represented by the Journal Impact Factor. Sold by Garfield in 1992, ISI is now an arm of the multinational media corporation, Thomson Reuters.

When in 1998 Larry Page, son of a Michigan State University Computer Science Professor, and his fellow Stanford Ph.D. student, Sergey Brin, published an algorithm called PageRank, they took the kernel of their idea from the citation logic of Garfield’s impact factor. The significance of a web page can be evaluated by the number of pages citing it by link. To this, they added the idea that not all citation ‘votes’ for a page are equal. The ‘votes’ of citations from pages that are themselves ranked as being more significant are weighted so that they count more than citations from lightly cited pages (Brin and Page, 1998).
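Because this weighted-citation idea recurs throughout the rest of the chapter, a minimal sketch may help make it concrete. The following illustrates the PageRank principle only, not Google’s production algorithm; the toy link graph, damping factor and iteration count are assumptions made purely for the example.

```python
# Minimal, illustrative sketch of the PageRank principle: a page's score is the
# weighted sum of the scores of the pages linking to it, so a 'citation' from a
# highly ranked page counts for more than one from a lightly cited page.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to (i.e. 'cites')."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

# Toy graph: A and B both link to C; C links back to A. C ends up ranked highest,
# and A benefits from being 'cited' by the highly ranked C.
print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))
```

The point of the sketch is simply that the ‘votes’ are not equal: a link passed on by a heavily cited page carries more rank than one from an obscure page.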

In these various ways, the citation system has been integral to our knowledge ecologies for 500 years. Recently, we’ve got better at mechanizing the links so we can reach points of knowledge more quickly through the World Wide Web. However, we have not yet devised systems that are smarter in a qualitative sense. And we have got into the habit of counting links to determine points of nodal significance, partly because the mechanization of citation now makes large-scale counting more practicable.

In this section of the chapter we will focus mainly on the ISI Web of Knowledge and its IF because it dominates other citation counts as a measure of the scholarly value of a journal article. Other citation databases have emerged which in some cases may be more comprehensive and more (or less) rigorous. These include Scopus (http://www.scopus.com), CiteSeerX (http://citeseerx.ist.psu.edu) and Google Scholar (http://scholar.google.com) (Craig et al., 2014: Chapter 11, this volume; Harzing and Van der Wal, 2008; Kousha and Thelwall, 2007; Norris and Oppenheim, 2007; Schroeder, 2007). However, as the logic of citation counting is fundamentally the same, we will focus principally on the ISI Web of Knowledge.

Here’s a rationale for citation analysis: on a time dimension, knowledge is an iterative thing. Knowledge workers read the texts of others as reference points for their own knowledge work – to find out what has already been discovered and thought, and to determine which questions still need to be addressed. This is the basis of ‘progress’ in science, and the evolution of frames of thinking. On a structural dimension, and for all the rhetorical heroism of discovery and analytical voice, knowledge is a social product. ‘Standing on the shoulders of giants’ was Isaac Newton’s famous expression. This is why there is a deep and intrinsic intertextuality to formal knowledge representations: this question arises from that (insert citation); this method comes from here (insert citation); this idea or discovery builds on that (insert citation); this idea or discovery corroborates that (insert citation); this idea or discovery contradicts that (insert citation). The interplay of intellectual debt and new intellectual contribution is at the heart of scholarly work (Grafton, 1997). Integrating one’s work into a body of knowledge requires a rhetorical play between this text and that (insert citation).

And this is how the ISI Web of Knowledge works. Thomson ISI collects citations from a sample of 7300 science and technology journals, 2500 social science journals and 1500 arts and humanities journals. The sample is not a representative sample. Rather it is a sample consisting of what, via relatively non-transparent processes of selection, Thomson ISI staff have deemed to be the best. There are then two main ways of evaluating the value of knowledge. The first is simply a matter of counting the number of citations of articles a scholar or the people in a department have attracted. The second is to weight the value of these articles according to a prestige index, the Journal Impact Factor. This is calculated by dividing the number of citations to a journal in the two previous years by the number of articles published in that journal in a year (Cameron, 2005; Craig et al., 2014: Chapter 11, this volume; Meho, 2007). So, if in a year a journal publishes 100 articles which attract 300 citations in the subsequent two years, it is assigned an IF of 3. But if the 100 articles only attract 100 citations, it is assigned an IF of 1. Citations to articles more than two years old are not counted.
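Stated compactly, the calculation amounts to a simple citations-per-article ratio. The formula below restates it in the census-year form in which the Journal Citation Reports publish the figure; the worked figures in the comment simply repeat the example above (300 citations to 100 articles gives an impact factor of 3).

```latex
% Journal Impact Factor for a census year y, as reported in the Journal Citation Reports
\[
\mathrm{JIF}_{y} =
\frac{\text{citations received in year } y \text{ to items published in years } y-1 \text{ and } y-2}
     {\text{number of citable items published in years } y-1 \text{ and } y-2}
\]
% Worked example from the text: 300 citations to 100 citable items gives 300/100 = 3.
```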

Citation counts and weighting the value of published articles by the Journal Impact Factor have become all-powerful bases with which to evaluate the worth of a knowledge worker’s output. Or they are aggregated to determine the quality of a journal or an academic department. We want to make the case that these citation metrics are a very poor measure of epistemic impact and value. In fact, citation count is so flawed a proxy for knowledge value that we should rethink entirely these citation-based processes for analysing the value of knowledge.

We will use the two canons of assessment theory to interrogate the bases of citation measures: their reliability and their validity (Pellegrino et al., 2001). A reliable assessment will consistently produce the same results when repeated in the same or similar populations. The assessment, in other words, is not fraught by inaccuracy in its implementation. A valid assessment is one where the evidence collected can support the interpretative burden placed upon it. The assessment, in other words, measures what it purports to measure. We want to mount four fundamental challenges to today’s citation count regime. The first pertains to reliability: the citation numbers often do not add up. The other three address underlying questions of validity: one citation does not equal one (implied) unit of knowledge value; knowledge is not validly evaluated according to popularity or supply-and-demand metrics; and network effects privilege position over quality.

Knowledge evaluation challenge 1: the citation numbers often do not add up

To start with the question of reliability: the count mechanisms for calculating citations are in some important respects quite broken. The proportion of incorrectly referenced items may be as high as one-third, lowering the chance of a citation being counted (Todd and Ladle, 2008). ‘Homographs’ occur frequently when initials are used instead of whole first names, as is the predominant practice in the Thomson ISI databases. This leads to a failure to distinguish scholars who have the same last name and initial. Citations are also more likely to be counted when they are in English or when an author has a conventional English name (Harzing and Van der Wal, 2008).

Meanwhile, the Journal Impact Factor is open to the simplest of manipulations (Favaloro, 2008, 2009; Krell, 2010). If authors are advised to cite papers published in the previous two years in the same or related journal, the IF will rise. ‘Editors of some journals’, report Hemmingsson et al., ‘are sending copies of articles previously published in their journals together with the review copy of another article to the referees and are asking them if it is possible to include those published articles in the reference list’ (Hemmingsson et al., 2002). Smith characterizes these practices as constituting ‘citation cartels’ (Smith, 1997).

Further, there can be manipulation of the denominator in the equation. Ask Thomson ISI to remove more supposedly ancillary articles such as editorial matter and reviews from the denominator of total published articles, but leave them to be counted in the numerator, and the Journal Impact Factor will go up (Hemmingsson et al., 2002). Moreover, as Craig et al. point out, ‘ISI does not include all document types in the denominator of their calculations of the impact factor (Equation 1), whereas all citations to any document type are counted on the numerator. This can lead to situations where some citations are not offset by the presence of a publication on the denominator, effectively meaning these citations are “free” citations’ (Craig et al., 2014: Chapter 11, this volume).
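The arithmetic of these ‘free’ citations is easy to see with a deliberately simplified, hypothetical example (the figures below are invented, not those of any actual journal): the numerator stays fixed while reclassification shrinks the denominator.

```python
# Hypothetical illustration of the numerator/denominator game described above.
# All citations count in the numerator, but only items classed as 'citable'
# count in the denominator, so reclassifying items inflates the impact factor.

citations = 2000        # citations in the census year to the previous two years' content
items_published = 800   # everything the journal actually published in those two years
citable_items = 400     # the subset counted in the denominator after reclassification

print(citations / items_published)  # 2.5 -- the ratio if every published item counted
print(citations / citable_items)    # 5.0 -- the reported figure after half are excluded
```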

Brumback analyses the case of the high-impact medical journal The Lancet in these terms:

The Journal Citation Reports listed the 2006 Impact Factor for Lancet as 25.8, based on a calculation of 20,021 citations to 776 ‘source’ items in the year 2005 and 416 items in year 2004. Meanwhile, the Web of Science lists for Lancet in the year 2005 a total of 1772 published items categorized into editorial material (723), letter (474), article (348), review (86), biographical item (77), correction (43), news item (20), and software review. Interestingly, the Journal Citation Reports only considered 360 or just 20 per cent of these total 1772 published items as ‘source’ items for the denominator (what are those 360 items?). Adjusting the denominator for the other 80 per cent of the published material (much of which received citations and counted in the numerator) would reduce the Impact Factor of Lancet from the lofty 25 to a more lowly 5. Interestingly, over the past 5 years of Journal Impact Factor calculations for Lancet, the denominator has gotten progressively smaller (by nearly 40 per cent) causing the Impact Factor to rise by more than 65 per cent … [R]ecently … some editors [have] gone so far as to change the designation of published items (to reduce the likelihood that Thomson Scientific will count them in the denominator for the calculations) and to require authors to add extra citations to recent articles in their journals before accepting papers. Unfortunately, the opacity in Thomson Scientific’s refusal to reveal the details of their calculations only serves to increase suspicion about possible data manipulations.

(Brumback, 2008: 366)

Smith similarly concludes: ‘It is not clear what should be included in the denominator, and many editors have discovered that the best way to increase the Impact Factor of your journal is to persuade the Institute for Scientific Information … to exclude as much as possible from the denominator. By doing this editors can more than double the Impact Factors of their journals’ (Smith, 2006).

Playing the numerator/denominator game can have other unhealthily distorting effects. The renowned Physical Review Letters publishes over 4000 papers per year, and has an impact factor of approximately 7. Reviews of Modern Physics publishes 30 papers per year and has an impact factor of 33.5. If Physical Review Letters only published its 500 most popular papers per year, its impact factor would go up to 20. ‘In essence’, conclude Antonoyiannakis and Mitra, ‘you should aim to publish [fewer] papers … and focus on areas that are trendy and have adherents with good citing practices’ (Antonoyiannakis and Mitra, 2009). If Physical Review Letters were to take this path, the actual impact of its most popular papers would not change. But 3500 excellent and at times highly specialized – and perhaps for this reason, lightly cited – papers would not have seen the light of day, no matter how glowing the accolades of their reviewers. Physics would be very much the worse for that.

Fersht plays this logic through to its unhealthy conclusion:

What … is the most influential of the … following journals: A, which publishes just 1 paper a year and has a stellar IF of 100; B, which published 1,000,000 papers per year and has a dismal IF of 0.1 but 100,000 citations; or C, which publishes 5,000 papers a year with an IF of 10? … C is likely to be the most influential journal. Clearly neither IF nor total number of citations is, per se, the metric of the overall influence of a journal.

(Fersht, 2009)

Add to this sampling and other statistical distortions and you have a situation where citation counts are hard to believe even on their own terms. Neff and Olden note a generalized increase in citation, and we would hazard the suggestion that this is related to the relative ease today with which one can import citations into personal bibliographical databases and insert citations in word processor programs. This produces the phenomenon of impact factor inflation, as does an increase in the number of journals that are counted (Neff and Olden, 2010). Rising impact factors may predispose journal editors to view the metric favourably, but these raised scores may not be what they purport to be.

The editor of Nature concluded his analysis of the impact factor attributed to their magazine by Thomson ISI as follows: ‘Try as we might, my colleagues and I cannot reconcile our own counts of citable items in Nature’ (Campbell, 2008). And in his analysis of the field of communication studies, Levine concludes that ‘the results … show that Institute for Scientific Information citations are biased and do not accurately or evenly reflect citations’ (Levine, 2010).

The impact factor also varies enormously in its reliability across different disciplines. In molecular biology and biochemistry, 96 per cent of citations are to journal articles and the Web of Knowledge database covers 97 per cent of cited articles, resulting in a 92 per cent coverage of the field. However, in the humanities and arts, only 34 per cent of citations are to journal articles, of which only 50 per cent are counted in the Web of Knowledge, producing a mere 17 per cent coverage (Craig et al., 2014: Chapter 11, this volume; Tötösy de Zepetnek, 2010). Only 11 per cent of education journals are counted (Togia and Tsigilis, 2006).

Moreover, despite its name, bibliometrics mostly ignores books, and thus favours disciplines in which more journal articles are published, to the detriment of those where books are a significant publication venue. De Kemp and Rahm also show how disciplines which publish through conference papers, such as computer science, are neglected (De Kemp and Rahm, 2008). Butler concludes that for most disciplines in the social sciences and humanities standard bibliometric measures cannot be supported (Butler, 2008). Moreover, citation practices vary. Bornmann et al. report on research by Podlubny which estimates that one citation in mathematics is equivalent to 15 in chemistry and 78 in clinical medicine, practically precluding analyses across fields (Bornmann et al., 2008). Citation practices vary between disciplines, thus producing incomparable metrics (Lancho-Barrantes et al., 2010). Small fields are also disadvantaged, with fewer citations to make and fewer people making citations. Low citation count, then, will be a function of the size of the field, not the impact of your work (Lawrence, 2008).

The time frame for the Journal Impact Factor, moreover, is limited: it refers to citations made within one year to the previous two years, and so is biased in favour of disciplines with more transitory knowledge and faster uptake. It also favours shooting stars rather than knowledge whose uptake is longer term and more durable. As Lawrence points out, ‘truly original work usually takes longer than two years to be appreciated – the most important paper in biology of the 20th century was cited rarely for the first ten years’ (Lawrence, 2007). Here, then, is another collateral consequence: ‘the Impact Factor arbitrarily favors research in fields whose literature rapidly becomes obsolete’ (Banks and Dellavalle, 2008: 168; also Seglen, 1997). As Guédon suggests:

The very definition of the IF, by limiting citation counts to two years, independently of the life-cycles of articles in each discipline, reflects the urge to give results, any results, as soon as possible. Journal editors unhappy with the performance of their journal in the Journal of Citation Reports (JCR) can thus move quickly to redress the situation, i.e. improve the IF of their journal.

(Guédon, 2014: Chapter 3, this volume)

And of course, as mentioned above, it’s easy for editors to enjoin authors to cite recent articles from the same journal, whether those authors are publishing in that journal or in others. If authors accede to this request at the rate of one additional citation for every article counted in the journal’s denominator, the impact factor rises by 1, which is generally regarded to be a hugely significant jump.

Moreover, averaged values for journals can be highly influenced by a few blockbuster articles in a particular two-year stretch. As Philip Campbell, editor of Nature, has said: ‘our own internal research demonstrates how a high journal impact factor can be the skewed result of many citations of a few papers rather than the average level of the majority, reducing its value as an objective measure of an individual paper’ (Campbell, 2008: 5). According to Campbell, 89 per cent of Nature’s impact factor in 2004 was generated by just 25 per cent of its papers. As for the 75 per cent whose impact was relatively low, and thus who did Nature a disservice if the journal is to be judged by its impact factor, ‘they were in disciplines with characteristically low citation rates per paper like physics, or with citation rates that are typically slow to grow, like the earth sciences, or because they were excellent (e.g. visionary) but not “hot”’ (ibid.). Ogden and Bartley conclude from their study that ‘two-thirds or more of the JIF depend on the most-cited 25 per cent of papers’. The JIF of the journal where a paper is published is therefore a very poor guide to the paper’s citation performance or the success of the author (Ogden and Bartley, 2008).

Furthermore, the Thomson ISI databases include a limited number of journals, mostly in the English language from North America and Europe (Meho, 2007). They are by no means a representative sample, and the processes for selection of journals are opaque, to say the least. Some of the stated criteria are of no particular relevance to impact and intellectual quality, such as timeliness of publication – something that is irrelevant anyway in a digital environment where articles can be, and mostly are, published individually as soon as they are ready. They also include some highly subjective criteria such as the stature of the members of the editorial board. A librarian colleague of ours emailed Thomson to ask them about their selection processes, and their answer was: ‘All journal evaluations are done solely by Thomson staff. We do receive recommendations for journals from researchers but they have no part in the evaluation process.’ Impact factors are not neutral; they generate impact in the form of an apparent prestige that gives an aura of respectable citability. Given Thomson Reuters’ position in the world of academic publishing, and the inaccessibility of the process to independent audit (Rossner et al., 2007), this should also be regarded as a serious case of conflict of interest.

‘Without exception’, concludes Stevan Harnad, ‘none of these metrics can be said to have face validity’ (Harnad, 2008: 105). ‘The sole reliance on citation data provides at best an incomplete and often shallow understanding of research – an understanding that is valid only when reinforced by other judgments’, says a report commissioned by the International Mathematical Union (Adler et al., 2008: 2). Vanclay concludes an exhaustive methodological analysis with, ‘The Thomson Reuters impact factor (TRIF) suffers so many weaknesses, that a major overhaul is warranted, and journal editors and other users should cease using the TRIF until Thomson Reuters has addressed these weaknesses’ (Vanclay, 2011: 230). Searching for a metric of academic outputs, the Higher Education Funding Council for England concludes in a sanguine tone that ‘bibliometrics are not sufficiently robust at this stage to be used formulaically or to replace expert review’ (Higher Education Funding Council for England, 2009: 3).

It is hardly surprising, then, that there has been a rising crescendo of complaint against citation counts in general, and the Journal Impact Factor in particular. The level of complaint has grown in proportion to the intensification of pressure in universities to have quantifiable ways in which to measure individual scholarly and institutional outputs.

Initiated by the American Society for Cell Biology, The San Francisco Declaration on Research Assessment (DORA) was issued in December 2012 (Rafols and Wilsdon, 2013). The declaration announced ‘the need to eliminate the use of journal-based metrics, such as Journal Impact Factors, in funding, appointment, and promotion considerations’, for these reasons:

The Journal Impact Factor has a number of well-documented deficiencies as a tool for research assessment. These limitations include: A) citation distributions within journals are highly skewed; B) the properties of the Journal Impact Factor are field-specific: it is a composite of multiple, highly diverse article types, including primary research papers and reviews; C) Journal Impact Factors can be manipulated (or ‘gamed’) by editorial policy; and D) data used to calculate the Journal Impact Factors are neither transparent nor openly available to the public.

(DORA, 2013)

A formidable list of representatives of scholarly societies, publishers and libraries signed on immediately. Within six months, 9000 individuals and 350 institutions had signed on to the declaration.

Knowledge evaluation challenge 2: one citation does not equal one (implied) unit of knowledge value

Our first knowledge evaluation challenge pertained to the question of reliability: the citation data are not collected and counted accurately enough to be trusted as measures. The next three of our challenges relate to the validity of citation counts, or whether they even measure what they purport to measure, namely the value of a scholar’s work and their contribution to knowledge for the purposes of career evaluation, or the assessment of the intellectual quality of a group of academics.

Firstly, citation counts assume, fallaciously, that all citations are equivalent. Their implied formula is this: one citation equals one unit of knowledge value. Citation, of course, is an integral part of the process of making knowledge claims. A citation connects or distinguishes an author’s new data or conceptualizations from its antecedent sources or points of critical differentiation. However, the nature of these knowledge claims is so various as to make a mockery of the idea of an homogenized categorical unit of measurement, the citation.

The most widely noted and perhaps most obvious of these flaws is self-citation in the case of an individual scholar, or, in the case of a journal, citations to articles in that same journal that may have been solicited (Wilhite and Fong, 2012) or voluntarily inserted by a hopeful author to impress with their connectedness to that journal (Landoni et al., 2010). This is how a person or a journal votes for themselves in the citation popularity stakes. Self-citation has been shown in some studies to comprise between 7 and 20 per cent of an article’s references. A study of orthopaedic journals shows a correlation between rates of self-citation and the Journal Impact Factor (Siebelt et al., 2010). Another study notes variations in self-citation rates between different countries, and the effects of multiple authorship of multiple articles on enhancing an individual’s overall citation count (Vitzthum et al., 2010). Self-citation is proper and necessary, except when it is done for the wrong reasons. However, self-citation, even for the right reasons, is an utterly different kind of knowledge claim from the other kinds of claims underlying a citation. The general problem is that qualitatively different kinds of claim are being aggregated. In the case of the Journal Impact Factor, Thomson ISI considers over 20 per cent cross-citation within a journal as possible abuse of the system (Mavrogenis et al., 2010). So extensive is the practice of journal self-citation that Craig et al. (2014) report that in the ISI 2011 Journal Citation Reports, published in summer 2012, ‘the Impact Factors of 50 journals were suppressed and not reported, due to excessive self-citation’ (Craig et al., 2014: Chapter 11, this volume).
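
To see why journal self-citation matters for this metric, it helps to recall how the standard two-year Journal Impact Factor is calculated; citations to a journal’s own recent articles count in the numerator just like any others:

\[
\mathrm{JIF}_{y}(J) \;=\; \frac{\text{citations received in year } y \text{ by items published in } J \text{ during years } y-1 \text{ and } y-2}{\text{number of citable items published in } J \text{ during years } y-1 \text{ and } y-2}
\]

Solicited or voluntarily inserted self-citations inflate the numerator while leaving the denominator untouched, which is precisely the gaming behaviour that the suppression of those 50 journals was intended to penalize.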

Citation counts also include retractions, i.e., cases where even the authors subsequently agree they were wrong, or have been proven wrong. Rossner et al. cite what they consider to be ‘a particularly egregious example, Woo Suk Hwang’s stem cell papers in Science from 2004 and 2005, both subsequently retracted’ – these had been cited a total of 419 times by November 2007 (Rossner et al., 2008). Smith reports a study ‘of 211 retracted articles published between 1996 and 2000 [which] found that a third of their citations occurred after the articles were retracted. Of the 137 citations only five were negative: the vast majority cited the work affirmatively … [and] a recent article in Science has shown that many studies that are proved to be fraudulent are not even retracted’ (Smith, 2006). The problem with citation counts, says Lane, is that they ‘lump … together verified and discredited science’ (Lane, 2010). Cambridge University zoologist Peter Lawrence concludes thus: ‘Your paper may … have diverted and wasted the efforts of hundreds of scientists, but [the impact factor metric] will still look good on your CV and may land you a job’ (Lawrence, 2007).

Much citation, moreover, is not affirmative – when, for instance, a climate scientist cites a climate sceptic, or when a sociologist of immigration cites an anti-immigration researcher. In order to represent the dimensions of an academic debate, tendentious minority positions may be disproportionately cited as paradigmatic reference points – straw people, even – although these positions are widely regarded within the discipline as unsustainable, and especially when the citing article argues precisely this. The apparent popularity represented in citation counts may in fact be notoriety. Or it may be vehement disagreement, as when Keynesian economists cite von Hayek or Friedman. Once again, critical epistemic distinctions are lumped together, giving no sense of the contours of the debate or the reasons – critical or affirmative, right or wrong – why some things may be more highly cited than others. Conversely, research in the medical sciences shows a tendency to cite positive results rather than null results, producing a ‘phenomenon [that] may produce a biased evaluation of the effectiveness of treatments by readers of the scientific literature’ (Etter and Stapleton, 2009).

There is nothing intrinsically wrong with negative citation, or citation of science which is subsequently retracted. Each of these moves is an integral aspect of critical and dynamic knowledge ecologies. The problem is to aggregate these different kinds of knowledge claim into a single metric, whether that be an individual author’s citation count or a Journal Impact Factor. They are claims of such different orders that they defy valid aggregation.

To take our argument one step further, crucial distinctions are to be made between a citation of a fact, a methodology, a concept or paradigm, and a quoted source – each of these citation types has enormously different qualities. As for their impact, the fact may be a minor and unexceptional datum, or a major interpretative cluster. It may be a fact generated by the author being cited, or a mere re-reporting of a fact first reported by someone else, in which case the original author is offered no credit and the subsequent author gets undeserved credit. A method may be a mere reiteration of an established procedure that is the humdrum stock-in-trade of a discipline, or it may be dazzlingly new. A concept or theory may be a passing allusion or it may be pivotal to an argument. A citation might point to a reiteration of an idea first generated by someone else, or it might point to the place where a whole new conceptualization emerges. A quotation may be from a primary source unearthed by an interviewer or archivist, or it may be a quote of a quote already quoted in a secondary source. The citation counting system also works as if the cited fact, concept, method or quote originated with the latest author – it can only record one-to-one citations, not chains of citation. In the case of secondary citations, the cited author may have cited their sources properly, but the actual source is now lost in the re-citation. In other words, the nature and significance of a citation can vary enormously. The intellectual qualities of new and derivative citations can in most cases only be clarified by a close reading of the text. There is also a difference between being cited by a key person in the field and being cited by an unknown, a student, a friend or an acolyte.

Then there is perhaps the most fundamental distinction of all, at the very heart of the citation as an epistemic tool. Is the citation for the purpose of reiteration and acknowledgement of the derivative nature of a reported fact or concept? Or is it for the purpose of distinction, contrasting one’s own original datum or concept with those hitherto articulated in the field, the novelty of which one wants to mark by way of disagreement or difference? This dialectic of agreement and distinction is an integral part of the process of ‘original thinking’, a discursive engine of intellectual innovation since the beginning of modernity (Grafton, 1997). By this means it has been possible to acknowledge intellectual debts while at the same time creating new intellectual capital.

These are all crucial variations in the form and function of the citation. However, they are occluded by citation counts because they only support one-to-one links and fallaciously create a flat earth in which every citation is equalized. One citation is one vote in the knowledge evaluation stakes.

Then there is the question of the extent of epistemic engagement reflected in a citation. It may be deep, but it may be so casual as to make a mockery of the idea that one citation is one vote. A study of ecology papers showed that only 76 per cent of cited articles supported the claim being made for them by the author making the citation. Another study, of misprinted citations, suggested that perhaps only 20 per cent of cited papers are actually read, indicating that people are citing citations rather than sources they have read (Todd and Ladle, 2008). And there is a critical difference between name-dropping in the form of bracketed references by way of vague allusion to a set of ideas (Foucault, 1982) and precise referencing of a fact or turn of phrase to be located on a specific page.

The increasing reliance on meta-analyses and review articles exacerbates these problems. The most cited articles captured in citation counts are not original research or theoretical conceptualizations – they are review articles (Bornmann et al., 2008; Meho, 2007; Pauly and Stergiou, 2008; Simons, 2008). There is nothing necessarily wrong with review articles. They perform a useful role, particularly insofar as they help initiate novices into a field or subfield. Sometimes citing a review article is preferred because it dispenses with the need for long reference lists, a particularly important issue when word or page limits have been set (Pauly and Stergiou, 2008). However, to be uncharitable, they could often be characterized as academic journalism rather than the hard empirical or theoretical work that makes for intellectual breakthroughs. Review articles support citation practices which O’Connor characterizes as ‘the reporting of secondary data or analyses from literature reviews as if they were the results of primary research’ (O’Connor, 2010). Rogers points out that:

the top 25 journals, those with the highest Impact Factors, include many readily acknowledged elite publications, such as Nature, New England Journal of Medicine, Science, and Lancet. But, curiously, 60 per cent of the supposed top 25 are review journals, journals that publish only reviews and summaries of past research. That is to say, these journals report no fresh research, nothing new! How could that be? How can you have ‘impact’ if you don’t publish new research?

(Rogers, 2002)

Craig et al. note a ‘sixfold increase in review articles between 1991 and 2006 compared to a twofold increase in primary research articles’, and ‘a review of the whole of Web of Science in the decade from 2003 to 2012 shows that article counts grew by 51.1 per cent, while review counts grew by 86.5 per cent’ (Craig et al., 2014: Chapter 11, this volume).

Eugene Garfield set out to develop a science of science which had the intellectual rigour of legal precedent. The citation system we have today, however, has none of the rigours of legal citation. Legal precedent is based on a relatively consistent process which makes points of precise discursive agreement and distinction. A case is only cited because it makes a specific conceptual distinction, of direct applicability to another case. By contrast, the range of ‘knowledge claims’ (Budd, 1999) made by citations is so broad, various and at times so mutually contradictory, that a one-citation-equals-one-vote system of citation counting is simplistically reductionist.

In these ways, the single-number reductionism of citation counts grossly oversimplifies a phenomenon as complex and multifaceted as human knowledge. The answer to the ultimate question of life, the universe and everything – concludes the whimsical science fiction story and film, The Hitchhiker’s Guide to the Galaxy – is 42. Thomson ISI comes to equally absurd, reductionist conclusions about a journal’s influence and, by extension, about the worth of a scholar who manages to get published in that journal.

For an individual academic, raw publication and total citation counts fuel a culture of ‘no thought left unpublished’, ‘salami publishing’ and ‘honorary authorship’, in which additional authors (preferably famous) are added even though their association with a work may be marginal, or less than the fraction that might be assumed from dividing the work by the number of authors. In the citation count business, however, fractions don’t matter: six authors on an article gives each of the six one full vote of intellectual confidence. Increasing your total number of publications also increases your visibility, thus enhancing your chances of being cited. Impact uses a simplistic quantity – an epistemic 42 – as a proxy for the qualities of knowledge. And using raw numbers of any sort – publication counts, citation counts or impact factors – may turn out to be a pseudo-objective shortcut in hiring, promotion, tenure and departmental review: a metric by means of which you think you can evaluate a body of publications without having to read them (Haslam and Laham, 2010). None other than ISI founder Eugene Garfield comments of his most significant work:

as a confirmed citationist, I must point out that it is not my most cited work. It is my 1972 paper in Science, on using citation analysis to evaluate journals, which has attracted much more attention, although the 1955 paper is far more significant. In that sense, I am like many other authors who feel that their most-cited work is not necessarily their best.

(Garfield, 2006)

Moreover, if the majority of articles are lightly cited, does this mean that they have no value (Browman and Stergiou, 2008)? An article may demonstrate the strength of the data collection, analysis and synthesis capacities of an active researcher. It might demonstrate their research competence and clarity of thinking. It may be read and used without citation, contributing to a field of endeavour in a myriad of ways. It may flow into the author’s teaching or community service, to be read by students and others whom it may influence. And it may have a profound impact on the subject of its analysis – a school, or a community, or some other object of research and analysis.

Knowledge evaluation challenge 3: knowledge is not validly evaluated by popularity metrics, or by supply and demand

Citation counts operate in the same way as bestseller lists, top-forty hit lists or media audience size calculations. They work on the assumption that aggregate demand is a correlate of quality. On this logic, you would be advised to watch Fox for its news quality, or purchase only the bestselling magazines on the news-stand, or read bestselling novels because they must, by definition of ‘best’, be literature of the highest quality. In the logic of the market, bestselling is indeed best (selling). But conflating demand or popularity with quality tells us nothing about intellectual quality.

Popularity may be an apt measure of aggregate demand in markets but it is completely inappropriate as a measure of knowledge. Indeed, we could argue that the most innovative and influential works might in their nature not be popular, particularly in the first instance. Breakthrough ideas often start in small, marginal or specialized discourse spaces. Powerful knowledge-making is more likely to be ‘unpopular’ in this sense, at least in its early days. Popularity, in fact, is as often as not a sign that something is derivative, stooping to a lowest common denominator to reach a wide market, or tainted by jockeying for promotional and positional effects. So, in the case of journal articles, high impact may have been attained by authors who have framed their work in a populist way, perhaps for the express purpose of getting into journals with the widest circulation.

Here are some of the effects of a popularity measure of knowledge. It values work which has hooks designed to reach a broader audience. It values work which is fashionable and reflects conventional wisdoms over work which is innovative and unconventional. It values large fields over small (in larger fields, such as medicine, there is more to cite, and more people who can cite you, than in smaller fields). Zoologist Peter Lawrence’s advice to the cynical, citation-needing scholar is: ‘Choose the most popular species; it may be easier to publish unsound but trendy work on humans than an incisive study on a zebrafish’ (Lawrence, 2007). The most viewed article at PubMed is ‘Broad-spectrum anti-viral therapeutics’ (http://www.ncbi.nlm.nih.gov/pubmed/21818340), a topic that is sure to garner more interest, and therefore attract more citations, than an article about a single case of an unusual tropical virus in a poor country that could, for all we know, be the next AIDS. Craig et al. conclude that ‘an unofficial hierarchy of broad to niche, and high to low quality emerges’, as a consequence of which ‘an author will submit to a high impact, but broad-based, journal in preference to a journal of lower impact, but which is perhaps more suited to the subject matter of the manuscript in question’ (Craig et al., 2014: Chapter 11, this volume).

The logic of popularity that underpins citation counts can also influence editors’ decisions – they will be more likely to choose your paper if it has features which make it more popular, and thus enhance their journal’s impact factor. ‘Material that does not attract citations must be ditched’, says Smith (2006: 1130), ‘and editors must search for material and ways that will increase the Impact Factor of their journals’ (ibid.). He continues:

Malcolm Chiswick, at one time editor of Archives of Disease in Childhood, described how an obsession with Impact Factors can lead to what he termed an ‘impacted journal’. Everything readable and entertaining is cut in favour of material that will be cited. This means that a journal is designed for citing rather than reading and for authors (who can cite articles) rather than readers (who cannot). In the case of medical journals this means that the needs of researchers are put before the needs of ordinary doctors, even though for many general medical journals ordinary doctors far outnumber researchers as readers. A journal’s Impact Factor might rise but its readership declines.

(Ibid.)

Even the sources of popularity may be heavily skewed. Sometimes popularity (ostensible demand) is simply a function of availability (ready supply), rather than any generalized acknowledgement of intellectual merit. For instance, professional association journals may rank highly simply because they force-feed the market with free copies or with print or email subscriptions sent (sometimes annoyingly) to members. Alternatively, a high ranking may be the result of heavy promotion and news-stand sales. Mass-circulation, quasi-scholarly magazines create impact for the articles they publish just because they are circulated so widely.

Open access papers have also been shown to be more frequently cited, even to the extent of doubling citations (Brody et al., 2007; Harnad, 2014: Chapter 7, this volume; Kaiser, 2010; Kousha and Abdoli, 2010). Once again, the greater citation rate is not necessarily because their content is intellectually superior or their impact on the world greater. It is simply because more people can access them, and more readily, without being deterred by subscription walls. In the case of hybrid open access journals, research shows that open access articles can generate between 25 per cent and 250 per cent more citations than articles that are not freely available (Orsdel and Born, 2006). This means that authors who can afford to pay open access fees are more frequently cited, and this may be part of their calculation of return on investment.

Journal Impact Factors can also be skewed by editors who, during the review process, suggest the inclusion of additional citations to the journal to which the author is submitting. After all, it is in the interest of an author publishing in that journal that its impact factor be raised, and citing other articles in that journal will do just that.

Another frequently used quantitative, supply-and-demand measure of journal quality is a journal’s rejection rate. The higher the rejection rate, it is assumed, the better the quality of the published article. However, a high rejection rate adds a level of arbitrariness to the review process – the mild reservations of one reviewer working for a journal with a high rejection rate might lead to the rejection of an excellent piece of work. Rejection rate measures reduce the journal quality calculus to contingencies of supply and demand. This is a hangover from the era of print – a ratio of the number of pages of text submitted to the number of pages available in the journal in a given year. In the digital era, anything that meets a certain standard can readily be published. There are no longer fixed limits on the supply of publishing space – the denominator in this equation – as there were in the era of print journals. The numerator, meanwhile, is no more than a function of the size of a field. Of course, journals with names as expansive as Science and Nature, and with infrastructures that assure wide public exposure, will have high rejection rates. But small fields may produce consistently excellent work, a high proportion of which should be published. Why should a low rejection rate cast aspersions on a journal in a specialist field?

In the second half of the twentieth century, Eugene Garfield articulated his ‘law of concentration’, based on a logic of core versus peripheral knowledge. Core knowledge was evidenced by the considerable cross-citation between authors and their articles in elite journals. The periphery, meanwhile, cited the core but was little cited itself (Guédon, 2014: Chapter 3, this volume). These tendencies to concentration aligned with the cultural logic of a century that also spawned mass production, mass markets and mass-uniform culture. This was a century in which the logic of concentration was the logic of society. Perhaps, however, this cultural logic is at best anachronistic and at worst unacceptable in the century that follows. Mass markets have differentiated into a myriad of niches, and our production and product strategies now support customization. In popular and media cultures we increasingly recognize and honour diversity. We can support a kaleidoscope of fluid differences in digital culture because our costs of distribution are negligible. So it should be with knowledge cultures: these are trends we can and should follow. Finely grained, highly specified, localized representations of knowledge may be as impactful in the sites of their development and application as knowledge that has wide sources and broad application. For this reason, the key to the evaluation of knowledge must now be its epistemic perspicacity, not its qualities of ‘concentration’. Where concentrations are merely positional or circumstantial, we should make it our business to reduce them – for the sake of people in developing countries, emerging scholars, and people doing good work outside high-prestige research universities.

This self-fulfilling system for privileging a concentrated knowledge ‘core’ is also poorly suited to a new media environment in which knowledge and cultural creation is more broadly distributed. In this sense, citation-popularity rankings track the logic of the old media world which valued economies of scale, not the highly distributed world of contemporary new media and dispersed knowledge ecologies.

Knowledge evaluation challenge 4: network effects that privilege positional power over quality

Citations are not necessarily about the intellectual quality or social impact of a text, but about the degree to which an author and a text have been noticed and have positioned themselves to be noticed. Georg Franck calls this ‘the scientific economy of attention’ (Saukko, 2009). Citation counts reflect network biases and amplify the effects of circumstantial positional power. Distortions are produced by self-magnifying network effects.

The citation system rewards people who can forcefully work networks and find their way into journals with wider circulation, thus skewing its results to favour academic entrepreneurship ahead of intellectual content (Lawrence, 2007). It favours people intensely connected in a domain. After a while, you get to know who you really should cite in order to have an article which is respectably, conventionally articulated into the consensus view of the key players in a field. It encourages a citation barter system in which authors feel they need to mention friends, patrons and people to whom they owe a positional debt. You dutifully quote leaders in the field. You don’t openly confront contrasting views or conflicting results in case the people you mention might be your reviewer or a friend of your reviewer, and you don’t upset people who might cite you. It is also a good idea to quote people who are heavily cited in the hope that they might notice you and cite you, thus enhancing your visibility. It is a good idea to cite heavily the journal to which you are submitting, particularly across the previous two years that are counted in the Journal Impact Factor. Citation counts, in other words, often come to measure academic network positions and active network moves, but not necessarily the ultimate social utility of knowledge, nor its originality, nor its implications and consequences in terms of anticipated or unanticipated applications.

In these and other ways, citation metrics measure social power dynamics which are largely unrelated to criteria of intellectual merit or knowledge validity (Bornmann et al., 2008). ‘Creative discovery is not helped by measures that select for tough fighters and against more reflective modest people’, concludes Lawrence (2007). This is a system that works against women, younger researchers (Brischoux and Cook, 2009), people from non-Anglophone countries (Fischman et al., 2010; González-Alcaide et al., 2012; López-Illescas et al., 2009; Schuermans et al., 2010), and people with ideas and data that do not mesh well with the conventional wisdoms of those who dominate a field.

Positional network advantage is further exaggerated by the Journal Impact Factor, which has a secondary, circular influence on the number of citations that an article published in that journal will attract (Perneger, 2010). So does the citation of a paper in a high-impact journal, a phenomenon that Braun et al. (2010) call ‘induced citations’.

Far from being a measure of intellectual impact, then, citation counts become a self-reinforcing, solipsistic system of boosterism. The already inappropriate measure of popularity is exaggerated when popularity breeds further popularity. This algorithm not only reflects network positional distortions, it exacerbates them. A high personal citation count and Journal Impact Factor may be more a function of positional power in the market place than the quality of knowledge. This market-popularity logic creates a closed circle in which market visibility breeds market visibility.

This logic also fosters a herd mentality that is entirely inappropriate to a culture of innovation. One is tempted to cite what everyone else has cited because they have cited it (including the temptation to cite citations without having examined those texts sufficiently, or even at all, just because others have given them a vote of confidence by citing them). Examining a database of 34 million articles published between 1945 and 2005, Evans shows that as more articles became accessible online, whether through open access or commercial subscription, the articles and journals cited tended to become fewer and more recent. How does he explain this? Scholars, he says, are becoming more influenced by others’ choices of citation than by a close reading of the texts on their merits (Evans, 2008). As a consequence, fields hasten to consensus and conformity. ‘The way the reward system in science is set up presents an inhibitor to any research-driven change in the scientific communication system that focuses on its communicative function’ (Velden and Lagoze, 2009).

Joining the debate following The San Francisco Declaration on Research Assessment, the editor of Science, Bruce Alberts, summed up the prevailing view of citation counts and the Journal Impact Factor in the following terms – and with this, we will conclude this section of the chapter:

The misuse of the journal impact factor is highly destructive, inviting gaming of the metric that can bias journals against publishing important papers in fields (such as social sciences and ecology) that are much less cited than others (such as biomedicine). And it wastes the time of scientists by overloading highly cited journals such as Science with inappropriate submissions from researchers who are desperate to gain points from their evaluators. But perhaps the most destructive result of any automated scoring of a researcher’s quality is the ‘me-too-science’ that it encourages. Any evaluation system in which the mere number of a researcher’s publications increases his or her score creates a strong disincentive to pursue risky and potentially groundbreaking work, because it takes years to create a new approach in a new experimental context, during which no publications should be expected. Such metrics further block innovation because they encourage scientists to work in areas of science that are already highly populated, as it is only in these fields that large numbers of scientists can be expected to reference one’s work, no matter how outstanding.

(Alberts, 2013: 787)

Framing knowledge futures

If today’s knowledge systems are broken in places and on the verge of breaking in others, what, then, is to be done? Below, we present an agenda for the making of future knowledge systems which may optimize the affordances of the new, digital media. But, to begin this section, we need to make a declaration of interest. The first author is the Director of Common Ground Publishing, located in the Research Park at the University of Illinois (http://commongroundpublishing.com). We publish 69 journals in English and ten in Spanish. We have a backlist of nearly 20,000 articles. We have developed a cloud-based ‘semantic publishing’ system, Scholar (http://cgscholar.com). The first author was also Chair of the Journals Publication Committee of the American Educational Research Association from 2010 to 2013, with oversight of eight of the top-ranked journals in the field of education. Both of these roles have required us to grapple with all the issues outlined so far in this chapter. As a consequence, the hopes and aspirations for framing knowledge futures that we express here are things we have been trying to achieve, at least in part, in both of these practical capacities. We articulate these hopes and aspirations now in the form of the following agendas for the future of the academic journal.

Agenda 1: sustainable scholarly publishing

Beyond the open access/commercial publishing dichotomy, there is a question of resourcing models and sustainability. Academics’ time is not well spent playing amateur publisher. The key question is how to build sustainable resourcing models that require neither the cross-subsidy of academics’ time, nor the unjustifiable and unsustainable cost and price structures of the big publishers, nor punishing author fees. The challenge is to develop new business models, either in the form of academic socialism (institutional support for publishing by libraries or university presses, paid for by government or institutions) or lightweight commercial models which do not charge unconscionable author fees, subscription rates or per-article purchase prices.

Agenda 2: guardianship of intellectual property

How does one balance academics’ and universities’ interest in intellectual property with the public knowledge interest? The ‘gift economy’ also supports a ‘theft economy’ in which private companies profit from content supplied to them at no charge. Google copies content, mostly without permission and always without payment, and makes money from advertising alongside this content. The October 2008 settlement between Google and the Authors Guild, which distributes revenues from books Google has scanned in a number of US libraries, may in the course of time create as many new problems as it solves old ones (Albanese, 2008).

The key question here is how to establish an intellectual property regime which sustains intellectual autonomy, rather than a ‘giveaway’ economy which undervalues the work of the academy. Moreover, journal articles and scholarly monographs do not need to have one or other of the ‘free’ copyright licences upon which many of the new domains of social production depend, such as the Creative Commons licence (Lessig, 2001) that underwrites Wikipedia, or the General Public License (Stallman, 2002) that locks free or open source software and its derivatives into communal ownership (Fitzgerald and Pappalardo, 2007). This is because authors are strongly named in academic knowledge regimes – the credibility of a work is closely connected to the credentials of an author, and copyright strengthens this claim to credibility (Saunders, 2014: Chapter 10, this volume).

Furthermore, the imperatives of attribution and ‘moral rights’ are rigorously maintained through academic citation systems. A (re)user of copyrighted knowledge, conversely, has extraordinary latitude in ‘fair use’, quoting and paraphrasing for the purposes of review and criticism. A version of ‘remix culture’, to use Lessig’s portrayal of the new world of digital creativity (Lessig, 2008), has always been integral to academic knowledge systems. However, to the extent that it is essential to build on the work of others, this is already built into conventional copyright regimes (Cope, 2001). Private author-ownership is also integral to academic freedom: authors in universities are allowed to retain individual ownership of copyright in published works, but not the rights to patents or course materials (Foray, 2004). This is also why many open access journals retain traditional copyright licences. Academics, moreover, are not necessarily good stewards of copyright, such as when they hand over these rights for no return to commercial publishers who subsequently sell this self-same content back to the institutions for which they work, and at monopoly prices. As universities take a greater interest in content production in a regime of academic socialism, they should in all probability take a greater interest in copyright – whether that be libraries managing repositories or university presses publishing content – which they can then make available for free or sell at a reasonable price.

Agenda 3: criterion-referenced review

What does it mean to perform high-quality intellectual work? Rather than soliciting unstructured commentary, we should require referees to consider multiple criteria and to score against each: the significance of the questions addressed, the setting of an intellectual agenda, rigour of investigation, originality of ideas, contribution to understanding and practical utility – some of the criteria that emerged in research conducted as part of the British Research Assessment Exercise (Wooding and Grant, 2003). Or, with a more practical text focus, we might ask reviewers systematically to address clarity of thematic focus, relationships to the literature, research design, data quality, development or application of theory, clarity of conclusions and quality of communication. Or, with an eye to more general knowledge processes, we might ask referees to evaluate a report of intellectual work for its specifically experiential, empirical, categorical, theoretical, analytical, critical, applicable and innovative qualities (Kalantzis and Cope, 2012b). Clear disciplinary and metadisciplinary criteria will increase referees’ accountability and may afford outsiders an equitable opportunity to break into insider networks.
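
By way of illustration only – the five-point scale and the layout are our assumptions, not a prescribed instrument – a criterion-referenced referee report using the practical, text-focused criteria above might be recorded along these lines:

    # A sketch of a criterion-referenced referee report. The criteria come from
    # the list above; the 1-5 scale and field layout are illustrative assumptions.
    review = {
        "clarity of thematic focus": {"score": 4, "comment": "Question is well framed."},
        "relationships to the literature": {"score": 3, "comment": "Key recent studies are missing."},
        "research design": {"score": 4, "comment": "Sampling strategy is sound."},
        "data quality": {"score": 3, "comment": "Response rate needs to be reported."},
        "development or application of theory": {"score": 2, "comment": "Framework is asserted, not developed."},
        "clarity of conclusions": {"score": 4, "comment": "Claims follow from the data."},
        "quality of communication": {"score": 4, "comment": "Readable; trim section 3."},
    }

    # An overall recommendation can then be derived from, and justified by,
    # the per-criterion scores, rather than replacing them.
    overall = sum(item["score"] for item in review.values()) / len(review)

A report in this form makes the referee’s judgement auditable criterion by criterion, which is the kind of accountability this agenda calls for.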

Agenda 4: greater reflexivity and recursiveness in the peer-review process

Digital technologies and new media cultures suggest a number of possibilities for the renovation of the knowledge system of the scholarly journal. Open peer review, where authors and referees know each other’s identities, or blind reviews that are made public, may well produce greater accountability on the part of editors and referees, and provide evidence of, and credit for, the contribution a referee has made to the reconstruction of a text. Reviews could be dialogical, with or without the reviewer’s identity declared, instead of the unidirectional finality of an accept/reject/rewrite judgement. The referee could be reviewed – by authors, moderators or other third-party referees – and their reviews weighted for their accumulated, community-ascribed value as a referee. In addition, whether review texts and decision dialogues are on public record or not, they should be open to independent audit for abuses of positional power.

Cloud-based digital workflow opens exciting possibilities for the emergence of a new kind of knowledge artefact – an article that evolves endlessly under the influence of public and private dialogue, the public parts of which would be visible. Instead of a lock-step march to a single point of publication, then a near-irrevocable fixity to the published record, a more incremental process of knowledge recording and refinement is straightforwardly possible in the digital era. This could even end the distinction between pre-publication refereeing and post-publication review. Re-versioning would allow initial, pre-refereeing formulations to be made visible, as well as the dialogue that contributed to rewriting for publication. Then, as further commentary and reviews come in, the author could correct and reformulate, thus opening the published text to continuous improvement.

Agenda 5: more integrative, collaborative and inclusive knowledge cultures

Instead of the heroic author shepherding a text to a singular moment of publication, the ‘social web’ and interactive potentials intrinsic to the new media point to more broadly distributed, more collaborative knowledge futures. What has been called Web 2.0 (Hannay, 2007; O’Reilly, 2005), or the more interactive and extensively sociable application of the Internet, points to wider networks of participation, greater responsiveness to commentary, more deeply integrated bodies of knowledge and more dynamic, reflexive and faster-moving knowledge cultures.

The effect of a more open system would be to open entry to the republic of scholarly knowledge for people currently outside the self-enclosing circles of prestigious research institutions and highly-ranked journals. Make scholarly knowledge affordable to people without access through libraries to expensive institutional journal subscriptions, make the knowledge criteria explicit, add more accountability to the review process, allow all comers to get started in the process of the incremental refinement of rigorously validated knowledge, and you will find new knowledge – some adjudged to be manifestly sound and some not – emerging from enterprises, schools, hospitals, government agencies, professional offices, hobbyist organizations, business consultants and voluntary groups. Digital media infrastructures make this a viable possibility.

Another effect would be to change the global biases favouring the centre over the periphery in the journals system. Approximately one-quarter of the world’s universities are in the anglophone world. However, the vast majority of the world’s academic journal articles are from academics working in anglophone countries. A more comprehensive and equitable global knowledge system would reduce this systemic bias. Openings in the new media include developments in machine translation and the role of knowledge schemas, semantic mark-up and tagging to assist discovery and access across different languages. They also speak to a greater tolerance for ‘accented’ writing in English as a non-native language.

Agenda 6: new types of multimodal scholarly text

Nearly five decades ago, J.C.R. Licklider wrote of the deficiencies of the book as a source of knowledge, and imagined a future of ‘procognitive systems’ (Licklider, 1965). He was anticipating a completely new knowledge system. That system is not with us yet. In the words of Jean-Claude Guédon, we are still in the era of digital incunabula (Guédon, 2001). In escaping the confines of print-lookalike formats, however, expansive possibilities present themselves. With semantic mark-up, large corpora of text might be opened up to data-mining and cybermashups (Cope et al., 2011; Sompel and Lagoze, 2007; Wilbanks, 2007). Knowledge representations can present more of the world in less mediated form in datasets, images, videos and sound recordings (Fink and Bourne, 2007; Kalantzis and Cope, 2012a; Lynch, 2007). Whole disciplines limited in their publication opportunities by traditional textual exegesis – such as the arts, media and design – might formally be brought into academic knowledge systems in the actual modalities of their practice. New units of knowledge might be created at levels of granularity other than the singular article of today’s journals system: fragments of evidence and ideas contributed by an author within an article, and curated collections and mashups above the level of an article, with sources duly credited by virtue of electronically tagged tracings of textual and data provenance.

Agenda 7: reliable use metrics

The citation count business that we have described in this chapter is not just a bad business. It is deeply damaging to the principles of scholarly work and the values of science. Are the fundamental premises of citation counts so flawed that they are beyond redemption? Or can they be improved sufficiently to be salvaged?

The frequently drawn conclusion that citation counting lacks validity has resulted in design improvements in the mechanics of counting, without changing its basic premises. Thomson ISI has been working on its databases with some additional metrics, such as cited half-life, the five-year impact factor and the article influence score (Andres, 2009; Gorraiz and Schloegl, 2008; Papavlasopoulos et al., 2010). Competitor Elsevier is working to catch up with, and perhaps at some point out-compete, Thomson’s Web of Knowledge with its Scopus database. Physicist Jorge Hirsch has developed the h-index, where a scholar’s h is the largest number such that h of their articles have each received at least h citations – h = 5, for instance, if five of your published articles have each been cited at least five times. This measure is designed to evaluate the whole careers of individual scholars, or groups of scholars, or journals which have produced consistently highly-cited articles (Durieux and Gevenois, 2010; Hunt et al., 2010; Rieder et al., 2010). And yet another metric, the ‘Eigenfactor’, is ‘built on an algorithm that positions journals as hubs in a network where journal impact is based not only on the number of citations received, but also the quality and level of connectivity in the network (“well connected journals”) of the citing journals’ (Stewart, 2010) – in the manner of Google’s PageRank.
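
To make the mechanics of the h-index concrete, here is a minimal sketch of the calculation in code; the function name and the citation counts are invented for illustration:

    def h_index(citation_counts):
        # Sort the per-paper citation counts from highest to lowest.
        counts = sorted(citation_counts, reverse=True)
        h = 0
        # h is the largest number such that h papers have at least h citations each.
        for rank, citations in enumerate(counts, start=1):
            if citations >= rank:
                h = rank
            else:
                break
        return h

    # Five papers cited five or more times each yields h = 5, regardless of
    # whether the most-cited paper has 50 citations or 5,000.
    print(h_index([50, 12, 7, 5, 5, 2, 1]))  # prints 5

Even written out this briefly, the index’s character as a single-number compression is plain: it is indifferent to what any of those citations were for.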

Meanwhile, other citation counting services have been established, including CiteSeerX (http://citeseerx.ist.psu.edu) and Google Scholar (http://scholar.google.com) (de Bellis, 2009; Falagas et al., 2008; Harzing and Van der Wal, 2008; Kousha and Thelwall, 2007; Levine-Clark and Gil, 2009; Norris and Oppenheim, 2007; Schroeder, 2007). For all its touted openness, Google Scholar may be little better. In response to a query by a scholarly publisher as to why its 20 or so journals had not yet been indexed despite years of requests, the Google email respondent simply replied: ‘We are currently unable to provide a time frame for when your content will be made available on Google Scholar.’

Usage counts are now also being brought into the mix, including MESUR (http://www.mesur.org/MESUR.html) (Banks and Dellavalle, 2008), as are download counts, or the number of times an article is accessed by users (Davies, 2009). Standards for the measurement of downloads have been established by the not-for-profit COUNTER organization (http://www.projectcounter.org/). Download metrics at least come closer to serving as a record of readership, although they still do not tell you whether the paper was actually read, nor whether the downloaded item was the one the reader was looking for, nor how far downloaded papers are subsequently circulated or appear in institutional repositories, nor whether people come back to the same article multiple times rather than download and store it; in other words, downloads may also be a flawed proxy for use.

Entering the broader realm of web metrics, altmetrics is a family of tools which analyse interactions across a range of web platforms, including Twitter and Mendeley (http://altmetrics.org/tools/). One of these tools is ImpactStory. When a user uploads slides, code, datasets and articles, the software combs the web to create an impact report based not only on citations, but on bookmarks, downloads and tweets (http://impactstory.org/). Peer Evaluation provides a platform for multifaceted academic evaluation (www.peerevaluation.org).

More and better counting is certainly needed if we are to evaluate the impact of published scholarly work in more reliably quantitative terms. What we need is neither Thomson-selected citations nor unreliably harvested Google citations, but every citation, collected into a database and unambiguously verified at the time of authorship. We could ask authors to tag the kind of citation being made (agreement, distinction, disagreement, etc.). We could collect download statistics more extensively and consistently. We could ask readers to rate articles, and weight their ratings by their rater-ratings. We could ask for a quick review of every article read, and record and rate the breadth and depth of a scholar’s reading or a reader’s rating credentials. We could harvest the qualitative commentary found alongside citations.
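
As a sketch of what such data might look like – the field names, citation-type categories and weighting rule below are hypothetical illustrations, not an existing metadata standard – an author-tagged citation record and a rater-weighted article rating could be as simple as this:

    # A hypothetical record for a single, author-tagged citation. Field names,
    # identifiers and the list of citation types are illustrative assumptions only.
    tagged_citation = {
        "citing_work": "doi:10.0000/example.2014.001",
        "cited_work": "doi:10.0000/example.2009.042",
        "citation_type": "distinction",  # e.g. agreement, distinction, disagreement, method, data, quotation
        "location": "p. 7, para. 2",
        "commentary": "Contrasts our sampling frame with the earlier study.",
    }

    def weighted_article_rating(ratings):
        # Weight each reader's rating of an article by that reader's own
        # rater-rating, as suggested above; a simple weighted mean, as a sketch.
        total_weight = sum(weight for _, weight in ratings)
        if total_weight == 0:
            return None
        return sum(score * weight for score, weight in ratings) / total_weight

    # Three readers rate an article out of 5; their rater-ratings are 0.9, 0.5 and 0.2.
    print(weighted_article_rating([(4, 0.9), (5, 0.5), (2, 0.2)]))  # about 4.06

Records of this kind would preserve precisely the epistemic distinctions – agreement, distinction, disagreement – that a raw count flattens.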

Much work still needs to be done. In an era of self-calibrating social media, sophisticated data mashups and reflexive information algorithms, the citation-count impact factor seems a crude throwback to a simpler era. We have made the case in this chapter that the raw citation counting practices of today’s impact factor are inexcusably flawed, providing unnecessarily poor service to our contemporary epistemic cultures. At times, the impact factor even corrupts these cultures. The time has come for it to be replaced.

Agenda 8: valid impact measures

Citation is important. It is a key mechanism for making the associative links that constitute webs of knowledge. However, we need to be able to assess the varied qualities of citation, locating citation as just one form of evidence in a balanced and holistic analysis of scholarly impact.

The ultimate utility of knowledge – its actual impact – is on the broader social world, not the self-enclosed world of mutual citation. How does one evaluate empirically demonstrable evidence of the utility of knowledge – its actual impact, in other words – rather than relying on the tendentious proxy that is the impact factor, a number so shoddily derived that it amounts to a 42 of knowledge evaluation?

To answer this question, we need to direct our attention to the substantive dimensions of impact. Whether in relation to a single work or a portfolio of works, an individual or a group, we would need to know about:

1. The origins and context of the work: including connections with earlier work and the context in which this work emerged.

2. The processes of the creation of the work: including a description of its review history; referee reports; responses to these reports; and, in the case of jointly authored works, an estimate of proportional contributions.

3. Disciplinary and interdisciplinary impacts: including who has cited this work, how they have cited it, and the significance of their citations; reviews; other dissemination activities, such as conference presentations and workshops based on the work, and feedback or evaluation data from these; and other local, national and international impacts upon the field.

4. Pedagogical impacts: including the number of students using this work, how they use it, and evidence of student learning outcomes.

5. Community impacts: including stakeholder data: surveys, documented feedback, public commentary; applications (products, practices, policies, public attitudes, etc.); the magnitude of community impacts to date; potential future magnitudes; optimal conditions of wide applicability; risk assessment; and public intellectual leadership and communicating discoveries to a broader public.

6. Underlying research: including the context of this work in relation to research programmes and grants; flow-on research activity, including grant applications and grants awarded; intellectual property; and commercialization potentials.

7. Indicators of collegial ties: including collaborations involved in this work or sparked by this work; evidence of the impact of the work on departments and research groupings; and the establishment of interdisciplinary and international relationships.

8. Related publications: including published and projected successor works that build a body of work; connections and differentiating factors between this work and successor works; and productivity, where this work is set in the context of a number of works produced over a defined time period.

9. Significance in development of thinking: including this work in the context of an intellectual/academic career biography; the significance/relevance of the work in intellectual development; data/empirical discoveries and the development of concepts, etc.

10. Future directions and trajectory: whether a line of thinking has been brought to a satisfactory conclusion and/or logical next steps taken in an intellectual trajectory; the estimated half-life of this work and where it is now in that impact scenario; and plans to build on this work by extending it, developing it or seeking funding, etc.

We have suggested this ‘holistic impact metric’ as a more rigorously determined and systematically articulated form of ‘social proof’ (McCann, 2009). The process we have created requires an author, or a scholarly group, to demonstrate, via a criterion-referenced retrospective exegesis, the impact of a published work or a portfolio of published works, self-assessing and rating impact against ten dimensions of substantive impact. Peers then review the portfolio and exegesis, again reporting in both qualitative and quantitative terms against each impact criterion. Self- and peer-assessments can then be moderated. In other words, we would ask scholarly evaluators to read whole texts alongside author exegeses and independent assessments of the impact of their ideas (Wooding and Grant, 2003). What did this research or these ideas actually do in a field? Instead of dubious numerical proxies, we would ask the question directly: what was the actual impact of this intellectual work on the world?
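
A minimal sketch of how self- and peer-ratings against the ten dimensions listed above might be moderated into a single profile follows; the 0–5 scale, the equal weighting of author and peer panel, and the simple averaging rule are illustrative assumptions of ours, not part of the proposal itself:

    # Hypothetical moderation of self- and peer-ratings across the ten impact
    # dimensions listed above. Scale, weighting and averaging rule are
    # illustrative assumptions only.
    CRITERIA = [
        "origins and context", "creation processes", "disciplinary impacts",
        "pedagogical impacts", "community impacts", "underlying research",
        "collegial ties", "related publications", "development of thinking",
        "future directions",
    ]

    def moderate(self_scores, peer_scores_list):
        # Average the peer panel's scores per criterion, then average that
        # figure with the author's self-assessment to produce a moderated profile.
        profile = {}
        for i, criterion in enumerate(CRITERIA):
            peer_mean = sum(peer[i] for peer in peer_scores_list) / len(peer_scores_list)
            profile[criterion] = round((self_scores[i] + peer_mean) / 2, 2)
        return profile

    # One author self-assessment and two peer assessments, each scored 0-5.
    self_scores = [4, 3, 4, 2, 5, 3, 3, 4, 4, 3]
    peer_scores = [[3, 3, 4, 2, 4, 3, 2, 4, 3, 3], [4, 2, 3, 2, 5, 2, 3, 3, 4, 2]]
    print(moderate(self_scores, peer_scores))

The point of such a profile is not yet another single number, but a criterion-by-criterion record that evaluators, having read the work, can interrogate.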

Concluding questions

If it is the role of the scholarly knowledge system to produce deeper, broader and more reliable knowledge than is possible in everyday, casual experience, what do we need to do to honour and extend this tradition rather than allow it to break, a victim of the disruptive forces of the new media?

The answers will not only demand the development of new publishing processes. They will require the construction of new knowledge systems. This inevitably leads us to an even larger question: how might renewed scholarly knowledge systems support a broader social agenda of intellectual risk-taking, creativity and innovation? How is renovation of our academic knowledge systems a way to address the heightened expectations of a ‘knowledge society’? And what are the affordances of the digital media which might support reform?

Whatever the models that emerge, the knowledge systems of the near future could and should be very different from those of our recent past. The sites of formal knowledge validation and documentation will be more dispersed across varied social sites. They will be more global. The knowledge processes they use will be more reflexive, and so more thorough and reliable. Knowledge will be made available more quickly. Through semantic publishing, knowledge will be more discoverable and open to disaggregation, reaggregation and reinterpretation. There will be much more of it, but it will be much easier to navigate. The Internet offers us these affordances. It will allow us to define and apply new epistemic virtues. It is our task as knowledge workers to realize the promise of our times and to create more responsive, equitable and powerful knowledge ecologies.

References

Aalbersberg, I. J., Heeman, F. The article of the future. In: Cope B., Phillips A., eds. The Future of the Academic Journal. Cambridge, UK: Chandos Publishing, 2014.

Adler, R., Ewing, J., Taylor, P. Citation statistics: a report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS), 2008.

Albanese, A. Harvard slams Google settlement; others react with caution. Library Journal. 2008. [30 October].

Alberts, B. Impact factor distortions. Science. 2013; 340:787.

Andres, A. Measuring Academic Research: How to Undertake a Bibliometric Study. Oxford, UK: Chandos Publishing, 2009.

Antonoyiannakis, M., Mitra, S. Editorial: is PRL too large to have an ‘impact’? Physical Review Letters. 2009; 102. Available from: http://publish.aps.org/edannounce/PhysRevLett.102.060001

Bacon, F. The New Organon, 1620.

Banks, M. A., Dellavalle, R. Emerging alternatives to the impact factor. OCLC Systems & Services. 2008; 24:167–173.

Bauwens, M. The political economy of peer production. CTheory. 2005. Available from: http://www.ctheory.net/printer.aspx?id=499

Benkler, Y. The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven, CT: Yale University Press, 2006.

Bergman, S. S. The scholarly communication movement: highlights and recent developments. Collection Building. 2006; 25:108–128.

Bergstrom, C. T., Bergstrom, T. C. The economics of ecology journals. Frontiers in Ecology and Evolution. 2006; 4:488–495.

Bergstrom, T. C., Lavaty, R. How often do economists self-archive? Santa Barbara: Department of Economics, University of California; 2007.

Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities, 2003.

Bethesda Statement on Open Access Publishing, 2003.

Biagioli, M. From book censorship to academic peer review. Emergences: Journal for the Study of Media & Composite Cultures. 2002; 12:11–45.

Bornmann, L., Mutz, R., Neuhaus, C., Daniel, H.-D. Citation counts for research evaluation: standards of good practice for analyzing bibliometric data and presenting and interpreting results. Ethics in Science and Environmental Politics. 2008; 8:93–102.

Braun, T., Glänzel, W., Schubert, A. On sleeping beauties, princes and other tales of citation distributions. Research Evaluation. 2010; 19:195–202.

Brin, S., Page, L. The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems: Special Issue on the Seventh International World-Wide Web Conference, Brisbane, Australia. 1998; 30(1–7):107–117.

Brischoux, F., Cook, T. R. Juniors seek an end to the impact factor race. Bioscience. 2009; 59:638–639.

Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S., et al. Incentivizing the open access research web: publication archiving, data-archiving and scientometrics. CTWatch Quarterly. 3, 2007.

Browman, H. I., Stergiou, K. I. Factors and indices are one thing, deciding who is scholarly, why they are scholarly, and the relative value of their scholarship is something else entirely. Ethics in Science and Environmental Politics. 2008; 8:1–3.

Brumback, R. A. Worshiping false idols: the impact factor dilemma. Journal of Child Neurology. 2008; 23:365–367.

Budd, J. M. Citations and knowledge claims: sociology of knowledge as a case in point. Journal of Information Science. 1999; 25:265–274.

Burnham, J. C. The evolution of editorial peer review. The Journal of the American Medical Association. 263, 1990.

Bush, V. As we may think. The Atlantic Magazine. 1945. Available from: http://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881

Butler, L. Using a balanced approach to bibliometrics: quantitative performance measures in the Australian Research Quality Framework. Ethics in Science and Environmental Politics. 2008; 8:83–92.

Cameron, B. D. Trends in the usage of ISI bibliometric data: uses, abuses, and implications. Ryerson University; 2005.

Campbell, P. Escape from the impact factor. Ethics in Science and Environmental Politics. 2008; 8:5–7.

Cassella, M., Calvi, L. New journal models and publishing perspectives in the evolving digital environment. IFLA Journal. 2010; 36:7–15.

Clarke, R. The cost profiles of alternative approaches to journal publishing. First Monday. 12, 2007.

Cope, B. Content development and rights in a digital environment. In: Cope B., Freeman R., eds. Digital Rights Management and Content Development, Vol. 2.4, Technology Drivers Across the Book Production Supply Chain, From the Creator to the Consumer. Melbourne: Common Ground; 2001:3–16.

Cope, B., Kalantzis, M. Designs for social futures. In: Cope B., Kalantzis M., eds. Multiliteracies: Literacy Learning and the Design of Social Futures. London: Routledge; 2000:203–234.

Cope, B., Kalantzis, M. From Gutenberg to the Internet: how digitisation transforms culture and knowledge. Logos: The Journal of the World Book Community. 2010; 21:103–130.

Cope, B., Kalantzis, M., Magee, L. Towards a Semantic Web: Connecting Knowledge in Academic Research. Cambridge, UK: Woodhead Publishing, 2011.

Craig, I. D., Ferguson, L., Finch, A. T. Journals ranking and impact factors: how the performance of journals is measured. In: Cope B., Phillips A., eds. The Future of the Academic Journal. Cambridge, UK: Chandos Publishing, 2014.

Creaser, C. The role of the academic library. In: Cope B., Phillips A., eds. The Future of the Academic Journal. Cambridge, UK: Chandos Publishing, 2014.

Crow, R. Income models for open access. Washington DC: Scholarly Publishing & Academic Resources Coalition; 2009.

Davies, J. E. Libraries and the future of the journal: dodging the crossfire in the e-revolution; or leading the charge? In: Cope B., Phillips A., eds. The Future of the Academic Journal. Oxford, UK: Chandos Publishing, 2009.

De Bellis, N. Bibliometrics and Citation Analysis: From the Science Citation Index to Cybermetrics. Lanham, MD: Scarecrow Press, 2009.

De Kemp, A., Rahm, E. Comparing the scientific impact of conference and journal publications in computer science. Information Services & Use. 2008; 28:127–128.

Delgado, J. E., Fischman, G. E. The future of Latin American academic journals. In: Cope B., Phillips A., eds. The Future of the Academic Journal. Cambridge, UK: Chandos Publishing, 2014.

Dewatripont, M., Ginsburgh, V., Legros, P., Walckiers, A. Study on the economic and technical evolution of the scientific publication markets in Europe. Brussels: European Commission; 2006.

DORA, The San Francisco Declaration on Research Assessment, 2013.

Durieux, V., Gevenois, P. A. Bibliometric indicators: quality measurements of scientific publication. Radiology. 2010; 255:342–351.

Edlin, A. S., Rubinfeld, D. L. Exclusion or efficient pricing? The ‘big deal’ bundling of academic journals. Berkeley: University of California; 2004.

Etter, J.-F., Stapleton, J. Citations to trials of nicotine replacement therapy were biased toward positive results and high-impact-factor journals. Journal of Clinical Epidemiology. 2009; 62:831–837.

Evans, J. A. Electronic publication and the narrowing of science and scholarship. Science. 2008; 321:395–399.

Falagas, M. E., Kouranos, V. D., Arencibia-Jorge, R., Karageorgopoulos, D. E. Comparison of SCImago Journal Rank Indicator with Journal Impact Factor. FASEB Journal. 2008; 22:2623–2628.

Favaloro, E. J. Measuring the quality of journals and journal articles: the impact factor tells but a portion of the story. Seminars in Thrombosis & Hemostasis. 2008; 34:007–025.

Favaloro, E. J. The Journal Impact Factor: don’t expect its demise any time soon. Clinical Chemistry & Laboratory Medicine. 2009; 47:1319–1324.

Fersht, A. The most influential journals: impact factor and Eigenfactor. Proceedings of the National Academy of Sciences of the United States of America. 2009; 106(17):6883–6884.

Finch, J. Accessibility, sustainability, excellence: how to expand access to research publications. Report of the Working Group on Expanding Access to Published Research Findings. London, 2012.

Fink, J. L., Bourne, P. E. Reinventing scholarly communication for the electronic age. CTWatch Quarterly. 3, 2007.

Fischman, G. E., Alperin, J. P., Willinsky, J. Visibility and quality in Spanish-language Latin American scholarly publishing. Information Technologies and International Development. 6, 2010.

Fitzgerald, B., Pappalardo, K. The law as cyberinfrastructure. CTWatch Quarterly. 3, 2007.

Foray, D. The Economics of Knowledge. Cambridge, MA: MIT Press, 2004.

Foucault, M. The Archaeology of Knowledge and The Discourse on Language. New York: Vintage Books, 1982.

Galloway, A. R., Thacker, E. The Exploit: A Theory of Networks. Minneapolis, MN: University of Minnesota Press, 2007.

Garfield, E. Commentary: fifty years of citation indexing. International Journal of Epidemiology. 2006; 35:1127–1128.

Gherab Martín, K., González Quirós, J. L. Academic journals in the e-science era. In: Cope B., Phillips A., eds. The Future of the Academic Journal. Cambridge, UK: Chandos Publishing, 2014.

Ginsparg, P. Next-generation implications of open access. CTWatch Quarterly. 3, 2007.

González-Alcaide, G., Valderrama-Zurián, J. C., Aleixandre-Benavent, R. The impact factor in non-English-speaking countries. Scientometrics. 2012; 92:297–311.

Gorraiz, J., Schloegl, C. A bibliometric analysis of pharmacology and pharmacy journals: Scopus versus Web of Science. Journal of Information Science. 2008; 34:715–725.

Gowers, T. The cost of knowledge, 2012. Available from: http://gowers.files.wordpress.com/2012/02/elsevierstatementfinal.pdf

Grafton, A. The Footnote: A Curious History. London: Faber and Faber, 1997.

Guédon, J.-C. In Oldenburg’s long shadow: librarians, research scientists, publishers, and the control of scientific publishing. Association of Research Libraries, Conference Proceedings, 2001.

Guédon, J.-C. Sustaining the ‘great conversation’: the future of scholarly and scientific journals. In: Cope B., Phillips A., eds. The Future of the Academic Journal. Cambridge, UK: Chandos Publishing, 2014.

Hannay, T. Web 2.0 in science. CTWatch Quarterly. 3, 2007.

Harnad, S. Validating research performance metrics against peer rankings. Ethics in Science and Environmental Politics. 2008; 8:103–107.

Harnad, S. The post-Gutenberg open access journal. In: Cope B., Phillips A., eds. The Future of the Academic Journal. Cambridge, UK: Chandos Publishing, 2014.

Harris, A. How Google is killing organic search, 2013. Available from: http://blog.tutorspree.com/post/54349646327/death-of-organic-search

Harvard Open Access Project, Notes on the Fair Access to Science and Technology Research Act, 2013.

Harzing, A.-W. K., Van der Wal, R. Google Scholar as a new source for citation analysis. Ethics in Science and Environmental Politics. 2008; 8:61–73.

Haslam, N., Laham, S. M. Quality, quantity, and impact in academic publication. European Journal of Social Psychology. 2010; 40:216–220.

Hemmingsson, A., Mygind, T., Skjennald, A., Edgren, J., Rogers, L. F. Manipulation of impact factors by editors of scientific journals. American Journal of Roentgenology. 2002; 178:767.

Higher Education Funding Council for England. Report on the Pilot Exercise to Develop Bibliometric Indicators for the Research Excellence Framework. 2009; 3.

Horrobin, D. F. The philosophical basis of peer review and the suppression of innovation. Journal of the American Medical Association. 263, 1990.

Hunt, G. E., Cleary, M., Walter, G. Psychiatry and the Hirsch h-index: the relationship between Journal Impact Factors and accrued citations. Harvard Review of Psychiatry. 2010; 18:207–219.

Husserl, E. The Crisis of European Sciences and Transcendental Phenomenology. Evanston, IL: Northwestern University Press, 1970 [1954].

Ioannidis, J. P. A. Why most published research findings are false. PLoS Medicine. 2005; 2:696–701.

Jackson, R., Richardson, M. Gold OA: the future of the academic journal? In: Cope B., Phillips A., eds. The Future of the Academic Journal. Cambridge, UK: Chandos Publishing, 2014.

Jefferson, T., Wager, E., Davidoff, F. Measuring the quality of editorial peer review. JAMA. 2002; 287:2786–2790.

Judson, H. F. Structural transformations of the sciences and the end of peer review. JAMA. 1994; 272:92–94.

Kaiser, J. Free journals grow amid ongoing debate. Science. 2010; 329:896–898.

Kakaes, K. The other academic freedom movement. Slate. 2012. Available from: http://www.slate.com/articles/technology/future_tense/2012/02/federal_research_public_access_act_the_research_works_act_and_the_open_access_movement_.html

Kalantzis, M., Cope, B. Literacies. Cambridge, UK: Cambridge University Press, 2012.

Kalantzis, M., Cope, B. New Learning: Elements of a Science of Education. Cambridge, UK: Cambridge University Press, 2012.

Kapitzke C., Peters M.A., eds. Global Knowledge Cultures. Rotterdam: Sense Publishers, 2007.

Kousha, K., Abdoli, M. The citation impact of open access agricultural research: a comparison between OA and non-OA publications. Online Information Review. 2010; 34:772–785.

Kousha, K., Thelwall, M. Google Scholar citations and Google Web/ URL citations: a multi-discipline exploratory analysis. Journal of the American Society for Information Science and Technology. 2007; 58:1055–1065.

Krell, F.-T. Should editors influence journal impact factors? Learned Publishing. 2010; 23:59–62.

Kress, G. Design and transformation: new theories of meaning. In: Cope B., Kalantzis M., eds. Multiliteracies: Literacy Learning and the Design of Social Futures. London: Routledge; 2000:153–161.

Lancho-Barrantes, B. S., Guerrero-Bote, V. P., Moya-Anegón, F. What lies behind the averages and significance of citation indicators in different disciplines? Journal of Information Science. 2010; 36:371–382.

Landoni, G., Pieri, M., Nicolotti, D., Silvetti, S., Landoni, P., et al. Self-citation in anaesthesia and critical care journals: introducing a flat tax. British Journal of Anaesthesia. 2010; 105:386–387.

Lane, J. Let’s make science metrics more scientific. Nature. 2010; 464:488–489.

Lawrence, P. A. The mismeasurement of science. Current Biology. 2007; 17:583–585.

Lawrence, P. A. Lost in publication: how measurement harms science. Ethics in Science and Environmental Politics. 2008; 8:9–11.

Lee, K., Bero, L. What authors, editors and reviewers should do to improve peer review. Nature. 2006. Available from: http://www.nature.com/nature/peerreview/debate/nature05007.html

Lessig, L.The Future of Ideas: The Fate of the Commons in a Connected World. New York: Random House, 2001.

Lessig, L.Remix: Making Art and Commerce Thrive in the Hybrid Economy. New York: Penguin Press, 2008.

Levine, F. J., The Finch Report and open access in social science from the US side of the pond. Paper presented at the Academy of Social Sciences Conference on Implementing Finch, 30 November 2012, London. 2012.

Levine, T. R. Rankings and trends in citation patterns of communication journals. Communication Education. 2010; 59:41–51.

Levine-Clark, M., Gil, E. L. A comparative citation analysis of Web of Science, Scopus, and Google Scholar. Journal of Business & Finance Librarianship. 2009; 14:32–46.

Licklider, J. C. R. Libraries of the Future. Cambridge, MA: MIT Press, 1965.

López-Illescas, C., de Moya Anegón, F., Moed, H. F. Comparing bibliometric country-by-country rankings derived from the Web of Science and Scopus: the effect of poorly cited journals in oncology. Journal of Information Science. 2009; 35:244–256.

Lynch, C. The shape of the scientific article in the developing cyberinfrastructure. CTWatch Quarterly. 3, 2007.

Mabe, M. A., Amin, M. Dr Jekyll and Dr Hyde: author-reader asymmetries in scholarly publishing. Aslib Proceedings. 2002; 54:149–157.

Mavrogenis, A. F., Ruggieri, P., Papagelopoulos, P. J. Editorial: self-citation in publishing. Clinical Orthopaedics and Related Research. 2010; 468(10):2803–2807.

McCabe, M. J., Nevo, A., Rubinfeld, D. L. The pricing of academic journals. Berkeley: University of California; 2006.

McCann, S. Social proof: a tool for determining authority. In the Library with the Lead Pipe. 2009. Available from: http://www.inthelibrarywiththeleadpipe.org

Meho, L. I. The rise and rise of citation analysis. Physics World. 2007; 20:32–36.

Meyers, B. Peer review software: has it made a mark on the world of scholarly journals?. Aries Systems Corporation; 2004.

Monbiot, G. Academic publishers make Murdoch look like a socialist. The Guardian, 29 August 2011. Available from: http://www.theguardian.com/commentisfree/2011/aug/29/academic-publishers-murdoch-socialist

Morgan Stanley. Scientific publishing: knowledge is power. London: Morgan Stanley Equity Research Europe; 2002.

Neff, B. D., Olden, J. D. Not so fast: inflation in impact factors contributes to apparent improvements in journal quality. Bioscience. 2010; 60:455–459.

Norris, M., Oppenheim, C. Comparing alternatives to the Web of Science for coverage of the social sciences’ literature. Journal of Informetrics. 2007; 1(2):161–169.

O’Connor, S. J. Citations, impact factors and shady publication practices: how should the lasting clinical and social value of research really be measured? European Journal of Cancer Care. 2010; 19:141–143.

Office of Science and Technology Policy. Increasing access to the results of federally funded scientific research. Washington DC: Executive Office of the President; 2013.

Ogden, T. L., Bartley, D. L. The ups and downs of journal impact factors. Annals of Occupational Hygiene. 2008; 52:73–82.

Opderbeck, D. W. The penguin’s paradox: the political economy of international intellectual property and the paradox of open intellectual property models. Stanford Law & Policy Review. 2007; 18:101.

O’Reilly, T. What is Web 2.0? Design patterns and business models for the next generation of software. O’Reilly, 30 September 2005. Available from: http://oreilly.com/web2/archive/what-is-web-20.html

Papavlasopoulos, S., Poulos, M., Korfiatis, N., Bokos, G. A non-linear index to evaluate a journal’s scientific impact. Information Sciences. 2010; 180:2156–2175.

Pauly, D., Stergiou, K. I. Re-interpretation of ‘influence weight’ as a citation-based index of new knowledge (INK). Ethics in Science and Environmental Politics. 2008; 8:75–78.

Pellegrino, J. W., Chudowsky, N., Glaser, R.Knowing What Students Know: The Science and Design of Educational Assessment. Washington, DC: National Academies Press, 2001.

Perneger, T. V. Citation analysis of identical consensus statements revealed journal-related bias. Journal of Clinical Epidemiology. 2010; 63:660–664.

Peters, M. A. Knowledge Economy, Development and the Future of Higher Education. Rotterdam: Sense Publishers, 2007.

Peters, M. A., Britez, R. G. Open Education and Education for Openness. Rotterdam: Sense Publishers, 2008.

Phillips, A. Business models in journals publishing. In: Cope B., Phillips A., eds. The Future of the Academic Journal. Cambridge, UK: Chandos Publishing, 2014.

Price, R. The dangerous ‘Research Works Act’. TechCrunch, 2012. Available from: http://techcrunch.com/2012/02/15/the-dangerous-research-works-act/

Rafols, I., Wilsdon, J. Just say no to impact factors. The Guardian, 17 May 2013. Available from: http://www.theguardian.com/science/political-science/2013/may/17/science-policy

Raymond, E. The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary. Sebastopol, CA: O’Reilly, 2001.

Rieder, S., Bruse, C. S., Michalski, C. W., Kleeff, J., Friess, H. The impact factor ranking: a challenge for scientists and publishers. Langenbeck’s Archives of Surgery. 2010; 395:57–61.

Rogers, L. F. Impact factor: the numbers game. American Journal of Roentgenology. 2002; 178:541–542.

Rossner, M., Van Epps, H., Hill, E. Show me the data. Journal of Cell Biology. 2007; 179:1091–1092.

Rossner, M., Van Epps, H., Hill, E. Irreproducible results: a response to Thomson Scientific. Journal of Cell Biology. 2008; 180:254–255.

Rowland, F. The peer-review process. Learned Publishing. 2002; 15:247–258.

Saukko, P. The role of international journals in legal/forensic medicine. Legal Medicine. 2009; 11:S9–S12.

Saunders, J., Smith, S. The future of copyright: what are the pressures on the present system? In: Cope B., Phillips A., eds. The Future of the Academic Journal. Cambridge, UK: Chandos Publishing, 2014.

Schroeder, R. Pointing users toward citation searching: using Google Scholar and Web of Science. Libraries and the Academy. 2007; 7:243–248.

Schuermans, N., Meeus, B., De Maesschalck, F. Is there a world beyond the Web of Science? Publication practices outside the heartland of academic geography. Area. 2010; 42:417–424.

Seglen, P. O. Why the impact factor of journals should not be used for evaluating research. BMJ. 1997; 314:497.

Shreeves, S. L. The role of repositories in the future of the journal. In: Cope B., Phillips A., eds. The Future of the Academic Journal. Cambridge, UK: Chandos Publishing, 2014.

Siebelt, M., Siebelt, T., Pilot, P., Bloem, R. M., Bhandari, M., et al. Citation analysis of orthopaedic literature: 18 major orthopaedic journals compared for impact factor and SCImago. BMC Musculoskeletal Disorders. 2010; 11:1–7.

Simons, K. The misused impact factor. Science. 2008; 322(5899):165.

Smart, P., Murray, S. The status and future of the African journal. In: Cope B., Phillips A., eds. The Future of the Academic Journal. Cambridge, UK: Chandos Publishing, 2014.

Smith, R. Journal accused of manipulating impact factor. BMJ. 1997; 314:461.

Smith, R. Commentary: the power of the unrelenting impact factor: is it a force for good or harm? International Journal of Epidemiology. 2006; 35:1129–1130.

Sompel, H. van de, Lagoze, C. Interoperability for the discovery, use, and re-use of units of scholarly communication. CTWatch Quarterly. 2007; 3(3). Available from: http://www.ctwatch.org/quarterly/articles/2007/08/interoperability-for-the-discovery-use-and-re-use-of-units-of-scholarly-communication/

Sosteric, M. Interactive peer review: a research note. Electronic Journal of Sociology. 1996. Available from: http://socserv.socsci.mcmaster.ca/EJS/vol002.001/SostericNote.vol002.001.html

Spier, R. The history of the peer-review process. Trends in Biotechnology. 2002; 20:357–358.

Stallman, R. The GNU Project, 2002. Available from: http://www.gnu.org/gnu/thegnuproject.html

Stanley, C. A. When counter narratives meet master narratives in the journal editorial-review process. Educational Researcher. 2007; 36:14–24.

Stewart, C. Whither metrics? Tools for assessing publication impact of academic library practitioners. Journal of Academic Librarianship. 2010; 36:449–453.

Tananbaum, G. Campus-based open-access publishing funds: a practical guide to design and implementation. Washington, DC: Scholarly Publishing & Academic Resources Coalition; 2010.

Tenopir, C., King, D. W. The growth of journals publishing. In: Cope B., Phillips A., eds. The Future of the Academic Journal. Cambridge, UK: Chandos Publishing, 2014.

The Budapest Open Access Initiative, 2002.

Tillery, K. 2012 Study of Subscription Prices for Scholarly Society Journals. Lawrence, KS: Allen Press, 2012.

Todd, P. A., Ladle, R. J. Hidden dangers of a ‘citation culture’. Ethics in Science and Environmental Politics. 2008; 8:13–16.

Togia, A., Tsigilis, N. Impact factor and education journals: a critical examination and analysis. International Journal of Educational Research. 2006; 45:362–379.

Tötösy de Zepetnek, S. The ‘impact factor’ and selected issues of content and technology in humanities scholarship published online. Journal of Scholarly Publishing. 2010; 42:70–78.

Vanclay, J. K. Impact factor: outdated artefact or stepping-stone to journal certification? Scientometrics. 2011; 92:211–238.

Van Noorden, R. Open access: the true cost of science publishing. Nature. 2013; 495(7442):426–429.

Van Orsdel, L. C., Born, K. Periodicals Price Survey 2006: journals in the time of Google. Library Journal. 2006; 131:39–44.

Van Orsdel, L. C., Born, K. Periodicals Price Survey 2008: embracing openness. Library Journal. 133(7), 2008.

Velden, T., Lagoze, C. The value of new scientific communication models for chemistry. White Paper from New Models for Scholarly Communication in Chemistry Workshop, Washington DC, 23–24 October 2008. 2009.

Vitzthum, K., Spallek, M., Mache, S., Quarcoo, D., Scutaru, C., et al. Cruciate ligament: density-equalizing mapping and scientometrics as a measure of the current scientific evaluation. European Journal of Orthopaedic Surgery and Traumatology. 2010; 20:217–224.

Wager, E., Jefferson, T. Shortcomings of peer review in biomedical journals. Learned Publishing. 2001; 14:257–263.

White House Office of Science and Technology Policy. Expanding Public Access to the Results of Federally Funded Research, 2013.

Whitworth, B., Friedman, R. Reinventing academic publishing online. Part I: rigor, relevance and practice. First Monday. 2009; 14. Available from: http://firstmonday.org/ojs/index.php/fm/article/view/2609/2248

Whitworth, B., Friedman, R. Reinventing academic publishing online. Part II: a socio-technical vision. First Monday. 2009; 14. Available from: http://firstmonday.org/ojs/index.php/fm/article/view/2642/2287

Wilbanks, J. Cyberinfrastructure for knowledge sharing. CTWatch Quarterly. 3, 2007.

Wilhite, A. W., Fong, E. A. Coercive citation in academic publishing. Science. 2012; 335:542–543.

Willinsky, J. The Access Principle: The Case for Open Research and Scholarship. Cambridge, MA: MIT Press, 2006.

Willinsky, J. The properties of Locke’s common-wealth of learning. Policy Futures in Education. 2006; 4:348–365.

Willinsky, J., Moorhead, L. How the rise of open access is altering journal publishing. In: Cope B., Phillips A., eds. The Future of the Academic Journal. Cambridge, UK: Chandos Publishing, 2014.

Wooding, S., Grant, J. Assessing research: the researchers’ view. UK: Joint Funding Bodies’ Review of Research Assessment; 2003.

Wu, L., DongFa, X. The future of the academic journal in China. In: Cope B., Phillips A., eds. The Future of the Academic Journal. Cambridge, UK: Chandos Publishing, 2014.
