4

Academic journals in a context of distributed knowledge

Karim J. Gherab Martín and José Luis González Quirós

Abstract:

This chapter gives a general overview of scholarly communication in a context of distributed knowledge. We propose to organize knowledge following a model based on Popper’s World 3, keeping in mind the characteristics and social rules that guide scientific activity and the behaviour of scholars. We believe that digital repositories and scholarly journals can be compatible if efficient technologies of recommendation are put in place. The chapter proposes a scientific publishing model that goes beyond open access, promoting an open and innovative reuse of articles and challenging the ‘Ingelfinger rule’.

Key words

open access

innovative reuse

technologies of recommendation

sociology of science

repositories

Popper’s World 3

‘Ingelfinger rule’

Introduction

We are living in a decade of epistemic disruption in academic communication (Cope and Kalantzis: Chapter 2, this volume). The Internet and open access to digital objects have had a strong impact on various sectors of society, such as the music industry and the mass media industry (Elías, 2011), and are having an impact on the academic industry, of course. In academic communication, the advent of the Internet has been crucial to the expansion of the movement known as open access (OA), which advocates universal and cost-free access to all academic articles – those, at least, that have been financed with public or non-profit funds. Complete and detailed information on OA is available on Peter Suber’s website (http://www.earlham.edu/~peters/).

For a number of years, several OA champions have been in favour of the obligation to make freely available those articles whose results have been backed by public funds. The basic inspiration of this movement is the idea that digital technology permits us somehow to get back the spirit that infused the origins of modern science, a dialogue between scientists without mediation or obstacles. The enormous development of science has required the presence of market entities capable of promoting, storing and distributing the growing mass of academic information, but, together with its undeniable advantages, this system has also created a good number of problems of all kinds, not least of which is its cost.

A number of meetings of OA promoters have led to a series of public statements1 and recommendations that have set the standard of OA initiatives up to now. Thus, two roads were identified2 to reach the dream of full OA: the ‘gold’ road and the ‘green’ road.

The gold road would have the journals themselves digitize their past and present publications, so that the electronic versions would be available for free to anyone wishing to read them. The green road calls for scholars to store – or self-archive – their (usually) peer-reviewed pieces, in digital format, in institutional repositories, as the final step in their research efforts.

There have been (and still are) issues related to self-archiving, such as scholars’ fear of not being given due credit, the violation of copyright laws, the need to make archiving mandatory, and the use of incentives to get scholars to deposit their pieces in institutional repositories. These are important issues to consider when thinking of OA projects involving digital repositories, whether institutional or subject-based.

It is quite understandable that publishers are reluctant to relinquish the subscription business model. As it is, they have managed to set in motion a model which is both efficient and profitable, and they cannot see clear reasons to let go of it, particularly when some of the reasons given by OA advocates may seem somewhat immature. But it is not just a question of economic interest; there are also scholarly interests involved. Current developments represent a very serious challenge to the established order, and it should not be forgotten that this order is the result of many years’ work and experience. In order to keep generating income on the gold road, several commercial journals have turned to the ‘author-payment’ model, in which an author who wishes his/her work to have OA (or, usually, the institution funding his/her work) must pay the journal a certain amount. In July 2012 the UK Government backed a report (the Finch Report3) that recommended supporting the gold road to OA4. The funds to support the transition to OA in the UK would come from the UK science budget, starting from April 2013. Furthermore, the British Government also mandated the use of the Creative Commons ‘Attribution’ licence (CC-BY). The use of this licence means not only ‘gratis’ OA (in the sense of toll-free access) but ‘libre’ OA (in the sense of more freedoms than just ‘gratis’). Or, as Peter Suber puts it:5 ‘gratis removes price barriers alone and libre removes price barriers and permission barriers’. However, some authors who have championed the green road have criticized the Finch Report (Harnad, 2012).

On 22 February 2013, an Obama White House directive6 asked the US funding agencies to develop OA mandates within the following six months:

Scientific research supported by the Federal Government catalyzes innovative breakthroughs that drive our economy. The results of that research become the grist for new insights and are assets for progress in areas such as health, energy, the environment, agriculture, and national security.

Access to digital data sets resulting from federally funded research allows companies to focus resources and efforts on understanding and exploiting discoveries. For example, open weather data underpins the forecasting industry, and making genome sequences publicly available has spawned many biotechnology innovations. In addition, wider availability of peer-reviewed publications and scientific data in digital formats will create innovative economic markets for services related to curation, preservation, analysis, and visualization. Policies that mobilize these publications and data for re-use through preservation and broader public access also maximize the impact and accountability of the Federal research investment. These policies will accelerate scientific breakthroughs and innovation, promote entrepreneurship, and enhance economic growth and job creation.

Regarding self-archiving, publishers warned, on the one hand, that the green road might circulate papers that have not been peer reviewed – a process that is essential in our contemporary understanding of science, even for those who champion the green road (Harnad, 1998). On the other hand, the supporters of self-archiving repositories pointed to other advantages (Hajjem et al., 2005; Harnad and Brody, 2004; Kurtz and Brody, 2006: 49; Lawrence, 2001). Some other OA advocates (Guédon, 2004) even thought that the gold road and the green road are but stages on two paths leading to the same destination, in which journals will end up as a kind of file repository, with a large number of added services.

Institutional and subject-based repositories

A digital or academic repository7 is an online OA archive for collecting, preserving and disseminating intellectual outputs, usually scientific research articles and documents. By definition, archives contain primary sources, such as letters, scientific papers and theses (as well as patents, technical reports and even computer software), whereas libraries preserve secondary literature, such as books, and tertiary sources, such as encyclopaedias. There are essentially two kinds of digital repositories that allow for OA (i.e., cost-free) articles: subject-based repositories (SR) and institutional repositories (IR). In both cases, scholars self-archive their works as a final step in their research projects.

Physicists have developed mechanisms whereby pre-prints are exchanged prior to submission to journals. This is a successful cultural practice that has resulted in the percentage of physics articles rejected by leading journals being much lower than in other scientific fields. This culture has also meant that physics pre-prints look very much like post-prints. This is due to historical factors related to differences in culture and research practices within various disciplines (Till, 2001). For example, prior to the advent of the Internet, the distribution of pre-prints was part of the physics culture, but it was not a customary practice in other disciplines. In a way, the pre-print exchange culture is a continuation of the distant practice of exchanging letters that was so commonplace among scientists prior to the mid-seventeenth century when scientific societies and their associated journals came into being.

Hence, physicists view the SRs as a place to exchange, file and deposit their pre-prints. This is an evolving publishing practice that has now reached mathematics (González-Villa, 2011). More than two decades ago, the proposed model for pre-print exchange was based on email (Harnad, 2000), which was the disruptive technology at that time. Nowadays, websites have many advantages over email. The pre-print is deposited once into the SR and remains visible and accessible online to the entire community. Any addition to it, or modification of it, is made on the same website. In contrast, keeping track of versions can make emailing unduly complicated and chaotic. Also, with email it is always possible inadvertently to leave someone off the mailing list, whereas with SRs there is universal access.
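
To make the contrast concrete, here is a minimal sketch (in Python; the identifiers and field names are hypothetical, not any real repository’s schema) of how an SR can keep every revision of a pre-print visible under a single stable identifier, which is precisely what a mailing list cannot do:

from dataclasses import dataclass, field
from datetime import date

@dataclass
class PreprintVersion:
    number: int    # v1, v2, ...
    posted: date   # date this revision went online
    file_url: str  # where the full text lives

@dataclass
class Preprint:
    identifier: str  # one stable ID for the whole record
    title: str
    authors: list[str]
    versions: list[PreprintVersion] = field(default_factory=list)

    def revise(self, posted: date, file_url: str) -> PreprintVersion:
        """Add a new revision; earlier versions remain visible."""
        version = PreprintVersion(len(self.versions) + 1, posted, file_url)
        self.versions.append(version)
        return version

# One deposit, visible to the whole community; no mailing list to maintain.
p = Preprint("repo:1234.5678", "A new method", ["A. Author"])
p.revise(date(2013, 1, 10), "https://example.org/1234.5678v1.pdf")
p.revise(date(2013, 3, 2), "https://example.org/1234.5678v2.pdf")
print(p.identifier, "has", len(p.versions), "versions")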

The best-known example of an SR was created by physicists and is known as arXiv8. This online pre-print repository contains more than 850,288 papers9 and receives more than 50,000 new submissions every year. It is an arena in which physicists, mathematicians, computer scientists, quantitative biologists and statisticians self-archive and exchange articles. The tremendous success of arXiv has prompted scientists in other fields to join in the practice of exchanging pre-prints in this manner. Examples of other SRs are CogPrints10 and the Nature Publishing Group’s Nature Precedings,11 which contains biology pre-prints.

SRs have two interdependent objectives: 1) to enable the researcher to present and discuss preliminary findings with peers prior to submitting the finalized copy to a journal; and 2) to establish a researcher as the first to discover a scientific finding. In contrast, the publication of an article in a journal constitutes the institutionalization of peer-reviewed results, which is crucial for obtaining recognition and prestige within one’s academic community. The peer-review process continues to be important in the sense that it represents a line of demarcation between official science and the science that is still being done in laboratories and workshops. Publishing in a journal is (or should be) the equivalent of establishing a boundary between ‘what is known’ and ‘what is still unknown but is under investigation’.

Self-archiving OA has been focusing more and more on (and giving greater importance to) the merits of individual articles rather than the journal’s ‘brand’. Many commercial publishers and scientific societies have claimed that this might be damaging to science, and thus have been retaining journal titles for branding reasons: journals add symbolic value (Guédon, 2004: 316), which is closely related to the well-known ‘impact factor’.

Nevertheless, defenders of OA have shown in a great number of case studies12 that OA eprints receive more citations and have greater academic visibility and influence on other researchers in that discipline; it is argued that this is of benefit to science as well as to the researchers themselves.

SRs are gradually shaped by the participants’ interventions and become trading zones when researchers exchange files. For instance, arXiv is a daily meeting place for the exchange of knowledge in the form of ideas, proposals and empirical data, with different sections for different kinds of subjects. In a way, SRs are ‘markets’ in which scholars, following specific self-archiving protocols, deposit the products they want to show to their peers. Therefore, SRs are the proper place to carry out the research function of scientific communication.

IRs have a very different mission: they are online archives set up by academic institutions such as universities and research councils in order to meet four main goals:

1. To make the research carried out by scholars of an institution visible through OA.

2. To group all research output in a single location.

3. To store and preserve other documents like theses, lectures and technical reports.

4. To provide quantitative studies (scientometrics) for funding institutions.

Thus, IRs have (or should have) an institutional role that mirrors the institutional function of academic journals. The articles in an IR are often the same articles that have been reviewed and accepted by academic journals (post-prints). The recent success of IRs can be attributed to initiatives within the OA movement in response to the huge increase in price of academic journals belonging to commercial entities – the so-called ‘serials pricing crisis’, whose historical reconstruction can be found in Guédon (2001). For our purposes, we will focus on SRs and, more specifically, pre-prints.

From linguistic and disciplinary monopoly to the pluralism of languages and cultures

Any way you look at it, the system of prestigious journals generates an artificial scarcity with undesirable effects. To demand a careful selection of what is published is not the same thing as maintaining that only the pieces that can be fitted into a finite number of prestigious journals deserve to be published. A hierarchical ranking of journals has a number of advantages, as it gives them financial autonomy and stimulates competitiveness, but it seems undeniable that in a context of fast-moving research the preservation of such a system will force some works into exile or into limbo, or will force into low-impact journals pieces that, at a time of less abundance of originals, would undoubtedly have deserved a better fate. The Malthusian character of the system would be defensible if there were no alternatives, but it becomes absurd when there are other ways to publicize what does not fit into a system with obvious physical and functional limitations.

The existence of much more open publication repositories increases the possibilities of research and multiplies the significance of science. It could be argued that the proliferation of places in which science can be published will increase all kinds of risks: fraud, publication of irrelevant texts, and so on. It is undoubtedly true that such risks exist, and may increase. But it is precisely an increase in risk, not a creation of new risks; for, as we know, the traditional system of academic publication, for all its virtues, is not free of these problems. We must trust that the various communities and institutions that develop the new digital repositories and journals will put adequate review and control systems in place to compensate for the increase in risk mentioned above. But above all, we must be confident that, by taking advantage of digital systems of search and recognition, the visibility of new contributions will be enormously enlarged.

Not only is this the case for prestigious journals in English relating to well-established research fields, but such moves might also have really decisive repercussions in other research sectors, such as journals in other languages (as significant as Russian, French, Spanish, Chinese or German), in less established fields of research, and in the social sciences, humanities and interdisciplinary studies.

Many of the journals in which the work carried out in these sectors gets published are virtually inaccessible periodicals with a limited, local impact, and so cannot become a part of the ‘Great Conversation’ (to use an expression similar to Oakeshott’s, so often mentioned by Rorty). The new digital scene cannot be a ‘balm of Fierabras’ (the magic ointment often mentioned in Don Quixote which supposedly could heal any wound, similar to the ‘snake oil’ of Americans), but it is clear that it will offer very interesting possibilities to publications that, as things stand today, are stillborn in the presses (as Hume thought had happened with the first edition of his Treatise).

For this kind of work, the system of digital publication may have substantial advantages: its abundance, accessibility and immediacy will significantly increase the impact of each publication. To give but one example, Mendel’s writings probably would not have had to wait for years to be discovered by a curious biologist if they had appeared in a digital medium.

Granted, the abundance of voices makes for great noise. But that noise is already with us and does not seem likely to stop; nor does it seem reasonable to try to stop it. What the growth and maturity of new digital publication systems will provide is a new means to handle that noise, a new way to listen. Good indexes, by names and by subjects, permit us to find easily what may interest us in that new ocean of knowledge. And new systems of reading records, critical notes, experts’ opinions and so on – such as social networks (e.g., Twitter) – inform us about what may be of interest for us in a much richer and more pluralistic way than the traditional system does.
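
A minimal sketch of the kind of index by names and by subjects just mentioned, assuming hypothetical records with author and subject fields, shows how simple the underlying mechanism can be:

from collections import defaultdict

# Hypothetical repository records, invented for the example.
papers = [
    {"id": "p1", "authors": ["Mendel"], "subjects": ["genetics", "peas"]},
    {"id": "p2", "authors": ["Galison"], "subjects": ["history of physics"]},
    {"id": "p3", "authors": ["Hacking"], "subjects": ["philosophy", "genetics"]},
]

# Build inverted indexes: name -> papers, subject -> papers.
by_author, by_subject = defaultdict(set), defaultdict(set)
for paper in papers:
    for name in paper["authors"]:
        by_author[name].add(paper["id"])
    for subject in paper["subjects"]:
        by_subject[subject].add(paper["id"])

print(by_subject["genetics"])  # every paper tagged with the subject: p1 and p3
print(by_author["Mendel"])     # every paper by the name: p1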

As could be expected, the new technological systems, the digital universe, will give us a portrait of the research world much more like its contemporary reality than that suggested by the traditional system of great journals. Academic journals have reached the limits of the printed world, which anyway will always end up as an encyclopaedic image of knowledge, a systematic portrait. The ideal image of that kind of representation is the interpretation of science offered by positivism, the Archimedean conception of science – tiered, hierarchical and reductive.

Such an idea of science may be defined by Sellars’ statement (1963: v, ix): ‘Science is the measure of all things, of what is that it is, and of what is not that is not.’ This way of describing science requires that its written presentation be a figure of perfect geometry, in an intelligible, harmonic space with no room for error or dispute: science measures and decides, and there is no more to be said. There is some sociological translation of this in a hierarchical academy, in which honours are given through equally objective and precise methods, such as indexes of impact, awards and honours of all kinds, etc. It is of course an exaggeration, to say the least, and anyway it is a portrait that may have been a faithful representation of science in the early years of last century, but certainly has nothing in common with our current world. Galison and Hacking, among others, have called attention to these kinds of new developments, this new diversity of science. Galison (1997: 781) wrote:

I will argue this: science is disunified, and – against our first intuitions – it is precisely the disunification of science that brings strength and stability. This argument stands in opposition to the tenets of two well-established philosophical movements: the logical positivists of the 1920s and 1930s, who argued that unification underlies the coherence and stability of the sciences, and the antipositivists of the 1950s and 1960s, who contended that disunification implies instability.

On the other hand, as Hacking (1983: 218) has argued, in our contemporary world:

The ideal of science does not have to lead towards the majestic unity that the positivists dreamed of. It leads instead, as our very image of life does, to a plethora of research programmes and scientific projects, all competing amongst themselves. It is the image that science offers when it is observed most closely, with greatest attention to its development, as opposed to focusing on an idealistic image of excellence and purity.

This ebullient image of current scientific activity, and, even more, of contemporary discussion at all levels of rational thinking, does not fit anymore within the narrow limits of a printed universe. Digital technology may offer us a much truer image of the very complex reality of contemporary research, thinking and debate. Today, it may seem as if we are entering chaos, and the preservation of what orderly spaces we have inherited may appear sensible: but there are no real grounds for fear. History shows us that technological revolutions always appear as threats but end up settling as opportunities. It falls to institutions, scholars and companies to perfect the instruments needed to create the necessary order, and to take advantage of all the possibilities that the new publication systems offer, to allow an unprecedented enlarging of our perspectives, so that formless noise will permit us to hear the polyphony of new forms of knowledge – new science. It will be a new and powerful melody that will encompass contributions coming from places seemingly very distant from the centres of debate, publications in all languages, texts conceived in new and suggestive cultures, in different disciplinary matrices, which at the moment we cannot even imagine.

Open access to articles published in science and technology is crucial to optimizing distributed knowledge. An OA infrastructure may result in a more cultured citizenry, which, in turn, enhances the distributed contribution to knowledge from these very citizens, regardless of their geographic location. We are speaking, then, not only of ubiquitous learning (Cope and Kalantzis, 2009) but also of ubiquitous contributions ‘in the construction of the collective intellect’ (Levy, 1997: 9). There is no reason why this ubiquity should refer to space alone – it may also include two other variables: time and language. The digitization of ancient works and works by authors now deceased may unearth forgotten knowledges (such as Mendel’s laws) that, when merged with current knowledge, may spawn innovations (González Quirós and Gherab Martín, 2008). Rapid improvements in machine translation and the help of communities of translating volunteers, such as those who help Wikipedia to grow, will show that the linguistic barriers separating many knowledge communities can be broken down in no time at all. Knowledge, then, is distributed across time and languages as well as geography. Translation studies and some changes in unwritten scholarly publishing norms, such as the so-called ‘Ingelfinger rule’ (Relman, 1981), can make local knowledges widely available – local ideas may have global impact (Gherab Martín and Unceta, 2013). As it stands now, a translation of an original work published by the same author in another journal and another language is considered a derivative work, and so it implies re-publication, which is not allowed by the ‘Ingelfinger rule’. Thus, openness and accessibility should mean access not only to texts but also to ideas. Texts are no more than the materialization of ideas. From the ‘distributed knowledge’ perspective, the publication of a translated text is not re-publication but, in a sense, an original work in itself, because it is devoted to communicating the very idea to a new audience. It is perhaps not new for the author, but it is new for the audience that could not read the original because of the language barrier.

Although the history of human discoveries shows an amazing variety of situations and resources around the creation of any fruitful knowledge, the tendency to systematize what we know with certainty invites us to produce a logical reconstruction of the history of discoveries, which very often looks nothing like what really happened. It is not easy to exclude the notions of coincidence or chance when looking at the course of human progress. It is therefore very likely that there have been at least as many lost opportunities as chance successes; many findings that stayed beyond the reach of our hands due to sheer chance – unhappy chance, in these cases.

This kind of speculation, by the way, goes against the impression that so many and such different thinkers have had at times: that we already know practically all there is to know. Historians tell us that Aristotle was convinced of it. Closer to us, a century ago physicists even thought that the pretension of finding new fields could damage the dignity of science. Not long ago, for very different reasons, it became fashionable to talk about the ‘end of science’. In any case, we carry so much on our shoulders that in many fields we move with difficulty. Digital technology can help us to carry that burden.

In many respects, science is a system, but for its creators it is, and should be, a nest of problems. Only as acquired knowledge can science be logically organized. This is a very important task, though perhaps a secondary one. The movement of time demands that we keep improving the look of what may be called normal science, established knowledge. Be that as it may, it is a very restrictive view of that conquered territory to present it as ground on which we may travel easily, with the help of an organized set of sufficient, orderly and consistent plans. No matter how orderly, acquired science is a ground that can never be travelled without surprises. Like a living, active city, science retraces its steps time and again, buries its ruins and explores new avenues, institutions and constructions. And all that, added to the conquest of what is yet unknown, is highly problematic and demands arduous work. A static description of science, organized as an ideally perfect system, far from depicting a growing city, might present the image of an abandoned graveyard. Good science is always an invitation to reconsider problems, to state them anew, to think for ourselves. As the physicist Richard Feynman liked to say, in science ‘what is most interesting is that which does not fit, the part that does not work as expected’.

This kind of mismatch between what is expected and what actually happens occurs mainly when scientists work – as they do most of the time – in hitherto unknown fields, which they explore with the instruments discovered in other pursuits. Still, this type of mismatch also happens when one goes back over what an author is supposed to have said – or over what is taken for granted as common doctrine – and attempts to trace those ideas to their source. Thus, it is essential to have easy access to the relevant texts, in order to be able to compare what is written in them with what it is said they say. Such a return to the sources is a very interesting experience, especially important in the disciplines of the spirit, to use the old German expression.

Of course, digital technology can improve our access to any kind of sources, with hardly any restrictions, and it will permit us to enjoy, in a very short time, an immensely rich variety of sources that, though usually forgotten, are full of interest and opportunity for many studies. In addition, the new technological environment may offer scholars an approach emphasizing the problems more than the system, revision more than confirmation, diversification more than methodological and disciplinary orthodoxy. A digital environment furthers a number of hybridizing processes which now and then may produce some monstrosity, but which surely will make for the existence of new viable and exciting variations.

Work with multidisciplinary sources, always fertile and innovative, will become a possibility much more real than it has been up to now. Digital repositories are sure to offer many opportunities to renew our thinking, and will contribute to strengthening the tendency to learn from what others are doing, to listen to what others are saying, and to attempt hitherto unthinkable partnerships. The tree of knowledge grows and becomes more complex. Thanks to the new communications systems we can accelerate the processes of multidisciplinary diversification and enrichment that are already beginning; we can more easily break through ideological bounds, language, space and time barriers in order to get closer to the reality of a new Tower of Babel, where at long last we will be able to understand each other.

Strange as it may seem, objections to the presumed advantages of storing knowledge in new ways are almost as old as writing itself. Today, as we face an explosive expansion of available information – as well as the breakdown of the rule system developed during the nineteenth and twentieth centuries in order to rank and organize publications, and to index them for their preservation and use – the same objections appear again from those who try to discredit the possibilities for innovation that the use of digital technologies will bring. Such elitist thinking first opposed writing, and then printing. It echoes the madness of Don Quixote: Unamuno has his Quixote saying, ‘How true, Sancho, is what you say: that through reading and writing madness came into this world’ (Vida de Don Quijote y Sancho I, XI). And Borges (1979: 71) has written: ‘the printing press … is one of the greatest evils for man, because it has tended to multiply unnecessary texts’; testimonies of a well-known reluctance which now reappears and tries to renew its arguments in the face of the new technological possibilities.

The Popperian model of knowledge

Trying to avoid the withering of critical spirit that the excessive self-complacency of neo-positivism might bring, the greatest wisdom of Karl Popper’s post-positivism lies in affirming that together with the objectiveness of knowledge there must always be the tentativeness of conjecture, and that from a logical standpoint scientific activity is best understood as an attempt to disprove ill-established beliefs, rather than as an attempt – inevitably very weak – to confirm more or less eternal, indisputable (presumed) truths. In an ideal (that is, simplified) presentation of science, Popper underlined that our efforts to understand reality are, as to their epistemic value, mere conjectures, and that the canonical scientific spirit should seek not to verify them – a goal we must consider unreachable in principle, since there is always something new, a beyond – but to test them through their courageous exposure to what this Austrian philosopher called a ‘falsification’. This means that conjectures should be contrasted through experimentation, analyses, debates, and so on, that is, put through an ideally rational competition with alternative possibilities, taking care that opportunistic arguments or language traps do not undermine or undervalue circumstances and details that may be unfavourable. This is not to say, obviously, that the real activity of researchers should always follow the Popperian model: the reality of research is much richer, more complex and diverse than any programme, no matter how reasonable.

This presentation of science as a model inviting heterodoxy, or at least not forbidding it, must be completed with another Popperian idea that calls for the value of objectivity and introduces equilibrium in the logic of the research system. This idea consists in assuming that the whole universe of conjectures, propositions, empirical data, arguments, disprovals, and so on, forms an ideal whole that nobody can encompass totally, due both to the immensity of the sub-wholes that form it and to our own intellectual limitations. Still, that whole is an ideal frame of reference that allows us to place each document in a specific place. That place is indeed very poorly described if we refer to it through a merely thematic analysis. The fact is that any document has a plurality of meanings, it may be read in many ways, but, ideally, all of them may somehow find their place in a logical universe such as Popper’s World 3.

This Popperian model describes very well the logical possibilities of the links between texts that digital technology permits. Any text is a specific theoretical choice among the myriad of existing possibilities, in order to say something consistent about some specific assumptions. There is never only one form to express that meaning, but, as usually happens in research work, a network of relevant opinions (a well-catalogued network in a digital environment) will allow us to place the decisive points through the convergence of readers’ judgements, critical reflections and text quotations. That logical model will be captured digitally in a series of tags, which may be grouped as the classic descriptions of the printed publication world have been grouped so far. Only, these tags will be a lot less conventional, and a lot richer, than traditional cataloguing tags (Gherab Martín, 2011a; González Quirós and Gherab Martín, 2008).
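
As a rough sketch of how such tags might be aggregated (the tag names, data and threshold here are invented for the example), counting how often independent readers attach the same tag to a text gives a crude measure of the convergence of readers’ judgements mentioned above:

from collections import Counter

# Tags attached to one digital text by different readers (hypothetical data).
reader_tags = [
    ["differential-equations", "astrophysics"],
    ["differential-equations", "molecular-biology"],
    ["differential-equations", "astrophysics", "numerical-methods"],
]

counts = Counter(tag for tags in reader_tags for tag in tags)

# Keep only the tags on which at least half of the readers converge.
threshold = len(reader_tags) / 2
consensus = [tag for tag, n in counts.items() if n >= threshold]
print(consensus)  # ['differential-equations', 'astrophysics']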

All comments about an interesting text may be used to tag its digital form, and will allow us to read any text in a much richer and more enlightening context. The strong numerical identity of a digital text makes it possible to attach to it any number of texts to clarify and qualify it, without confusing readers. Any digital text may aspire to be a critical edition. Readers, colleagues, critics, scholars of all kinds, and librarians prepared to understand texts will be able – and indeed required – to produce new alternative descriptions, and to refine the contours of their cataloguing as they see fit.
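
The ‘strong numerical identity’ of a digital text can be illustrated under the assumption that a content hash serves as its identifier (a sketch, not any repository’s actual scheme): the same bytes always yield the same identifier, and commentaries are attached to that identifier rather than to any particular copy of the file:

import hashlib
from collections import defaultdict

def text_id(text: str) -> str:
    """A stable numerical identity: the SHA-256 digest of the text's bytes."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

annotations = defaultdict(list)

article = "On a new method for solving a class of differential equations..."
doc_id = text_id(article)

# Readers, critics and librarians attach notes to the identifier, not the file.
annotations[doc_id].append("Clarifies the boundary conditions in section 2.")
annotations[doc_id].append("See also the earlier result it implicitly relies on.")

# The same bytes always resolve to the same critical apparatus.
assert text_id(article) == doc_id
print(doc_id[:12], "->", len(annotations[doc_id]), "notes")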

The Popperian model of World 3 offers us an epistemological frame apt for any kind of discourse, and in the end will lead us to new forms of reading and writing science. Getting there is not a matter of sheer technology, since practically all the necessary technology is already available; what is needed is to improve our institutions and to learn to manage the new systems with all the guarantees that may be necessary. When all of this starts to become a neatly defined reality, critical and sufficient readings of any text will be possible in easier and more complete ways, in addition to making access to any work much simpler and cheaper, as is already happening. And these advantages will allow researchers to concentrate on what really matters: contributing something new and relevant.

Our Popperian model of World 3 may be interpreted along the lines of these forecasts. But a touch both epistemological and pragmatic must be added. Journals must become a crossroads of disciplines, a place where ideas and experimental data converge. Each idea and each relevant set of data will have a specific place in the Popperian jungle, so that future historians of science will be able to evaluate minutely the extent to which a new set of experimental data was the cause of a change in theory, or a larger mutation, or even what we often term, somewhat metaphorically, a revolution. Future historians may also be able to evaluate more precisely any contrary influence. A growing interaction between online texts, data and simulations will show how science works. The interaction between theoretical changes and changes due to the ‘independent life’ of experiments will be easier to see if both are combined on the Internet under the umbrella of an adequate epistemological model. Quite independently of the kind of publications the future may bring, any epistemological model should be open, and all publications should find their inspiration in that ever-adapting model that the progress of the various disciplines will deliver.

Journals as innovation in assembly

How could the Popperian model, whose physical representation might reasonably be exercised by digital repositories, be made compatible with preserving the advantages that scholarly journals offer? Our proposal aims to make both perspectives compatible, and advocates using journals to the limit, in order to get from them the best they can offer. For that, we propose that any given piece may be reprinted by as many journals as deem it necessary. Let us suppose that a digital repository publishes a mathematician’s article presenting a new method of solving a differential equation that happens to be most useful for approaching problems both in astrophysics and in molecular biology. In this case, the piece should be published simultaneously in interested mathematics journals and in any astrophysics and molecular biology journals that detect the novelty. This can be done in two ways:

1. The first is to have the journals themselves detect new developments that might be of interest to their readers. For that, they would have to invest in efficient search and retrieval technologies, as well as trust the action of their network of (scholarly) experts, who in a way would act as hunt-beaters, looking for potentially interesting pieces to publish. In short, journals should invest both in automatic tools (technological infrastructure) and in an effective social network of scholars (human infrastructure) who would take over the current roles of referees (see the sketch after this list).

2. The second is to have the author submit his/her piece directly to the journal, provided the author knows the potential impact of his/her work on other specific disciplines – not very likely, but still possible. In this case, traditional referees would act in their usual way, with the difference that other referees might be doing the same work for other competing journals. Far from being a disadvantage, this would make for competition and some pressure to make the right choice.
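
As a rough illustration of the ‘automatic tools’ mentioned in the first option, the following sketch scores new repository abstracts against a journal’s scope description using a plain bag-of-words cosine similarity (the texts and the cut-off are invented for the example; a real system would rely on far richer retrieval and recommendation technology):

import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Cosine similarity between two bags of lower-cased words."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(n * n for n in va.values())) * \
           math.sqrt(sum(n * n for n in vb.values()))
    return dot / norm if norm else 0.0

journal_scope = "numerical methods for differential equations in astrophysics"

new_preprints = {
    "repo:0001": "a new method to solve a class of differential equations",
    "repo:0002": "field notes on the taxonomy of alpine mosses",
}

# Flag pre-prints whose similarity to the journal's scope exceeds a cut-off,
# passing them on to the editors' human network for assessment.
for pid, abstract in new_preprints.items():
    score = cosine(journal_scope, abstract)
    if score > 0.2:
        print(f"{pid}: candidate for review (score {score:.2f})")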

The aim is to transform the way in which journals work, so that picking pieces would no longer be their main task – sooner or later the repositories would do this reasonably well – but rather presenting integrating discourses focused on a problem to be solved. As opposed to an amalgam of pieces, a discourse has a specific aim. It presents a story that is plural in its examples, sound in its arguments and consistent in its ordering. The order of the pieces and their mutual interdependence will have clear reasons, and the success of the discourse will depend on the coherence of its contents. In other words, the aim of journals must be to decrease the entropy generated by digital repositories (whether institutional or disciplinary), reducing the noise to the point where the researcher can ‘hear’ the essence of the message: a coherent, well-expressed and orderly discourse, with its pros and cons (if there are any), seeking to privilege knowledge above the mere selection of information. It is a formidable challenge, but the weapons offered us by digital technology are also admirable.

In such a way, since several journals may offer the same pieces, the researcher will look in them for the underlying discourse, the unified discourse that the editorial board has prepared for him/her. That is, the reader will look for a ‘photograph’ of the present state of the art on some specific problem related to his/her discipline. And there is no doubt that the best strategy to get the best portrait of a specific branch of science is to have the best group of experts – a reliable editorial board.

Scholars will reward journals whose sound editorial boards gather experimental data, graphs, articles and comments related to the solution of common problems into optimal discourses on a specific issue. By taking them to the limit of their possibilities, these journals will give the best of themselves in this digital era, an era we have barely entered.

The value of the pieces published in journals, then, will lie not just in the work of filtering and selecting done by peers, but in the relations and the correct ordering that the editors may bring to them. Any electronic journal will be able to use and reuse all pieces, or critiques of them, as many times as it wants if this benefits the discourse, since – unlike with printed journals – in the digital world space is not a limitation. Of course, peer review of each piece can be done by the traditional method, or it can be opened to new and more democratic proposals with the help of computing tools. Still, it will be the editorial board’s responsibility to present an epistemological construct in agreement with objectivity and the current state of the question for the different chapters of normal science.

This reuse of scientific pieces by journals is what we call the recycling industry, or secondary market. This is a method that Tim O’Reilly, who popularized the expression ‘Web 2.0’, called ‘innovation in assembly’, by analogy with other industries in which value had shifted to the integration of components. Such is the case with older initiatives such as PC making, or more recently with open source. O’Reilly explained ‘innovation in assembly’ as follows:

When commodity components are abundant, you can create value simply by assembling them in novel or effective ways. Much as the PC revolution provided many opportunities for innovation in assembly of commodity hardware, with companies like Dell making a science out of such assembly, thereby defeating companies whose business model required innovation in product development, we believe that Web 2.0 will provide opportunities for companies to beat the competition by getting better at harnessing and integrating services provided by others.

(O’Reilly, 2005)

Just as O’Reilly saw in Web 2.0 a promising future for companies, we believe that our proposal opens the door to a new way of improving the contents of academic journals, a new scientific communication which, to borrow O’Reilly’s felicitous expression, will lead us to Science 2.0. Undoubtedly, the resemblance to his arguments is not just terminological; it also arises from the fact that our idea of Science 2.0 shares several of the characteristics that are driving to success many projects faithful to the principles of Web 2.0: accessibility, openness to participation, immediacy, innovation in assembly, competitiveness, social networks, technological infrastructure, recommendation technologies, and even the notion of discourse as a constantly evolving entity, a kind of beta version that supposedly gets ever closer to the desired scientific objectivity.

Let us now look at three examples of successful applications of content reuse, where content can refer to software, data, articles, and so on. We will look briefly at the first example before focusing in more detail on the second and third. The first example worth mentioning is Amazon, the famous online bookstore. As in the case of Barnes & Noble,13 the original Amazon database came from the ISBN registry provider R.R. Bowker. But Amazon continued to improve its data and increase the value of its content by adding other complementary information such as cover images, tables of contents, indexes and sample materials. In order to provide even more value, Amazon encouraged its users to contribute comments and reviews. In this way, and after several years of making improvements, Amazon has overtaken Bowker and has become the reference for many scholars and librarians when they consult bibliographical data.

The second is a well-known example: the FLOSS industry.14 Here, the reused content is software. In this case, depending on the type of licence, developers who use a free software application are often obliged to keep the software they construct from it free as well. This simple procedure, made possible by the GNU General Public Licence – a particular type of copyleft licence – has resulted in innumerable innovations and is threatening the dominion of the large, multinational, proprietary software corporations. Notice that, in English, the word ‘free’ has two meanings – ‘free’ as in ‘freedom’ and ‘free’ as in ‘gratis’ – and this has often led to misunderstandings with regard to the free software philosophy. The FLOSS community is not opposed to the marketing of software or to economic benefit, as long as free access to the software is preserved. For example, one business model might be to develop free software components and then reap the benefits by adapting these components to the client’s needs or installing them on the client’s servers and carrying out routine maintenance. Another might be to accept donations. In any case, open access (i.e., public access) to the software is the critical factor – not that it be no-cost software. This is why the Spanish word ‘libre’ was included in the acronym FLOSS – to emphasize that ‘free’ means ‘freedom’ rather than ‘no-cost’. As we have seen above, the word (and the concept of) ‘libre’ has been borrowed by several OA evangelists.

The FLOSS community was divided into two groups which had exactly the same objective but for different reasons. On one side were those, led by Richard M. Stallman, who took an ethical stance on the issue, believing that the emphasis should be on the freedom to reutilize software; on the other side were those who inclined towards a pragmatic position, seeking only to promote the development and utilization of open source software because its innovative dynamism was of technological and economic benefit to society’s industrial framework as well as to society as a whole. This is why the latter group decided to substitute the term ‘open source software’ for ‘free software’.

The concept of freedom of reuse sits well with the idea that we want to clarify here – that the open reuse of content (or components or parts of a whole) and the freedom to combine it in various creative ways with other open content (or components or parts of another whole) may lead to new and useful products and services that are appealing to users. It seems reasonable to expect, of course, that a free policy – in the sense of ‘no cost’ – would enhance the practice of reuse. As we will highlight later, this is the case in science.

The third example concerns Public Sector Information (PSI). Public entities are usually the largest producers of information in Europe, and European governments gain income from fees for commercial licences that allow private investors to access and reuse this information. The goal of this licensing-based model is to recoup as soon as possible some of the investment of public funds. A study commissioned by the European Commission some years ago15 showed that this is not the best way to increase the return on investment, however, because these charges present artificial barriers to the private sector’s creation and development of value-added services and products for consumers. The removal of these barriers to the access and reuse of PSI yields higher taxation and employment benefits because of the higher volumes of commercial activity. By contrast, the US Government scenario has been summarized as ‘a strong freedom of information law, no government copyright, fees limited to recouping the cost of dissemination, and no restrictions on reuse’ (Weiss and Backlund, 1997: 307). For further details, see Gherab Martín (2011b).

The private sector finds ways of exploiting PSI for commercial gain by delivering products and services that benefit their consumers:

■ by supporting the original mandate of public sector institutions but doing so more cost-effectively and more efficiently than the public sector itself;

■ through aggregating and linking raw information from diverse sources into one location;

■ by creating innovative services, processes and products such as indexes, catalogues and metadata;

■ by adapting information for each specific academic field or commercial sector for a variety of purposes; for instance, by using analytical data software;

■ by delivering information through new channels;

■ by displaying information in creative and attractive ways such as viewer-friendly presentations, graphics, animations, simulations, interactive interfaces, and so on; and

■ by merging this information with other sectors’ services and products.

The sociology of science teaches us that the values of science, and the goods traded by scientists, are essentially different from those of other sectors. The good that scientists trade in is nothing other than the search for truth, and their currency is the articles they publish. The ethos of science, as pointed out by Robert K. Merton (1973: 270), is composed of communalism, universalism, disinterestedness and organized scepticism (CUDOS). As a by-product of the communalism16 and disinterestedness17 norms, scientists are not concerned about monetary compensation when they intend to publish an article in a respected journal. On the contrary, they often have to pay – and this would be even more true under the gold road’s author-pays model. The goal of scientists is to obtain the greatest possible impact by being cited the greatest number of times by others. Their professional prestige and, in turn, their income, power and influence depend on this and, in order to achieve their goal, they are willing to surrender their articles at no cost, with the sole condition of being cited. Therefore, if their articles are published in various journals and monographs they will not raise any objection. On the contrary, every time one of their articles is published, their chances of being read and subsequently cited increase.

Different journals will be able to choose identical articles, if they so desire, since their added value will reside in showing the relationships between them. In other words, there will be no exclusivity contracts with regard to these articles. There will be Creative Commons licences – a kind of copyleft instead of copyright.18 Electronic journals could reuse the articles as many times as they wish, if this is of benefit to the discussion they publish. Peer review of each article or commentary could continue as it has until now or could open up to new and more democratic formulas, but the editorial board would have the additional responsibility of constructing an epistemological building with those articles.

Our proposal challenges the ‘Ingelfinger rule’ – a policy promulgated in 1969 by the editor of the New England Journal of Medicine, Franz J. Ingelfinger (1969), to protect the journal from publishing material that had already been published and had thus lost its originality. As Lawrence K. Altman (1996b: 1459) has pointed out, however, ‘many people overlook the fact that Ingelfinger’s economic motivation for imposing the rule was, as he said, a “selfish” concern for protecting the copyright’. Altman (1996a, 1996b) has shown that, far from being epistemically motivated, Ingelfinger’s primary objective was commercial in nature – namely, to keep the mass media and other journals from publishing articles that the authors wanted to publish first in The New England Journal of Medicine. As a result, under his mandate, subscriptions to the journal doubled between 1969 and 1977 and had almost tripled by 1996.

Conclusion

Given the rise of IRs and SRs, we believe that the precise role of journals will be to eliminate the background noise, to decrease entropy. Focusing on problems instead of disciplines, the mission of academic journals will be to watch over the presentation of an integrating discourse aimed at solving a specific, concrete problem, if necessary including in their pages (printed or, preferably, electronic) articles, graphs and animations from various disciplines. To put it briefly, journals should publish a coherently ordered selection of the state of the art on a given problem.

Continuing with the mercantile analogy, the repositories would be primary markets, and journals would play a more sophisticated role by selecting from that primary market the products best adapted to the deeper demands of their readers: a more demanding, competitive market, expert in its field. The journals would sell goods manufactured to certain specifications, and to a high level of quality. Paradoxical as it may seem, they would at once address the general public and be more selective. And they would have great prescriptive value, since a primary market that is not integrated into the value chain would tend to be sterile and disappear.

In this way, the delay in the dissemination of results among the experts could be avoided, although the presentation of those results to the global village would surely fall to the great journals, which therefore would still have great political value, because they would still manage the information able to attract the attention of large sectors of public opinion and, as a consequence, of businessmen and politicians. But the difference from what happens today would still be enormous. The risk of ignoring really valuable work would be greatly reduced, as would the temptation to include mere ‘big names’. Journals would stake their prestige on almost every issue. Literally anyone could enter that secondary market, because the raw material is abundant and cheap, and there would be great competition between older, well-known prescriptors and new ones. If this set of changes were to come to pass, we would undoubtedly witness a real institutional mutation, a new defeat of the mandarins, made possible by the powerful growth of basic science and gigantic progress in information distribution systems. Science would be democratized from below, and would become more international and more competitive. The significance and visibility of research would be enhanced, and its social influence would expand without jeopardizing the reliability and honesty of its work.

References

Altman, L. K. The Ingelfinger rule, embargoes, and journal peer review – Part 1. The Lancet. 1996; 347(9012):1382–1386.

Altman, L. K. The Ingelfinger rule, embargoes, and journal peer review – Part 2. The Lancet. 1996; 347(9013):1459–1463.

Bailey, C. W., Jr. What is open access? In: Jacobs N., ed. Open Access: Key Strategic, Technical, and Economic Aspects. Oxford: Chandos Publishing; 2006:13–26.

Borges, J. L. Utopía de un hombre que está cansado. In: El Libro de Arena. Madrid: Alianza Editorial; 1979:69–75.

Cope B., Kalantzis M., eds. Ubiquitous Learning. Champaign, IL: University of Illinois Press, 2009.

Cope, B., Kalantzis, M. Changing knowledge ecologies and the transformation of the scholarly journal. In: Cope B., Phillips A., eds. The Future of the Academic Journal. Oxford: Chandos Publishing; 2014:9–83.

Elías, C. Emergent journalism and mass media paradigms in the digital society. In: Kalantzis-Cope P., Gherab Martín K., eds. Emerging Digital Spaces in Contemporary Society. Basingstoke: Palgrave-Macmillan; 2011:37–49.

Galison, P. Image and Logic: A Material Culture of Microphysics. Chicago, IL: The University of Chicago Press, 1997.

Gherab Martín, K. Digital repositories, folksonomies, and inter-disciplinary research: new social epistemology tools. In: Hesse-Biber S.N., ed. The Handbook of Emergent Technologies in Social Research. New York: Oxford University Press; 2011:231–254.

Gherab Martín, K. Toward a Science 2.0 based on technologies of recommendation, innovation, and reuse. In: Kalantzis-Cope P., Gherab Martín K., eds. Emerging Digital Spaces in Contemporary Society: Properties of Technology. New York: Palgrave-Macmillan; 2011:181–194.

Gherab Martín, K., Unceta, A. Open innovation and distributed knowledge: an analysis of their characteristics and prosumers’ motives. Knowledge Management: An International Journal. 2013; 12(1):57–69.

González Quirós, J. L., Gherab Martín, K. The New Temple of Knowledge: Towards a Universal Digital Library. Champaign, IL: Common Ground Publishing, 2008.

González-Villa. Evolving publishing practices in mathematics: Wiles, Perelman, and arXiv. In: Kalantzis-Cope P., Gherab Martín K., eds. Emerging Digital Spaces in Contemporary Society: Properties of Technology. New York: Palgrave-Macmillan; 2011:201–206.

Guédon, J. C., In Oldenburg’s long shadow: librarians, research scientists, publishers, and the control of scientific publishing, 2001. Available from: http://www.arl.org/resources/pubs/mmproceedings/138guedon.shtml

Guédon, J. C., The ‘green’ and ‘gold’ roads to open access: the case for mixing and matching. Serials Review. 2004; 30(4):315–328, doi: 10.1016/j.serrev.2004.09.005.

Hacking, I. Representing and Intervening: Introductory Topics in the Philosophy of Natural Science. Cambridge: Cambridge University Press, 1983.

Hajjem, C., Harnad, S., Gingras, Y., Ten-year cross-disciplinary comparison of the growth of open access and how it increases research citation impact. IEEE Data Engineering Bulletin. 2005; 28(4):39–47 Available from: http://eprints.ecs.soton.ac.uk/11688/

Harnad, S. Implementing peer review on the Net: scientific quality control in scholarly electronic journals. In: Peek R., Newby G., eds. Scholarly Publishing: The Electronic Frontier. Cambridge, MA: MIT Press; 1996:103–118. Available from: http://eprints.ecs.soton.ac.uk/2900/

Harnad, S., The invisible hand of peer review. Nature 1998; (5 November). Available from: http://www.nature.com/nature/webmatters/invisible/invisible.html#stevan

Harnad, S. The future of scholarly skywriting. In: Scammell A., ed. I in the Sky: Visions of the Information Future. London: Aslib; 2000. Available from: http://users.ecs.soton.ac.uk/harnad/Papers/Harnad/harnad99.aslib.html

Harnad, S. The self-archiving initiative. Nature. 2001; 410:1024–1025.

Harnad, S. Electronic preprints and postprints. In: Encyclopedia of Library and Information Science. Boca Raton, FL: CRC Press; 2003.

Harnad, S., Why the UK should not heed the Finch Report. LSE Impact of Social Sciences Blog. 2012 (summer issue). Available from: http://eprints.soton.ac.uk/341128/

Harnad, S., Brody, T., Comparing the impact of open access (OA) vs. non-OA articles in the same journals. D-Lib Magazine. 2004; 10(6) Available from: http://www.dlib.org/dlib/june04/harnad/06harnad.html

House of Commons Science and Technology Committee. Scientific Publications: Free for All? Tenth Report of Session 2003–04, Volume I: Report. London: The Stationery Office; 2004. Available from: http://www.publications.parliament.uk/pa/cm200304/cmselect/cmsctech/399/399.pdf

Ingelfinger, F. J. Definition of ‘sole contribution’. The New England Journal of Medicine. 1969; 281:676–677.

Jacobs N., ed. Open Access: Key Strategic, Technical and Economic Aspects. Oxford: Chandos Publishing, 2006.

Kahin B., Nesson C., eds. Borders in Cyberspace: Information Policy and the Global Information Infrastructure. Cambridge, MA: MIT Press, 1997.

Kalantzis-Cope, P. Whose property? Mapping intellectual property rights, contextualizing digital technology and framing social justice. In: Kalantzis-Cope P., Gherab Martín K., eds. Emerging Digital Spaces in Contemporary Society: Properties of Technology. New York: Palgrave-Macmillan; 2011:131–144.

Kalantzis-Cope P., Gherab Martín K., eds. Emerging Digital Spaces in Contemporary Society: Properties of Technology. New York: Palgrave-Macmillan, 2011.

Kurtz, M., Brody, T. The impact loss to authors and research. In: Jacobs N., ed. Open Access: Key Strategic, Technical and Economic Aspects. Oxford: Chandos Publishing; 2006:45–54. Available from: http://eprints.soton.ac.uk/40867/

Lawrence, S., Free online availability substantially increases a paper’s impact. Nature 2001; (31 May). Available from: http://www.nature.com/nature/debates/e-access/Articles/lawrence.html

Levy, P. Collective Intelligence: Mankind’s Emerging World in Cyberspace. New York: Plenum, 1997.

Merton, R. K. The Sociology of Science: Theoretical and Empirical Investigations. Chicago, IL: The University of Chicago Press, 1973.

O’Reilly, T., What Is Web 2.0? Design patterns and business models for the next generation of software, 2005. Available from: http://www.oreillynet.com/lpt/a/6228

Peek R., Newby G., eds. Scholarly Publishing: The Electronic Frontier. Cambridge, MA: MIT Press, 1996.

PIRA International, University of East Anglia and KnowledgeView. Commercial Exploitation of Europe’s Public Sector Information: Executive Summary. Luxembourg: Office for Official Publications of the European Communities; 2000. Available from: ftp://ftp.cordis.lu/pub/econtent/docs/2000_1558_en.pdf

Relman, A. S. The Ingelfinger rule. The New England Journal of Medicine. 1981; 305:824–826.

Sellars, W. Science, Perception and Reality. London: Routledge and Kegan Paul, 1963.

Swan, A., et al. Overview of scholarly communication. In: Jacobs N., ed. Open Access: Key Strategic, Technical and Economic Aspects. Oxford: Chandos Publishing; 2006:3–12. Available from: http://eprints.ecs.soton.ac.uk/12427/

Till, J. E. Predecessors of preprint servers. Learned Publishing. 2001; 14:7–13.

Ware, M. Pathfinder Research on Web-Based Repositories. London: Publisher and Library/Learning Solutions, 2004.

Weiss, P. N., Backlund, P. International information policy in conflict: open and unrestricted access versus government commercialization. In: Kahin B., Nesson C., eds. Borders in Cyberspace. Cambridge, MA: MIT Press; 1997:300–321.


1. The Budapest Open Access Initiative (14 February 2002), sponsored by the Open Society Institute: http://www.budapestopenaccessinitiative.org/read. And the Bethesda Statement on Open Access Publishing (20 June 2003): http://dash.harvard.edu/bitstream/handle/1/4725199/suber_bethesda.htm?sequence=1.

2. Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities (22 October 2003): http://oa.mpg.de/lang/en-uk/berlin-prozess/berliner-erklarung/.

3. See http://www.researchinfonet.org/wp-content/uploads/2012/06/Finch-Group-report-FINAL-VERSION.pdf.

4. Another recent source of recommendations may be found at http://www.budapestopenaccessinitiative.org/boai-10-recommendations.

5. See http://dash.harvard.edu/bitstream/handle/1/8886691/06-02-12.htm?sequence=1#libre.

6. See http://www.whitehouse.gov/sites/default/files/microsites/ostp/ostp_public_access_memo_2013.pdf.

7. A list of all the world’s open access repositories may be viewed at http://www.opendoar.org/.

8. See http://arxiv.org/.

9. As of 10 June 2013.

10. See http://cogprints.org.

11. See http://precedings.nature.com.

12. See http://opcit.eprints.org/oacitation-biblio.html for a complete and updated bibliography of case studies on the relationship between open access to scientific articles and the increase in citations received.

13. See http://www.barnesandnoble.com/.

14. FLOSS means Free Libre Open Source Software.

15. For an executive summary, see PIRA International, University of East Anglia and KnowledgeView Ltd (2000).

16. Communalism means that scientific results are the common property of the entire scientific community.

17. Disinterestedness means that the results presented by scientists should not be mingled with their financial interests, personal beliefs or activism for a cause.

18. For more information about intellectual property rights in the digital era, see Kalantzis-Cope (2011).
