#AcBookWeek: The Academic Book of the Future: Evolution or Revolution?

 

This post reflects on one of the events that took place during Academic Book Week in Cambridge: a colloquium that brought together booksellers, librarians, and academics to air their views on where the future of the academic book lies.

During the week of the 9th November the CMT convened a one-day colloquium entitled ‘The Academic Book of the Future: Evolution or Revolution?’ The colloquium was part of Cambridge’s contribution to a host of events being held across the UK in celebration of the first ever Academic Book Week, which is itself an offshoot of the AHRC-funded ‘Academic Book of the Future’ project. The aim of that project is both to raise awareness of academic publishing and to explore how it might change in response to new digital technologies and changing academic cultures. We were delighted to have Samantha Rayner, the PI on the project, to introduce the event.

 

The first session kicked off with a talk from Rupert Gatti, Fellow in Economics at Trinity and one of the founders of Open Book Publishers, explaining ‘Why the Future is Open Access’. Gatti contrasted OA publishing with ‘legacy’ publishing and emphasized the different orders of magnitude of the audience for these models. Academic books published through the usual channels were, he contended, failing to reach 99% of their potential audience. They were also failing to take account of the possibilities opened up by digital media for embedding research materials and for turning the book  into an ongoing project rather than a finished article. The second speaker in this session, Alison Wood, a Mellon/Newton postdoctoral fellow at the Centre for Research in the Arts, Social Sciences and Humanities in Cambridge, reflected on the relationship between academic publishing and the changing institutional structures of the university. She urged us to look for historical precedents to help us cope with current upheavals, and called in the historian Anthony Grafton to testify to the importance of intellectual communities and institutions to the seemingly solitary labour of the academic monograph. In Wood’s analysis, we need to draw upon our knowledge of the changing shape of the university as a collective (far more postdocs, far more adjunct teachers, far more globalization) when thinking about how academic publishing might develop. We can expect scholarly books of the future to take some unusual forms in response to shifting material circumstances.

 

The day was punctuated by a series of ‘views’ from different Cambridge institutions. The first was offered by David Robinson, the Managing Director of Heffers, which has been selling books in Cambridge since 1876. Robinson focused on the extraordinary difference between his earlier job, in a university campus bookshop, and his current role. In the former post, in the heyday of the course textbook, before the demise of the net book agreement and the rise of the internet, selling books had felt a little like ‘playing shops’. Now that the textbook era is over, bookshops are less tightly bound into the warp and weft of universities, and academic books are becoming less and less visible on the shelves even of a bookshop like Heffers. Robinson pointed to the ‘crossover’ book, the academic book that achieves a large readership, as a crucial category in the current bookselling landscape. He cited Thomas Piketty’s Capital as a recent example of the genre.

 

Our second panel was devoted to thinking about the ‘Academic Book of the Near-Future’, and our speakers offered a series of reflections on the current state of play. The first speaker, Samantha Rayner (Senior Lecturer in the Department of Information Studies at UCL and ‘Academic Book of the Future’ PI) described the progress of the project to date. The first phase had involved starting conversations with numerous stakeholders at every point in the production process, to understand the nature of the systems in which the academic book is enmeshed. Rayner called attention to the volatility of the situation in which the project is unfolding—every new development in government higher education policy forces a rethink of possible futures. She also stressed the need for early-career scholars to receive training in the variety of publishing avenues that are open to them. Richard Fisher, former Managing Director of Academic Publishing at CUP, took up the baton with a talk about the ‘invisibles’ of traditional academic publishing—all the work that goes into making the reputation of an academic publisher that never gets seen by authors and readers. Those invisibles had in the past created certain kinds of stability—‘lines’ that libraries would need to subscribe to, periodicals whose names would be a byword for quality, reliable metadata for hard-pressed cataloguers. And the nature of these existing arrangements is having a powerful effect on the ways in which digital technology is (or is not) being adopted by particular publishing sectors. Peter Mandler, Professor of Modern Cultural History at Cambridge and President of the Royal Historical Society, began by singing the praises of the academic monograph; he saw considerable opportunities for evolutionary rather than revolutionary change in this format thanks to the move to digital. The threat to the monograph came, in his view, mostly from government-induced productivism. 
The scramble to publish for the REF as it is currently configured leads to a lower-quality product, and threatens to marginalize the book altogether. Danny Kingsley, Head of Scholarly Communication at Cambridge, discussed the failure of the academic community to embrace Open Access, and its unpreparedness for the imposition of OA by governments. She outlined Australian Open Access models that had given academic work a far greater impact, putting an end to the world in which intellectual prestige stood in inverse proportion to numbers of readers.

 

In the questions following this panel, some anxieties were aired about the extent to which the digital transition might encourage academic publishers to further devolve labour and costs to their authors, and to weaken processes of peer review. How can we ensure that any innovations bring us the best of academic life, rather than taking us on a race to the bottom? There was also discussion about the difficulties of tailoring Open Access to humanities disciplines that relied on images, given the current costs of digital licences; it was suggested that the use of lower-resolution (72 dpi) images might offer a way round the problem, but there was some vociferous dissent from this view.

 

After lunch, the University Librarian Anne Jarvis offered us ‘The View from the UL’. The remit of the UL, to safeguard the book’s past for future generations and to make it available to researchers, remains unchanged. But a great deal is changing. Readers no longer perceive the boundaries between different kinds of content (books, articles, websites), and the library is less concerned with drawing in readers and more concerned with pushing out content. The curation and preservation of digital materials, including materials that fall under the rules for legal deposit, has created a set of new challenges. Meanwhile the UL has become increasingly concerned to work with academics, in order to understand how they are using old and new technologies in their day-to-day lives and to ensure that it provides a service tailored to real rather than imagined needs.

 

The third panel session of the day brought together four academics from different humanities disciplines to discuss the publishing landscape as they perceive it. Abigail Brundin, from the Department of Italian, insisted that the future is collaborative; collaboration offers an immediate way out of the often closed-off worlds of our specialisms, fosters interdisciplinary exchanges and allows access to serious funding opportunities. She took issue with any idea that the initiative in pioneering new forms of academic writing should come from early-career academics; it is those who are safely tenured who have a responsibility to blaze a trail. Matthew Champion, a Research Fellow in History, drew attention to the care that has traditionally gone into the production of academic books—care over the quality of the finished product and over its physical appearance, down to details such as the font it is printed in. He wondered whether the move to digital and to a higher speed of publication would entail a kind of flattening of perspectives and an increased sense of alienation on all sides. Should we care how many people see our work? Champion thought not: what we want is not 50,000 careless clicks but the sustained attention of deeply engaged readers. Our third speaker, Liana Chua, reported on the situation in Anthropology, where conservative publishing imperatives are being challenged by digital communications. Anthropologists usually write about living subjects, and increasingly those subjects are able to answer back. This means that the ‘finished-product’ model of the book is starting to die off, with more fluid forms taking its place. Such forms (including film-making) are also better suited to capturing the experience of fieldwork, which the book does a great deal to efface. Finally Orietta da Rold, from the Faculty of English, questioned the dominance of the book in academia.
Digital projects that she had been involved in had been obliged, absurdly, to dress themselves up as books, with introductions and prefaces and conclusions. And collections of articles that might better be published as individual interventions were obliged to repackage themselves as books. The oppressive desire for the ‘big thing’ obscures the important work that is being done in a plethora of forms.

 

In discussion it was suggested that the book form was a valuable identifier, allowing unusual objects like CD-ROMs or databases to be recognized and catalogued and found (the book, in this view, provides the metadata or the paratextual information that gives an artefact a place in the world). There was perhaps a division between those who saw the book as giving ideas a compelling physical presence and those who were worried about the versions of authority at stake in the monograph. The monograph model perhaps discourages people from talking back; this will inevitably come under pressure in a more ‘oral’ digital economy.

 

Our final ‘view’ of the day was ‘The View from Plurabelle Books’, offered by Michael Cahn but read in his absence by Gemma Savage. Plurabelle is a second-hand academic bookseller based in Cambridge; it was founded in 1996. Cahn’s talk focused on a different kind of ‘future’ of the academic book—the future in which the book ages and its owner dies. The books that may have marked out a mental universe need to be treated with appropriate respect and offered the chance of a new lease of life. Sometimes they carry with them a rich sense of their past histories.

 

A concluding discussion drew out several themes from the day:

 

(1) A particular concern had been where the impetus for change would and should come from—from individual academics, from funding bodies, or from government. The conservatism and two-sizes-fit-almost-all nature of the REF act as a brake on innovation and experiment, although the rising significance of ‘impact’ might allow these to re-enter by the back door. The fact that North America has remained impervious to many of the pressures that are affecting British academics was noted with interest.

 

(2) The pros and cons of peer review were a subject of discussion—was it the key to scholarly integrity or a highly unreliable form of gatekeeping that would naturally wither in an online environment?

 

(3) Questions of value were raised—what would determine academic value in an Open Access world? The day’s discussions had veered between notions of value/prestige that were based on numbers of readers and those that were not. Where is the appropriate balance?

 

(4) A broad historical and technological question: are we entering a phase of perpetual change or do we expect that the digital domain will eventually slow down, developing protocols that seem as secure as those that we used to have for print? (And would that be a good or a bad thing?) Just as paper had to be engineered over centuries in order to become a reliable communications medium (or the basis for numerous media), so too the digital domain may take a long time to find any kind of settled form. It was also pointed out that the academic monograph as we know it today was a comparatively short-lived, post-World War II phenomenon.

 

(5) As befits a conference held under the aegis of the Centre for Material Texts, the physical form of the book was a matter of concern. Can lengthy digital books be made a pleasure to read? Can the book online ever substitute for the ‘theatres of memory’ that we have built in print? Is the very restrictiveness of print a source of strength?

 

(6) In the meantime, the one thing that all of the participants could agree on was that we will need to learn to live with (sometimes extreme) diversity.

 

With many thanks to our sponsors, Cambridge University Press, the Academic Book of the Future Project, and the Centre for Material Texts. The lead organizer of the day was Jason Scott-Warren (jes1003@cam.ac.uk); he was very grateful for the copious assistance of Sam Rayner, Rebecca Lyons, and Richard Fisher; for the help of the staff at the Pitt Building, where the colloquium took place; and for the contributions of all of our speakers.

 

Three hundred years of piracy: why academic books should be free

This is a repost from George Walkden’s personal blog about Open Access in the context of academic linguistics. The original post can be found here.

I think academic books should be free.

It’s not a radically new proposal, but I’d like to clarify what I mean by “free”. First, there’s the financial sense: books should be free in that there should be no cost to either the author or the reader. Secondly, and perhaps more importantly, books should be free in terms of what the reader can do with them: copying, sharing, creating derivative works, and more.

I’m not going to go down the murky road of what exactly a modern academic book actually is. I’m just going to take it for granted that there is such a thing, and that it will continue to have a niche in the scholarly ecosystem of the future, even if it doesn’t have the pre-eminent role it has at present in some disciplines, or even the same form and structure. (For instance, I’d be pretty keen to see an academic monograph written in Choose Your Own Adventure style.)

Another thing I’ll be assuming is that technology does change things, even if we’d rather it didn’t. If you’re reluctant to accept that, I’d like to point you to what happened with yellow pages. Or take a look at the University of Manchester’s premier catering space, Christie’s Bistro. Formerly a science library, this imposing chamber retains its bookshelves, which are all packed full of books that have no conceivable use to man or beast: multi-volume indexes of mid-20th-century scientific periodicals, for instance. In this day and age, print is still very much alive, but at the same time the effects of technological change aren’t hard to spot.

With those assumptions in place, then, let’s move on to thinking about the academic book of the future. To do that I’m going to start with the academic book of the past, so let’s rewind time by three centuries. In 1710, the world’s first copyright law, the UK’s Statute of Anne, was passed. This law was a direct consequence of the introduction and spread of the printing press, and the businesses that had sprung up around it. Publishers such as the rapacious Andrew Millar had taken to seizing on texts that, even now, could hardly be argued to be anything other than public-domain: for instance, Livy’s History of Rome. (Titus Livius died in AD 17.) What’s more, they then claimed an exclusive right to publish such texts – a right that extended into perpetuity. This perpetual version of copyright was based on the philosopher John Locke’s theory of property as a natural right. Locke himself was fiercely opposed to this interpretation of his work, but that didn’t dissuade the publishers, who saw the opportunity to make a quick buck (as well as a slow one).

Fortunately, the idea of perpetual copyright was defeated in the courts in 1774, in the landmark Donaldson v. Becket case. It’s reared its ugly head since, of course, for instance when the US was preparing its 1998 Copyright Term Extension Act: it was mentioned that the musician Sonny Bono believed that copyright should last forever (see also this execrable New York Times op-ed). What’s interesting is that the original proposal was challenged in its own day by the Edinburgh-based publisher Alexander Donaldson – and, for his efforts to make knowledge more widely available, Donaldson was labelled a “pirate”. The term has survived, and is now used – for instance – to describe those scientists who try to access paywalled research articles using the hashtag #ICanHazPDF, and those scientists who help them. What these people have in common with the cannon-firing, hook-toting, parrot-bearing sailors of the seven seas is not particularly clear, but it’s clearly high time that the term was reclaimed.

If you’re interested in the 18th century and its copyright trials and tribulations, I’d encourage you to take a look at Yamada Shōji’s excellent 2012 book “Pirate” Publishing: The Battle over Perpetual Copyright in Eighteenth-Century Britain, which, appropriately, is available online under a CC-BY-NC-ND license. And lest you think that this is a Whiggish interpretation of history, let me point out that contemporaries saw things in exactly the same way. The political economist Adam Smith, in his seminal work The Wealth of Nations, pointed out that, before the invention of printing, the goal of an academic writer was simply “communicating to other people the curious and useful knowledge which he had acquired himself”. Printing changed things.

Let’s come back to the present. In the present, academic authors make almost nothing from their work: royalties from monographs are a pittance. Meanwhile, it’s an economic truism that each electronic copy made of a work – at a cost of essentially nothing – increases total societal wealth. (This is one of the reasons that intellectual property is not real property.) What academic authors want is readership and recognition: they aren’t after the money, and don’t, for the most part, care about sales. The bizarre part is that they’re punished for trying to increase wealth and readership by the very organizations that supposedly exist to help them increase wealth and readership. Elsevier, for instance, filed a complaint earlier this year against the knowledge sharing site Sci-Hub.org, demanding compensation. It beggars belief that they have the audacity to do this, especially given their insane 37% profit margin in 2014.

So we can see that publishers, when profit-motivated, have interests that run counter to those of academics themselves. And, when we look at the actions of eighteenth-century publishers such as Millar, we can see that this is nothing new. Where does this leave us for the future? Here’s a brief sketch:

  • Publishers should be mission-oriented, and that mission should be the transmission of knowledge.
  • Funding should come neither from authors nor from readers. There are a great many business models compatible with this.
  • Copyright should remain with the author: it’s the only way of preventing exploitation. In practice, this means a CC-BY license, or something like it. Certain humanities academics claim that CC-BY licenses allow plagiarism. This is nonsense.

How far are we down this road? Not far enough; but if you’re a linguist, take a look at Language Science Press, if you haven’t already.

In conclusion, then, for-profit publishers should be afraid. If they can’t do their job, then academics will. Libraries will. Mission-oriented publishers will. Pirates will.

It’s sometimes said that “information wants to be free”. This is false: information doesn’t have agency. But if we want information to be free, and take steps in that direction… well, it’s a start.


Note: this post is a written-up version of a talk I gave on 11th Nov 2015 at the John Rylands Library, as part of a debate on “Opening the Book: the Future of the Academic Monograph”. Thanks to the audience, organizers and other panel members for their feedback.

Open Access and Academic Publishing

Independent information services professional Ian Lovecy suggests that there are a number of questions – philosophical and practical – which need to be answered before open access could be a sound and sustainable method of academic publishing. This post makes no attempt to answer them, but rather to identify them and perhaps open up some of the issues involved to discussion.

What do we mean by “open access”?

Time was, I could walk into my public library, ask for a book or a journal article, and if they didn’t have it they would obtain it for me through inter-library loan; that was open access to information, and it died in the ‘70s and ‘80s. In those decades, access remained open, but subject increasingly to charges, primarily to cover the administrative costs of the service. Increasingly, requests became subject to a form of censorship, requiring proof of need or (in Universities) a tutor’s signature.

Today we have the Internet, and access to much of the information on it is available to anyone with access to a computer. (This is theoretically anyone in the UK since computers are available in public libraries and Internet cafés, although opening hours, location, costs, line speed and computer literacy may all impose limitations.) Not all the information is available free of charge, but subject to questions of privacy and confidentiality, public interest, security and government policy on access, it is available to all.

Two questions relating to academic information immediately become apparent:

  • Do we mean free open access?
  • Do we mean open access to the entire world?

Equally, in the case of inter-library loans, it was understood that the material was governed by copyright legislation; frequently, especially in cases where material was provided as a photocopy, recipients had to sign a declaration that they would observe copyright. Items published on the Internet are, or at least can be declared to be, subject to the same legislation, but the enforcement is even harder than it is with library books (and I am sure many lecturers have used the occasional copyright photograph in their lectures without seeking permission). In theory, enforcement should be easier in the case of electronic access, since such access can be traced; in practice, with multiple access by people in several different jurisdictions, control is effectively impossible. A further question is therefore:

  • Do we want to put restrictions on the use of the information?

 

What are the reasons for open access publishing?

A frequently-heard justification is that since public funding pays for the research the results should be publicly available. This is at best a slightly tenuous argument – even after the passing of the Freedom of Information Act there is still a great deal of publicly-funded information to which the public most decidedly do not have access. It can, in any case, apply only to a subset of research, primarily that funded wholly by the Research Councils. However, the current intention is that all material, if it is to be included in the REF, must be available on open access.

In the past, there has been an underlying assumption that all research undertaken in Universities is publicly funded; this is no longer tenable. Even ignoring the existence of entirely privately-funded Universities, much research – particularly in medicine, biochemistry and the social sciences – is jointly funded by research councils and either charities or business (or sometimes both); there may be restrictions on the amount of information which can be published because of commercial considerations. Many academic posts in the Humanities are now funded entirely by student fees – surely that cannot count as public funding?

It should not be forgotten that there exists also a group of independent researchers – retired academics, former students who have gone into non-academic work and self-taught members of the public with a keen interest in a specific topic. None of these is likely to be submitting material to the REF (with the possible exception of the first group) and they are not therefore under pressure to use open access publishing; they will, however, be affected by some of the consequences of it considered below.

There can be few researchers who do not wish their work to be read, appreciated and cited by others, and for many who publish in the form of journal articles this is indeed the only reward they have. It is understandable that they may feel exploited when they see the price charged for the journals in which they publish; it is even more understandable that institutions resent paying a high price to buy back the results of work which they feel they have funded. Is the correct answer to this problem making the information available to all? What about monographs? In this case the authors may receive a (small) financial reward in the form of royalties. Are they to be denied this? After deciding what we mean by open access, the next question to answer is:

  • Is there any moral or philosophical justification for insisting on open access publishing?

What might be the practical effects of open access publishing?

The practical effects can be considered under five headings: the value of information, effects on conventional publishing, location and language of publication, universality of access and costs.

The value of information

A professor (of English literature, no less!) once told me there was no need for subject librarians because “all students had to do was use the Internet to find things”. I put the following fairly specific search into Bing: “studies in Shakespeare’s Henry VIII”. That is, of course, one of the most minor of the plays; the search returned 23,500,000 hits. The first 20 included a Wikipedia entry, several references to SparkNotes, summaries and quizzes, one text, one (Spanish) production, and several references to A study of Shakespeare’s Henry VIII by Cumberland Clark. Which is doubtless an excellent book; but a similar search in Birmingham University Library’s catalogue shows in addition, in the first 10 items, books by Larry Champion, Alan Young, Sir Edward German, Tom Merriam, Maurice Hunt and Albert Cook, a text with a preface by Israel Gollancz, and a production by the Royal Shakespeare Company. Some of the books are on detailed aspects of the play or its authorship. It is a manageable list, and represents the selection (you could call it censorship) by a group of scholars over a number of years of books which say something worth reading about the play.

That selection is made in a number of ways, such as the reputation or place of work of the author, the reputation of the publisher, reviews in newspapers and professional journals. There can be dangers in all of these: an author may have a reputation as a maverick and be scorned by established academics; just because an academic doesn’t work in a Russell Group university it doesn’t mean he or she is not good; Mills and Boon might publish a scholarly book; reviewers may have personal axes to grind. However, behind all of this is the publisher: it is the publisher who publicises the book, sends around lists of forthcoming volumes to libraries and academics, sends out review copies. Going back one step, publishers’ editors decide which books to take on, and there can be problems here for those with radically new ideas; the existence of a flourishing, competitive industry is one way of minimising the risk of censorship.

In an open access world, the radical and the maverick are in less danger of being stifled by the establishment; but they have an even greater risk of being lost in the mass of irrelevance which comes pouring out of a search. Only their institution might help to refine the search, and even this might not assist given the lack of sophistication of most search engines: adding “published by Universities” to my search had some effect – it reduced it to a mere 9,300,000 hits. So a vital question in relation to open access is:

  • How do we sort the wheat from the chaff?

Effects on conventional publishing

If open access publishing of monographs became the default option – as it might if open access became a requirement of the REF – the effects on the academic publishing industry could be severe to catastrophic. Much would depend on a question asked above, and explored further below: is open access to be free access? Electronic publication is not necessarily free – e-books are often cheaper than printed copies, but librarians would question whether even this is true of e-journals – but payment is made by somebody in some way. If, however, open access were to mean free or cheap access, academic publishing could become unsustainable; even today margins are small and there is often cross-subsidy within major publishers from more lucrative parts of the list. University presses are often subsidised by parent institutions, usually as part of institutional marketing.

A significant decline in the number of academic publishers would (as indicated above) greatly affect the way in which published research was publicised. It would also leave independent scholars outside the university system with little or no choice of where to submit a manuscript, thus potentially reducing the amount of information and scholarship to which the world has access.

However, despite talk of “webs” and “clouds”, it must be remembered that the Internet is a very physical thing at heart: it needs servers which hold the information. Storage of digitised material is becoming ever cheaper; costs of maintenance of equipment are not. Servers sometimes go down – ask any customer of the Royal Bank of Scotland! – and the more information on a single server the more inconvenience caused when this happens. One way of minimising this problem is to scatter the information on a number of machines; another is to duplicate it on more than one server. Might publishers become involved in this? Would every university want to dedicate machines and staff time to such an operation? Who would publicise new monographs, or persuade people to review them? These questions could be summed up as:

  • Would there be a place for academic publishers in an open access system?

Location and language of publication

In the age of the Internet, research collaboration across national borders is common; however, with the important exception of the United States, commitment to open access publication is not. For institutions and scholars in many countries, publication in respected journals which are not open access may be important for prestige or career purposes. Hitherto in the UK, this conundrum has usually been solved by the open access “green” version of a paper (the penultimate draft), leaving the final version to be published normally; the “green” version is acceptable to the Research Councils (and so far to the REF) as satisfying their conditions.

If it is decided that all material for submission to the REF must be available as open access, a further problem arises. Researchers in linguistics or the literature of other languages and cultures frequently publish in non-English languages in journals published in the relevant country. Open access journals in, for example, Mandarin or Sanskrit, Latin or even French, may be hard to find! Open access publication of monographs might be possible, but probably only through a UK publisher – depending on the answers to questions above. This could affect the breadth of the reception of the item, which as well as diminishing any royalties which might still be available could significantly reduce the impact in respect of a REF submission.

An important question to be considered if open access academic publishing is to become the default expectation is:

  • Are foreign language publications to be exempted, and if not what provision is to be made for them?

Access to “Open access” and its costs

As suggested above, “open access” is usually interpreted as free access, but this is not without cost. At present universities have been willing to place science articles on local servers at marginal cost; if humanities articles and monographs are added, the costs of maintenance over the next fifty years will probably be far from marginal, even in research-intensive universities. Moreover, there will be a need for more sophisticated search software, akin to that in use by libraries – and as librarians will confirm, such software is not cost-free.

Moreover, the costs of indexing may be increased. If articles are not collected into journals, indexers will have to search over a hundred sites for potential material. This could be carried out by software, but again such software would have a cost; and there would be the added problem that software gleaning keywords from titles or full text may not take account of context. (It sometimes happens with human cataloguing – I have seen a book on Keats entitled The Mirror and the Lamp classified under optics!)

Alternatively, material (at least articles, although not monographs) could be collected into online journals. This could ease the problems of refereeing and therefore the selection of useful material, although it would bring back the familiar problems of the current refereeing system – not least its costs in time, if not in money. But online journals would need editors and some level of administrative staff – publishers, in other words – and there would be costs involved. Who would pay them? If the answer is users, we are back to the question of whether open access is to be free; and if it is paid for by institutions, those who do not belong to such an institution are likely to find themselves disenfranchised.

There are also hidden costs in terms of the use of materials. Screens and readers are improving all the time (although that too is a cost – I don’t need equipment to read a printed book), but many people still find prolonged use uncomfortable. Hyperlinks can facilitate the movement from index to relevant page, but activities which require having more than one volume open at a time – comparing two editions, for example, or reading a critical work in conjunction with a text – can be awkward.

A book published 400 years ago is (generally) as easy to read as one published four days ago; computer software is upgraded frequently, and although upward compatibility is often included, there are sometimes step changes – Windows 10 has provided examples, and many word-processing systems confine upward compatibility to perhaps the last five versions. In my research I used a number of books and articles published 100 years previously, and probably little-used in between; how accessible will material published today be in 100 years, and what will be the costs of keeping it accessible?

There are a number of questions arising under this heading:

  • Will there be a need for new indexing and/or searching software, and if so who will pay?
  • Will in-built upward compatibility in software cope with material published a century earlier, and if not how will upgrading be managed?
  • If there are costs in respect of open access which are borne collectively by institutions rather than by the end-user, will some potential end-users find themselves without access?
  • How can the problems related to potential inconvenience of use be overcome?

Ian Lovecy MA, PhD, Hon FCLIP, FCLIP, MAUA

What do you think of the issues and questions raised in this post?

Are there others?

Get in touch below!

Format, Flexibility, and Speed: Palgrave Pivot

Guest post by Jen McCall, Global Head of Humanities, Scholarly Division and Publisher, Theatre & Performance at Palgrave Macmillan. Jen discusses Palgrave Macmillan’s short-form monograph, the Pivot: what prompted the development of this publishing format; how it operates within current contexts of publishing, academia, and the REF; and why the academic book of the future must be flexible.

“I have written a book for my research, but it’s not quite a monograph”, our editors would often hear when visiting academics on campus. “And it’s too long for a journal article. I don’t suppose you’d accept something 50,000 words long, would you?”

Or alternatively: “I don’t have the time to publish a book. I’d be better off getting this research out quickly, by splitting it into several journal articles, although that wouldn’t be my preferred option.”

The idea for our mid-length research format, Palgrave Pivot, came from conversations such as these. Most scholarly journal articles are between 7,000 and 8,000 words in length, while most academic print books published are between 70,000 and 110,000 words, and historically there has rarely been any flexibility in this due to the methods used, and costs involved, in the printing process.

However, the scholarly publishing landscape has been changing for a number of years, and the advent of ebooks means that we publishers are less restricted by word counts and page numbers than we once were. In a digital world, we are not bound by the printing costs which once defined the size of a monograph, or by the page count which must make up each issue of a journal. The academic book of the future need not be so restricted.

What our authors told us

Prompted by these changes in the scholarly publishing landscape, in 2011 Palgrave Macmillan undertook a programme of research designed to explore how our academic audience both uses and produces research. First we established a research panel, with 1,268 representatives from across the whole Humanities and Social Sciences community, representing a wide range of disciplines and geographies.

The first survey put to the panel explored academic perspectives on the length and speed of academic content published in HSS. It found:

  • Almost two thirds of academics (64% of the 870 who responded to the survey) felt that the length of journal articles was about right, while for monographs this figure was slightly lower at 50%.
  • A number of authors (36% of journal article authors and 50% of monograph authors) were not satisfied with the formats available to them, with almost all of those who felt the designated length was not right saying (in both cases) that it was too long.
  • The results showed that 16% believe that current outputs (journal articles and monographs) are sufficient.
  • Those who felt that a mid-form was a good idea, or who were neutral, were asked how likely they would be to publish research in a format between the length of a journal article and a monograph: 84% (n=705) indicated that they would be likely to publish at this length.

Speed of production also proved to be a key issue for the academics we surveyed. During the qualitative research phase, Neil Chakraborti, Senior Lecturer in Criminology, University of Leicester, UK, commented on the needs of ‘scholars seeking to disseminate their research while it is still fresh and current’. Likewise, Jane Fitzpatrick, Acquisitions Librarian at CUNY Graduate Center, USA, described the need “for timely research in the digital world. The Humanities and Social Sciences have been left behind in the immediacy of published research […]. As we know, ‘speed’ and ‘innovation’ are key in the current world of scholarly research”.

The Birth of Palgrave Pivot

As a result of our market research, we developed the idea of Palgrave Pivot: an e-first book format for important new scholarly research of between 25,000 and 50,000 words, published within 12 weeks of acceptance of the manuscript. Print copies of the books are also available on demand, so that those who prefer to hold a physical copy in their hand can do so. Of course, the mid-length format has been explored by other publishers in recent years. In November 2010, Springer announced SpringerBriefs, for works between 50 and 125 pages in length: concise summaries of cutting-edge research and practical applications across a wide spectrum of fields. 2011 saw the launch of Princeton Shorts, brief selections taken from previously published influential Princeton University Press books and produced exclusively in e-book format. But Palgrave Pivot is the first initiative to offer a mid-length format for original research in the humanities and social sciences, rather than summaries of existing work.

How do we publish Palgrave Pivots so quickly?

In order to make this speedy production time work, we have had to revise and adapt our business workflows substantially. For example, one of the areas that usually takes time in the production process is that of choosing a cover design, which often involves some back-and-forth between design, marketing, sales, editorial and of course the author, as well as having to gain rights permission for images used.

For Palgrave Pivot, rather than having an individually designed cover, authors choose from a wide range of templated designs created by our in-house team. Authors also have to agree to answer any queries from copy-editors and typesetters very quickly; this infographic gives a clear example of how the process works from an author’s point of view.

Ensuring we publish the best in scholarship

We have been very careful, along with our commitment to publish Palgrave Pivot titles within a short timeframe, to ensure that the quality of the peer review is in no way compromised. Palgrave Macmillan prides itself on the quality of the research we publish, and we would not have been able to maintain our reputation for quality work without rigorous peer review.

We are well aware that it is not just the scholarly publishing landscape that is changing – so are the demands of a life and career in academia. For example, we ensured that we met the stringent requirements of the Higher Education Funding Council for England and obtained written confirmation that research outputs published as Palgrave Pivots are eligible for the UK’s Research Excellence Framework (REF) – subject to all other criteria being met.

The first 21 Palgrave Pivot titles were published on 30 October 2012, and we immediately received lots of positive feedback from the scholarly community (as well as a rush from many scholars to publish one ‘just in time’ for the last REF!).


Palgrave Pivot has allowed us to offer our authors the flexibility to publish their research at its natural length and in a variety of formats. Nowhere on our list is this better exemplified than in Medieval Studies, where our series The New Middle Ages publishes Pivots as well as full-length monographs. That, along with our journal postmedieval, has opened up the field, giving scholars of any generation more ways to communicate their research.

The speed of the production process gives our authors in the humanities opportunities to publish work which is timely or time-sensitive. This means, by way of example, that we could maximise the impact of the work of Joseph Cheah and Grace Ji-Sun Kim in their book Theological Reflections on Gangnam Style. Without the speed that this publishing format offers us, it just wouldn’t have been possible to ride the wave of the popularity of this phenomenon. Another Pivot, Digital Afterlives of Jane Austen, a fascinating look at the ever-expanding realm of Austen fandom on the Internet, was reviewed on the LSE’s Impact Blog.


In 2013, Palgrave Macmillan announced an open access option for authors of Palgrave Pivot publications, as well as for research monographs, and we published our first open access Palgrave Pivot in 2014, Seeing Ourselves Through Technology by Jill Walker Rettberg.

Two years on, we have published over 200 Palgrave Pivots across business, the humanities and social sciences, at an average speed of 10 weeks. Our shortest title so far has been just 78 pages, while the longest has been 196.

It is fair to say that Palgrave Pivot has proved to be a popular format, both in terms of its speed and its flexibility on length; and we believe that the academic book of the future will need to be similarly flexible if it is to meet the demands not just of the changing scholarly publishing landscape, but of the changing demands of a career in academia.