#AcBookWeek 2015: Publisher Workshop at Stationers' Hall

To celebrate the recent announcement of the next Academic Book Week (23-28 January 2017), we’re revisiting some highlights from last year’s #AcBookWeek! The first post considers the gathering of academic publishers at the historic Stationers’ Hall to discuss some of the challenges and opportunities facing the industry. There were 25 individuals representing seven academic publishers, all of which publish books in print and/or digital format. The participants were asked to work in groups and address some of the core questions first posed at the launch of The Academic Book of the Future project. Project co-investigator Nick Canty (UCL) reflects on the event.

The questions and issues we put to the assembled publishers spanned three main areas, as follows:

 

1. Changes in the nature of research, the research environment and the research process

What do academic books do?

We started off by asking publishers for their views of what purposes they think academic books fulfil. Answers were varied, with some participants asking how we define which books relate to research and which are for reference. This point was picked up by another participant, who argued that publishers’ categories (reference or textbook) don’t matter – what matters is the prestige of where you find your content and being provided with trusted, credible content. There is a glut of information today, with undergraduate students and researchers drawing on a broader pool of resources than in the past (including Wikipedia), which has partly been enabled by digital technologies, although it was questioned whether the structures were in place for interdisciplinary research.

Additional purposes for the academic book were offered, for instance: for academics to achieve tenure, or to publish their PhD thesis; while another participant observed that academic books are now required as a tool for metrics to help define impact, as well as working for libraries to gauge interest through bibliographic data. A more apt starting point might be to ask what the book is doing: proving a hypothesis, making an argument, or communicating an idea – but this doesn’t answer whether textbooks, reference, and professional books should be considered academic books, too. Our seemingly simple question clearly has several possible complex and multi-faceted answers.

 

What changes have taken place in the research environment?

Moving on, we looked at how research is changing in academia. This shook out some fascinating points. As well as comments about the REF (Research Excellence Framework), several participants mentioned the pressure to produce research outputs and the ‘need for speed’, which was pushing researchers towards journals and away from books (presumably because of their longer production times). The pressure to publish quickly has driven big changes in the production process, and there have been advances on this side of publishing. However, the sales cycle with library wholesalers hasn’t moved as quickly, and advance notice to market is still at least six months. As someone else said, the rate of change is quite slow.

Alternative approaches to research were also picked up, including real-time feedback and peer review, crowdfunding, and the Knowledge Unlatched publishing model, along with a question about whether Amazon’s classifications are becoming more important – presumably for discoverability.

 

New forms of books

We wanted to find out how books might change because of new technologies and Open Access (OA). There was agreement that OA is having the greatest influence on journals, with books following more slowly behind. Several participants remarked that OA and new media offer more opportunity for collaboration with peer-adopted books with extra resources such as data and video. Shorter book formats, such as Palgrave’s Pivot series, are also a response to a changing environment. New media might herald new virtual collections, such as chapters and articles which are led by XML and metrics, although other participants sounded a note of caution: books are still books and they are not changing – they are still driven by market demand, and the activity of publishers is still the traditional model of print with some digital offerings.

There were observations that, with booksellers increasingly reluctant to stock niche titles and the academic book more challenged in terms of sales, it is now hard to find such books in bookstores; they are mostly found in libraries, although authors still want print copies. This reflects broader concerns about the visibility of books in brick-and-mortar stores as the online space expands.

 

2. How are the processes through which books are commissioned, approved or accepted, edited, produced, published, marketed, distributed, made accessible, and preserved changing, and what are the implications for the following?

Publishers

Needless to say this elicited lots of responses, with publishers seen as moving from B2B operations to B2C, and more functions outsourced to attempt to lower costs. While some participants didn’t think marketing had changed much over the last decade, others saw changes to staff recruitment as new skillsets are needed as consumer marketing becomes more important. Clearly there are differences between publishers here. There was a comment that nowadays publishers have to do more direct marketing and rely less on channel marketing.

Authors were seen as becoming more ‘savvy’, more demanding, and more knowledgeable on all aspects of publishing – but particularly in marketing, where, for example, they understand the importance of Amazon profiles. However, there was very little change to the commissioning process, which was still based on a conversation, a campus visit, or a meeting at a conference. Academics are therefore still ‘student intermediaries’. There is a need to make books available everywhere, but it is difficult to push every channel, and there is therefore more pressure on authors to help with marketing via their profile in academia. The publishing industry increasingly values media skills, and as a consequence there is a convergence of academic and trade publishing at this point.

The publisher brand and the website are important but editors still need to actively reach out in the commissioning process. Editors need usage data to inform commissioning decisions but they aren’t getting this at the moment.

In terms of the publishing process, as well as new distribution formats (XML, video), reference works can now be published in stages with no single publication date, raising the question: what is ‘enough’ content to launch with? Finally, there was general agreement that while there are experiments with peer review, it is ‘here to stay’ and ‘still central’ to academic publishing.

 

Aggregators

Pressures and tensions were noted here. These revolve around how sustainable the aggregator business model is, given publishers’ improving discoverability and free searching via Google. There is also tension in that libraries still want aggregators and value their services, and small publishers need aggregators (‘in thrall to them’), but publishers are selling complete books – not bits of content. The situation is made more complicated by centralisation and mergers in the sector.

 

Booksellers

In addition to the points about booksellers above, participants noted the disappearance of campus bookstores and the emphasis on stocking high sales books rather than niche ones, therefore questioning the value of bookstores to publishers today.

 

Libraries 

The issue of preservation came through here, in addition to comments about squeezed library budgets (although new models such as just-in-time purchasing and PDA were mentioned as solutions). There was concern about what happens when publishers merge and features of online access are no longer available with the new company (the example cited was in relation to viewing PDFs after a merger). A further concern was that, although libraries keep digital archives, what happens when formats change? This has implications for future access and preservation.

 

How might the relationships between the different kinds of agents in the publishing supply chain develop in the future?

The last question looked at the supply chain and how publishers and other intermediaries might work together in the future. Once again, some tensions were noted. Libraries are concerned about the power of aggregators, but they choose to work with them rather than with individual publishers. This makes problems hard to resolve, as it is unclear who is ultimately responsible: the aggregator or the publisher? One group suggested we need to ask what an intermediary is in the supply chain: can we consider the library as an aggregator today? Another group defined intermediaries as ‘anyone/thing that intervenes between point of production and point of use/reading.’

Publishers increasingly want direct access to end-user data from aggregators to drive usage to their online collections to improve renewals, but this desire to drive users to their sites puts them in conflict with aggregators, who provide little information to publishers. Open Access is a possible way to sidestep aggregators, but it then needs something like Amazon or Google for users to discover the books.

 

Conclusion

The workshop was an opportunity for the publishing industry to engage with some of the key issues the project has sought to address. While there were bound to be contradictions among participants, what came through were questions about the future role of aggregators in the supply chain, changes in the research environment and, perhaps as a consequence, changes in how authors work with publishers, and changes in the way publishers operate. There was agreement, however, that the book, whether print or digital, was here to stay.


Creative writing theses: guidelines on discoverability and open access

On 5th May 2016, the Project attended a meeting at the British Library to discuss the issue of discoverability of creative writing theses. The meeting was organised by Dr Susan L. Greenberg (Senior Lecturer in the University of Roehampton’s Department of English and Creative Writing). She acted on behalf of the National Association of Writers in Education (NAWE) whose remit includes supporting the work of creative writing academics in the UK. The meeting brought together leading academics in the field of creative writing, as well as library staff from the British Library and university libraries. Discussions expanded well beyond the initial topic of discoverability, touching upon a wide range of issues. This blog post is a summary of the discussions that took place, and includes some important advice for those submitting creative writing PhD theses.

Discoverability

The initial topic of conversation was discoverability. A core concern is that it is difficult for researchers to find creative writing theses, particularly without an author name, and it is also difficult to advise students on how to find them. Dr Greenberg outlined this in an earlier blog post, but the conversation at the British Library meeting extended the scope of debate. The following issues may hamper the discoverability of creative writing theses:

  • The title of the thesis is often metaphorical, and may not be explicit.
  • Often there are no abstracts.
  • Accompanying metadata is often unclear, or even missing altogether.
  • The thesis can be in two parts – creative work and critical analysis – but this is not always the case. How are the different parts catalogued and searched for?
  • At an institutional level, the forms that must be filled in by PhD students are designed for other disciplines, and may not contain the fields required to make creative writing theses discoverable.
  • Creative theses that incorporate a media element cannot currently be deposited in EThOS.
  • International barriers exist: for example, a UK researcher faces difficulties finding and accessing theses from Australia.
  • There is a lack of consensus across institutions about terminology: creative writing PhDs are catalogued and described on EThOS in different ways, for instance:
    • PhD in Creative Writing
    • PhD in English Literature
    • PhD in English with Creative Writing
    • PhD in Critical and Creative Writing

EThOS does not have an option to catalogue a thesis under ‘creative writing’, so it must be included in the abstract/keywords if it is to appear.

In the meeting it became clear that there are numerous reasons for the difficulties outlined above, including a lack of clarity about who is responsible for training students in the use of electronic repositories. Should this be the role of specialist subject supervisors, graduate schools, or research training departments? As increasing technical demands are made on researchers, it is an issue that must be resolved.

Although the day was ostensibly about discoverability, it soon emerged that there were several other interconnected issues around creative writing theses in current and emerging academic and publishing contexts, which are described in the rest of this post.

Open Access mandates and institutional repositories

The major issues seemed to hinge on Open Access. UK universities now mandate that their researchers deposit their work in Open Access repositories, which has specific implications for creative writing researchers, as outlined below.

Intellectual Property

When EThOS was established, research by Charles Oppenheim on Intellectual Property Rights (IPR) concluded that publishing theses in repositories posed a very low risk to the rights of authors. But this is not the case for creative writing theses. While academic publishers are by and large prepared to publish a thesis available on a repository as long as it has been substantially revised, trade publishers may refuse publication of a creative writing thesis in a similar position. Greenberg summarised the issue: ‘Having a pre-existing version anywhere, on any conditions, seems to be anathema.’

Version control

Creative writing theses that are later developed by publishers may be amended, ranging from the correction of minor typos to the incorporation of major plot changes. As one writer-academic stated at the meeting: ‘I’d much rather people accessed the revised, published version than the legally available version in a repository.’

Piracy

There is a major issue with piracy; one academic reported the example of a novel that became available as a free Torrent download within weeks of publication.

Embargoes

Researchers have the option to place their thesis under embargo for a fixed period – usually three to five years. This action can help with some of the issues discussed above, but prompts questions of its own. The first concerns knowledge: do all PhD students know that this option is open to them? If not, whose responsibility is it to make them aware? The second is the fixed-term nature of the embargo: can “never” be an option? And whose responsibility is it to renew embargoes once they expire, the library or the author? Libraries will probably not have current contact details for authors after five years, and the authors may forget.

From the non-author point of view, embargoes can have an adverse effect on the dissemination of research, impacting for example on individual scholars who would like to access the thesis to inform their own work. How is this overcome?

Policies on embargoes currently operate on a university-by-university level: perhaps national guidance on policy for creative writing theses is required.

Ethics

Creative writing theses that involve nonfiction accounts of living subjects raise specific issues. One participant described the case of a PhD supervisee writing a memoir which included anecdotes gathered from family funerals and other events. In the social sciences, the default assumption is that all identities are anonymised before thesis submission, but in the case of creative nonfiction (as with journalism) full anonymity is not always possible or desirable. This can create difficulties with ethics committees, because the projects do not fit into standard models built with other disciplines in mind. A different form and different process is required, but how will this be brought about?

Clearly, there are many complex issues and questions to be addressed:

  • Who should be the gatekeepers for creative writing theses: libraries and institutional repositories, or the authors?
  • How should this gatekeeping be managed so that creative writing theses are available for research, but not so publicly available that they hinder trade publication?
  • How are creative writing PhD students being trained in writing abstracts and metadata, using repositories, and copyright? Who should deliver this training?

All of the issues boil down to the fact that creative writing is a very distinct discipline with unique requirements. As Greenberg stated: ‘Creative writing as a relatively new discipline has had to constantly negotiate its way through the academic system in order to be recognised.’ These issues are highlighted anew by the mandate to move towards Open Access. Creative writing academics present at the meeting agreed that now is the time to address them.

Practical Guidance for Creative Writing PhD Theses

One immediate practical outcome of the meeting is the launch of a new one-page document, backed by NAWE and the British Library, which gives staff and students advice on how to submit the electronic copy of their PhD thesis. The document has a Creative Commons license, allowing universities and other organisations to share it freely; you can download it using the link below.

NAWE-BL-General-Guidelines (pdf)

The Project would like to extend its thanks to all attendees of the meeting, in particular Dr Susan Greenberg for organising it, and Dr Ros Barber for creating the initial draft of the guidelines document.

Musical Scholarship and the Future of Academic Publishing

This guest post was written by Richard Lewis (Goldsmiths) of the AHRC Transforming Musicology project. It outlines a workshop on ‘Musical Scholarship and the Future of Academic Publishing’, sponsored by The Academic Book of the Future project, and held at Goldsmiths, University of London on Monday 11th April 2016. This post first appeared on the Transforming Musicology project website, and is reproduced here with kind permission from Richard.

A couple of months ago Marilyn Deegan, emeritus professor at King’s College London, approached Tim Crawford to ask him to put together a workshop as part of the Academic Book of the Future project (2014-2016, PI: Samantha Rayner). The project is a partnership between King’s and the UCL Centre for Publishing, and is funded by the British Library and the AHRC. The project has included a lot of work with practising scholars, but Marilyn was keen to engage the musical community, so we accepted her invitation.

The workshop was held at Goldsmiths on Monday 11 April and attracted just under 40 delegates. The programme comprised six invited presentations and a roundtable discussion with a mixture of scholars, musicians, and library professionals. This post is a report on the proceedings of the day.

The day began with an introduction to The Academic Book of the Future project from Rebecca Lyons (UCL) who is the research associate on the project. Bex described the background of the project and some of its activities so far, including the inaugural Academic Book Week in November 2015. She described how much of their early work has been involved with forming a community coalition by consulting with publishers, academics, and other stakeholders in the academic book, and attempting to address fundamental questions around the nature of academic publishing. Bex outlined some of their future plans, which include an online modular publication, called a BOOC, which will gather together content from a variety of sources including audio, essays, blog posts, and Storifies.

Mark Everist’s (Southampton) presentation was pitched as a warning against the apparent benefits of Open Access publishing. Mark spoke from three different perspectives: as president of the RMA (Royal Musical Association), as head of a research-intensive music department, and as a publishing academic. He worked through some of the hypothetical implications for the RMA of going fully Open Access. The RMA runs three publications – the Journal of the RMA, the RMA Research Chronicle, and a monograph series – all published with Routledge. Mark described some of the benefits of digital documents over paper, including convenience of access and searchability. But he argued that online publication of scholarship does not involve any less work than paper publication: authoring and review are carried out by academics as part of their contractual responsibilities, but copy editing (including fact checking and typesetting), maintenance and sustainability, and promotion and marketing are carried out by professional publishers, and these cost money. Mark argued that if scholarship were to go online and be Open Access, none of these processes could be avoided, and so the costs would still need to be covered. Mark summarised by arguing that the biggest question around going Open Access is: who takes the risk? Currently it’s a commercial publisher, but if the RMA were to move completely to Open Access it would have to absorb that risk itself.

Following his presentation, Mark answered questions on alternative business models for publishing, including that of the Open Library of Humanities, which is funded by the Mellon Foundation and by library subscriptions. Another question concerned the practice in science publishing of requiring authors to produce so-called camera-ready copy using a template. Mark responded that science articles are normally short, so proof-reading and fact-checking are much more tractable for authors or reviewers, whereas humanities articles tend to be much longer, so these copy-editing tasks are better handled by specialist professionals. Mark also noted his belief that, because of the relative ease of science publication, the drive for Open Access is coming from the sciences.

Tim Crawford and I gave a presentation of our work on the plans for the final publication of the Transforming Musicology project. We described our original plan to publish a book which collects together the work of the project and which has a significant online component, but said that now we are intending instead to produce a fully-online publication with a possible future print version. We described how our work so far on the project has successfully led to the creation of a number of Linked Data resources which will feed directly into the publication. We reported that we now have a good idea of the expected content of the publication. Now we are in the position where we need to make plans about the required information architecture for the publication. It needs an authoring and editing strategy which will result in high quality hypertext. We are looking for a publication platform that is based on sound Web architecture principles. We hope to be able to include features such as embedded – but also interactive – music notation examples; Tim gave a demonstration of some of the work we have done on providing such features for lute tablature. We described our intention to curate dynamic reading paths through the publication’s content. While we are expecting authors to produce essentially prose chapters, we intend to edit them into re-combinable chunks, each bearing semantics describing how it may be related to other content chunks from the publication. As editors, we will then define a number of reading paths that address the needs and interests of different audiences, such as:

  • A research findings report on Transforming Musicology
  • A handbook on digital musicology methods
  • Reading paths on particular digital methods (MIR, Linked Data)
  • A reviews and comments reading path
  • Authorial/editorial reading path (i.e. conventional book)

We described our intention to make use of the affordances of the Web to help widen access to our research, in particular by allowing commenting, custom citation, and reader contributions (especially contributing to our data sets such as leitmotive identification or optical music recognition correction). Similarly, we outlined our intentions to use the publication as an access point for researchers who may want to make use of our data sets in their own research.
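
The chunk-and-reading-path architecture described above can be sketched in a few lines. This is only an illustrative model with invented names and placeholder content, not the project's actual implementation:

```python
# Prose is edited into re-combinable chunks, each carrying semantics
# (here, a set of topic tags); editors then define ordered reading paths
# for different audiences, or derive them from the chunks' semantics.

chunks = {
    "c1": {"text": "Project findings overview...", "topics": {"findings"}},
    "c2": {"text": "Linked Data methods...", "topics": {"methods", "linked-data"}},
    "c3": {"text": "MIR methods...", "topics": {"methods", "mir"}},
    "c4": {"text": "Findings from the MIR case study...", "topics": {"findings", "mir"}},
}

# An editor-defined reading path is just an ordered selection of chunk ids.
reading_paths = {
    "findings-report": ["c1", "c4"],
    "methods-handbook": ["c2", "c3"],
}

def assemble(path_name: str) -> list[str]:
    """Return the chunk texts for a named reading path, in order."""
    return [chunks[cid]["text"] for cid in reading_paths[path_name]]

def path_by_topic(topic: str) -> list[str]:
    """Derive a reading path dynamically from the chunks' semantics."""
    return [cid for cid, c in sorted(chunks.items()) if topic in c["topics"]]

print(assemble("methods-handbook"))  # ['Linked Data methods...', 'MIR methods...']
print(path_by_topic("mir"))          # ['c3', 'c4']
```

The same chunk (c4 above) can appear in both an editor-curated path and a dynamically derived one, which is the re-combinability the project is aiming for.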

John Baily (Goldsmiths) began his presentation by mentioning his recently published book, War, Exile, and the Music of Afghanistan (Ashgate), which includes a DVD of films which John described as integral to the text, going on to argue for the complementary properties of text, sound, and video. He gave an account of his extensive use of film-making technology over the course of his career as an ethnographer and observational film-maker, arguing that technological developments have had a significant impact on the practice of ethnography. Following John’s presentation there was some discussion on the relation of the DVD to the text of his book and whether a digital publication may have provided richer opportunities for integrating the two. John partly answered this by demonstrating his online Afghan rubab tutor which mixes text, music notation, and three-camera videos.

Laurent Pugin (RISM) spoke about the initial meeting of a new NEH-funded project, Music Scholarship Online (MuSO). The project may become part of ARC (which backs other online projects including NINES and 18thconnect) and make use of the Collex (COLLections and EXhibits) Semantic Web archive management system. Laurent described several other tools published by ARC including TypeWright for correcting optical recognition output and BigDIVA for making visualisations from large data sets. Laurent argued that it’s not yet clear how MuSO may fit into the Collex system as that system’s affordances for text and metadata may not serve musical content so well. He gave the example of Collex’s full-text search system arguing that it wouldn’t be applicable for searching in music notation collections. Similarly, he argued that the FRBR concepts used in Collex are not necessarily suitable for music sources. Laurent went on to describe RISM’s intention to work with the other so-called “R projects”: RILM, RIdIM, and RIPM to build bibliographic research tools for music scholars. He demonstrated how the traditional RISM and RILM referencing schemes may be updated for online usage. For RISM, this is now largely completed in the shape of their Linked Data interface. Laurent reported that RISM and RILM are in active negotiation over improving their inter-resource hyperlinking.

Yun Fan/樊昀 (RILM) reported on some early-stage work at RILM in producing a Semantic Web ontology for musical concepts to help them develop their database of music literature. As motivation for their work, Yun gave the example of being able to answer a natural language query about music: who composed the music for Star Wars? She showed how the search engine Google is already able to deal with this, and argued that Google is effectively using something like an ontology to make such queries possible. She went on to describe some of the key properties of Semantic Web ontologies and the benefits they can bring. She mentioned Yves Raimond’s Music Ontology, arguing that it was too focused on recorded music production to be suitable for RILM’s needs. She described how their increasing internationalisation is requiring that they update their indexing and cross-search to allow them to relate concepts in different languages. They are hoping that developing an ontology will assist in this aim. Yun gave some examples of RILM’s existing hierarchical subject headings, demonstrating how they are very biased towards European art music. She spoke about some of the difficulties in formalising musical concepts, giving the example of a richly detailed encyclopedia definition of gospel music and arguing that it is difficult to pick out the precise concepts embedded in such prose knowledge. Following her presentation, there was discussion about the importance of re-use in ontology design: where suitable concepts already exist in other ontologies it’s best practice to point to them rather than replace them. There was also discussion about how RILM, which is a closed access resource, will actually make its ontology public.

Zoltán Kőmíves’s (Tido Music) presentation centred on Tido Music’s vision for the future of music publishing. He argued that print music publishing is not going to provide value in the long term, and outlined their goals to create enriched and connected musical objects – musical objects as “living creatures”. He showed some examples of the iOS software they are developing for displaying musical scores in a dynamic and responsive way and for integrating extra-musical content into scores. Zoltán argued that academic and what he called “trade” publication needs are quite different (although individuals can be, and often are, members of both audiences). He gave the example of “preserving uncertainty”, describing how academic audiences often want to know about the uncertainties in musical sources, whereas trade audiences (especially performers) instead want to be presented with a single editorial selection in such cases. As illustrations of this he showed the Online Chopin Variorum Edition and the Lost Voices project. Following his presentation, Zoltán answered questions on the future publication strategy of Tido, explaining that their next publications will be piano works for beginners. Discussion also covered the current restriction of Tido’s software to iOS and how this is not good for long-term sustainability.

Following the presentations there was a round table discussion chaired by Simon McVeigh (Goldsmiths). The speakers were joined by: Paul Cassidy, Sarah Westwood, and James Bulley (all PhD students in Music), Jonathan Clinch (Research Associate at Cambridge), and Richard Chesser (head of music at the British Library).

Following introductions, Richard Chesser began the discussion, arguing that everything that had been presented during the day was vital to the work of the British Library. He mentioned that digital publications already come under the rules of legal deposit and questioned how the restrictions of legal deposit will interact with the rights afforded to users of resources that are also open access. He also argued that legal deposit may help to address some of the sustainability issues of digital resources.

Mark Everist next raised a topic that had been introduced earlier – prestige and open access publication – suggesting it is going to be something of an obstacle or milestone. He argued that most academics know the value of a particular journal or publisher and will want to profit from that as much as possible, and that therefore open access publications need to retain the brand of the publisher. Tim Crawford mentioned that prestige and quality are not necessarily correlated with impact, pointing out that it’s possible to perform well under various publication metrics – especially on the Web – without necessarily producing high quality work. Mark argued that impact factors are currently more significant in the sciences than they are in the humanities, but that a move to online publication may alter this.

Laurent Pugin described the patchy uptake of digital techniques in publishing and libraries. He noted how libraries are now often digitising books that were actually digitally printed and argued that it would be better for libraries to be allowed to archive the original digital versions. Richard Chesser mentioned that under legal deposit legislation libraries are entitled to the best version available.

A member of the audience asked how people make use of Tido’s scores – in particular, whether they know of performers playing from tablet computers, and whether their software is useful for ensemble performance. Zoltán Kőmíves argued that print music publications may still have their place in performance situations, but also mentioned possible future display technologies that may be more suitable for performance. Tim Crawford and Jonathan Clinch discussed potential problems such as computers crashing or malfunctioning during a performance, or systems where the conductor dictates the page turns. Zoltán argued that a potentially useful feature would be to allow annotations to be shared between performers.

Another question from the audience addressed the topic of reading habits and what the reading of the future may be like. One member of the audience responded that Amazon have done some research based on the data they can retrieve from Kindle devices about how people read their eBooks, including where they start and stop. Among Amazon’s findings was that non-academics read books more closely.

From the day’s discussions it seems that there is a strong drive for increasing open access, but there are numerous serious issues that need to be resolved before it can become more widespread. It also seems that digital publication (whether open or closed) is not likely to replace print entirely in the near future, especially for music publication, but innovations will continue to push the boundaries.

The academic book in Chile: present and future contexts

Today’s guest blog post considers the academic book from a Chilean perspective. The author, Manuel Loyola, is an academic and scientific editor at the Universidad de Santiago de Chile and director of Ariadna Editions (open access) http://ariadnaediciones.cl/, as well as editor of the peer-reviewed journal Izquierdas: http://www.izquierdas.cl/.

Manuel Loyola

According to ISBN records, the academic book in Chile has accounted for only a small share of the titles published each year over the last decade. In fact, the books published by all the universities of the country (of which there are 57 in total) represent 11% of the roughly 5,500 books published here each year. In addition to university publications, there are also many small and medium publishing houses focused on academic content, which may increase the figure for academic books from 11% to around 20%.

Behind these numbers, the Chilean academic book is subject to different and usually problematic realities. For example, we are not talking about a relatively homogeneous production in terms of national geography: the capital, Santiago, is responsible for more than 60% of the output. Additionally, within this geographical area there are just a few higher education institutions that concentrate most of the production, especially the University of Chile, Pontifical Catholic University, and the University of Santiago – all in Santiago.

The distribution and use of academic books also present some interesting considerations. Academic books have low circulation: they usually lack their own distribution channels, because no proper business model has been defined around formative and educational goals. Often they depend on the mechanisms and strategies of private firms that are generally not interested in these kinds of books. These issues, combined with a lack of collaboration and common strategies, hinder the already precarious life of academic publishing.

Why is the Chilean academic book (particularly that published by universities) in this situation? I believe the answer lies in the lack of effective and coherent publishing policies. University authorities, like those of the country’s other scientific institutions, know little or nothing about publishing activity. Perhaps this would not be a problem if these authorities promoted the development and growth of this area, but unfortunately this is not the case. Academic publishing work is in the hands of people with good intentions who may nevertheless be inexperienced. This causes frustration.

However, the goal of this post is not to offer a dramatic and pessimistic forecast. Despite what I have stated above, our field offers many opportunities to improve the performance of the local academic book. In the short run, we must take advantage of the state’s importance in providing human and financial resources for academic production. Related to this is the growing support for open access publishing, which provides easier access to research. There have also been advances in scientific publishing and richer discussions among those working in this field, establishing relationships with foreign academic and publishing organisations and with the scientific community. Finally, the continued development of academic journals offers hope for a similarly favourable change with books, showing the potential for improvement.


Manuel Loyola, PhD

Scientific editor

Universidad de Santiago de Chile

 

#AcBookWeek: The Academic Book of the Future: Evolution or Revolution?

 

This post reflects on one of the events that took place during Academic Book Week in Cambridge: a colloquium that brought together the perspectives of booksellers, librarians, and academics on where the future of the academic book lies.

During the week of the 9th November the CMT convened a one-day colloquium entitled ‘The Academic Book of the Future: Evolution or Revolution?’ The colloquium was part of Cambridge’s contribution to a host of events being held across the UK in celebration of the first ever Academic Book Week, which is itself an offshoot of the AHRC-funded ‘Academic Book of the Future’ project. The aim of that project is both to raise awareness of academic publishing and to explore how it might change in response to new digital technologies and changing academic cultures. We were delighted to have Samantha Rayner, the PI on the project, introduce the event.

 

The first session kicked off with a talk from Rupert Gatti, Fellow in Economics at Trinity and one of the founders of Open Book Publishers, explaining ‘Why the Future is Open Access’. Gatti contrasted OA publishing with ‘legacy’ publishing and emphasized the different orders of magnitude of the audience for these models. Academic books published through the usual channels were, he contended, failing to reach 99% of their potential audience. They were also failing to take account of the possibilities opened up by digital media for embedding research materials and for turning the book  into an ongoing project rather than a finished article. The second speaker in this session, Alison Wood, a Mellon/Newton postdoctoral fellow at the Centre for Research in the Arts, Social Sciences and Humanities in Cambridge, reflected on the relationship between academic publishing and the changing institutional structures of the university. She urged us to look for historical precedents to help us cope with current upheavals, and called in the historian Anthony Grafton to testify to the importance of intellectual communities and institutions to the seemingly solitary labour of the academic monograph. In Wood’s analysis, we need to draw upon our knowledge of the changing shape of the university as a collective (far more postdocs, far more adjunct teachers, far more globalization) when thinking about how academic publishing might develop. We can expect scholarly books of the future to take some unusual forms in response to shifting material circumstances.

 

The day was punctuated by a series of ‘views’ from different Cambridge institutions. The first was offered by David Robinson, the Managing Director of Heffers, which has been selling books in Cambridge since 1876. Robinson focused on the extraordinary difference between his earlier job, in a university campus bookshop, and his current role. In the former post, in the heyday of the course textbook, before the demise of the net book agreement and the rise of the internet, selling books had felt a little like ‘playing shops’. Now that the textbook era is over, bookshops are less tightly bound into the warp and weft of universities, and academic books are becoming less and less visible on the shelves even of a bookshop like Heffers. Robinson pointed to the ‘crossover’ book, the academic book that achieves a large readership, as a crucial category in the current bookselling landscape. He cited Thomas Piketty’s Capital as a recent example of the genre.

 

Our second panel was devoted to thinking about the ‘Academic Book of the Near-Future’, and our speakers offered a series of reflections on the current state of play. The first speaker, Samantha Rayner (Senior Lecturer in the Department of Information Studies at UCL and ‘Academic Book of the Future’ PI) described the progress of the project to date. The first phase had involved starting conversations with numerous stakeholders at every point in the production process, to understand the nature of the systems in which the academic book is enmeshed. Rayner called attention to the volatility of the situation in which the project is unfolding—every new development in government higher education policy forces a rethink of possible futures. She also stressed the need for early-career scholars to receive training in the variety of publishing avenues that are open to them. Richard Fisher, former Managing Director of Academic Publishing at CUP, took up the baton with a talk about the ‘invisibles’ of traditional academic publishing—all the work that goes into making the reputation of an academic publisher that never gets seen by authors and readers. Those invisibles had in the past created certain kinds of stability—‘lines’ that libraries would need to subscribe to, periodicals whose names would be a byword for quality, reliable metadata for hard-pressed cataloguers. And the nature of these existing arrangements is having a powerful effect on the ways in which digital technology is (or is not) being adopted by particular publishing sectors. Peter Mandler, Professor of Modern Cultural History at Cambridge and President of the Royal Historical Society, began by singing the praises of the academic monograph; he saw considerable opportunities for evolutionary rather than revolutionary change in this format thanks to the move to digital. The threat to the monograph came, in his view, mostly from government-induced productivism. 
The scramble to publish for the REF as it is currently configured leads to a lower-quality product, and threatens to marginalize the book altogether. Danny Kingsley, Head of Scholarly Communication at Cambridge, discussed the failure of the academic community to embrace Open Access, and its unpreparedness for the imposition of OA by governments. She outlined Australian Open Access models that had given academic work a far greater impact, putting an end to the world in which intellectual prestige stood in inverse proportion to numbers of readers.

 

In the questions following this panel, some anxieties were aired about the extent to which the digital transition might encourage academic publishers to further devolve labour and costs to their authors, and to weaken processes of peer review. How can we ensure that any innovations bring us the best of academic life, rather than taking us on a race to the bottom? There was also discussion about the difficulties of tailoring Open Access to humanities disciplines that rely on images, given the current costs of digital licences; it was suggested that the use of lower-resolution (72 dpi) images might offer a way round the problem, but there was some vociferous dissent from this view.

 

After lunch, the University Librarian Anne Jarvis offered us ‘The View from the UL’. The remit of the UL, to safeguard the book’s past for future generations and to make it available to researchers, remains unchanged. But a great deal is changing. Readers no longer perceive the boundaries between different kinds of content (books, articles, websites), and the library is less concerned with drawing in readers and more concerned with pushing out content. The curation and preservation of digital materials, including materials that fall under the rules for legal deposit, has created a set of new challenges. Meanwhile the UL has been increasingly concerned about working with academics in order to understand how they are using old and new technologies in their day-to-day lives, and to ensure that it provides a service tailored to real rather than imagined needs.

 

The third panel session of the day brought together four academics from different humanities disciplines to discuss the publishing landscape as they perceive it. Abigail Brundin, from the Department of Italian, insisted that the future is collaborative; collaboration offers an immediate way out of the often closed-off worlds of our specialisms, fosters interdisciplinary exchanges and allows access to serious funding opportunities. She took issue with any idea that the initiative in pioneering new forms of academic writing should come from early-career academics; it is those who are safely tenured who have a responsibility to blaze a trail. Matthew Champion, a Research Fellow in History, drew attention to the care that has traditionally gone into the production of academic books—care over the quality of the finished product and over its physical appearance, down to details such as the font it is printed in. He wondered whether the move to digital and to a higher speed of publication would entail a kind of flattening of perspectives and an increased sense of alienation on all sides. Should we care how many people see our work? Champion thought not: what we want is not 50,000 careless clicks but the sustained attention of deeply-engaged readers. Our third speaker, Liana Chua reported on the situation in Anthropology, where conservative publishing imperatives are being challenged by digital communications. Anthropologists usually write about living subjects, and increasingly those subjects are able to answer back. This means that the ‘finished-product’ model of the book is starting to die off, with more fluid forms taking its place. Such forms (including film-making) are also better-suited to capturing the experience of fieldwork, which the book does a great deal to efface. Finally Orietta da Rold, from the Faculty of English, questioned the dominance of the book in academia. 
Digital projects that she had been involved in had been obliged, absurdly, to dress themselves up as books, with introductions and prefaces and conclusions. And collections of articles that might better be published as individual interventions were obliged to repackage themselves as books. The oppressive desire for the ‘big thing’ obscures the important work that is being done in a plethora of forms.

 

In discussion it was suggested that the book form was a valuable identifier, allowing unusual objects like CD-ROMs or databases to be recognized and catalogued and found (the book, in this view, provides the metadata or the paratextual information that gives an artefact a place in the world). There was perhaps a division between those who saw the book as giving ideas a compelling physical presence and those who were worried about the versions of authority at stake in the monograph. The monograph model perhaps discourages people from talking back; this will inevitably come under pressure in a more ‘oral’ digital economy.

 

Our final ‘view’ of the day was ‘The View from Plurabelle Books’, offered by Michael Cahn but read in his absence by Gemma Savage. Plurabelle is a second-hand academic bookseller based in Cambridge; it was founded in 1996. Cahn’s talk focused on a different kind of ‘future’ of the academic book—the future in which the book ages and its owner dies. The books that may have marked out a mental universe need to be treated with appropriate respect and offered the chance of a new lease of life. Sometimes they carry with them a rich sense of their past histories.

 

A concluding discussion drew out several themes from the day:

 

(1) A particular concern had been where the impetus for change would and should come from—from individual academics, from funding bodies, or from government. The conservatism and two-sizes-fit-almost-all nature of the REF act as a brake on innovation and experiment, although the rising significance of ‘impact’ might allow these to re-enter by the back door. The fact that North America has remained impervious to many of the pressures that are affecting British academics was noted with interest.

 

(2) The pros and cons of peer review were a subject of discussion—was it the key to scholarly integrity or a highly unreliable form of gatekeeping that would naturally wither in an online environment?

 

(3) Questions of value were raised—what would determine academic value in an Open Access world? The day’s discussions had veered between notions of value/prestige that were based on numbers of readers and those that were not. Where is the appropriate balance?

 

(4) A broad historical and technological question: are we entering a phase of perpetual change, or do we expect that the digital domain will eventually slow down, developing protocols that seem as secure as those that we used to have for print? (And would that be a good or a bad thing?) Just as paper had to be engineered over centuries in order to become a reliable communications medium (or the basis for numerous media), so too the digital domain may take a long time to find any kind of settled form. It was also pointed out that the academic monograph as we know it today was a comparatively short-lived, post-World War II phenomenon.

 

(5) As befits a conference held under the aegis of the Centre for Material Texts, the physical form of the book was a matter of concern. Can lengthy digital books be made a pleasure to read? Can the book online ever substitute for the ‘theatres of memory’ that we have built in print? Is the very restrictiveness of print a source of strength?

 

(6) In the meantime, the one thing that all of the participants could agree on was that we will need to learn to live with (sometimes extreme) diversity.

 

With many thanks to our sponsors, Cambridge University Press, the Academic Book of the Future Project, and the Centre for Material Texts. The lead organizer of the day was Jason Scott-Warren (jes1003@cam.ac.uk); he was very grateful for the copious assistance of Sam Rayner, Rebecca Lyons, and Richard Fisher; for the help of the staff at the Pitt Building, where the colloquium took place; and for the contributions of all of our speakers.

 

#AcBookWeek: The Manchester Great Debate

On Wednesday 11th November, the John Rylands Library, Manchester, played host to the Manchester Great Debate, a panel discussion dedicated to addressing the future of the academic monograph. The event was one of over sixty organised during Academic Book Week to celebrate the diversity, innovation, and influence of the academic monograph. While opinions remained varied, with panel representatives from both sides of the fence, the discussion always seemed to return to a few key thematic strands. How are people using books? How are people encountering books? And what future lies ahead for the academic book? Melek Karatas, Lydia Leech, and Paul Clarke (University of Manchester) report here on the event.

The reading room of the John Rylands Library


It was in the stunning Christie Room of the Rylands Library that Dr. Guyda Armstrong, Lead Academic for Digital Humanities at the University of Manchester, welcomed the audience of publishers, academics, librarians, early career researchers, and students with a shared concern for the future of how the humanities might be produced, read, and preserved over the coming years. Five panellists were invited to present their case to the group before the floor was opened and the debate got into full swing.

The session was chaired by Professor Marilyn Deegan, Co-Investigator of The Academic Book of the Future project. She began by outlining the project’s main objectives and its future activities, details of which can be found at www.academicbookfuture.org. Before commencing with the presentations she asked the audience simply to reflect on what they conceive of when trying to define the book. It was this question, with its rather complex and capacious ramifications, that formed the fundamental core of the Manchester Great Debate.

The first of the panellists to present was Frances Pinter, CEO of Manchester University Press. Since print runs of academic books have decreased in volume and their prices increased beyond inflation, she firmly believes that the future of the academic monograph will be governed by the principles of Open Access (OA). She contends that although the journey to OA will be difficult, drawing on the Crossick report to highlight such obstacles as the lack of a skilled workforce and the high cost of publication, it is impossible to deny the potential of the digital age to advance knowledge and maximise discovery. She identified Knowledge Unlatched, a not-for-profit organisation dedicated to assisting libraries to co-ordinate the purchase of monographs, as a pioneer in overcoming some of these obstacles. Under the scheme, the basic cost of publication is shared, and the works are made readily available as a PDF with an OA license via OAPEN. An initial pilot project saw the publication of twenty-eight new books at the cost of just $1,120 per library. The digital copies of these books were also downloaded in a staggering 167 countries worldwide, a true testament to the benefits of the OA monograph.

Emma Brennan, Editorial Director and Commissioning Editor at Manchester University Press, followed with a convincing argument against the financial constraints of contemporary academic book publishing. The system, she claims, is fundamentally broken, favouring short-form sciences over the humanities. A key to this problem lies in the steep rises in purchase prices over recent years, with the result that a monograph, which once sold for around £50 in a run of five hundred copies, is now sold for upwards of £70 and often on a print-on-demand basis. However, more crucial still is the disparity between authorial costs and corporate profits. After all, typical profit margins for article processing charges (APCs) reached an astonishing 37% in 2014, undeniably privileging shareholders over authors. Under this current system, university presses are only ever able to operate on a not-for-profit basis whereby surplus funds must necessarily be reinvested to cover the costs of future APCs. Such a fragile structure can only continue in the short term, and so the need for a drastic upheaval is undeniable.

Next to present their case was Sandra Bracegirdle, Head of Collection Management at the University of Manchester Library. Through a variety of diagrams she highlighted a number of curious trends in the reading habits of library users. A particularly interesting point of discussion was the usability of electronic and print resources amongst student readers: those who preferred the former valued the mobility of text and the equality of access, while readers of the latter valued the readability of the physical text. Interestingly, 50% of the students questioned said that they were more likely to read a book if it were available digitally, suggesting that “access trumps readability”. The decreased popularity of physical books is reflected further still by the fact that 27% of the books held within the library have not been borrowed for some ten years. She went on to suggest that the increased popularity of electronic formats might be the result of a change in the way people use and encounter information, arguing that different book forms engender different cognitive styles. While she did not appear to have a strong predisposition one way or the other, she did point out the “emotional presence” of a physical book, concluding with a line from Cicero: “a room without books is like a body without a soul”.

She was followed by Dr. Francesca Billiani, Director of the Centre for Interdisciplinary Research in Arts and Languages (CIDRAL) at the University of Manchester. She contends that the materialization of knowledge has changed beyond recognition in recent years, and that the academic monograph must adapt accordingly. After all, the book is no longer a stand-alone piece of writing: it is firmly rooted within a digital “galaxy of artefacts” comprising blog posts, photos, and videos. Many readers no longer rely exclusively on the academic book itself in their reading of a subject, nor will they necessarily read the work in its entirety, choosing instead to read what they consider to be the most relevant fragments. Academics need to embrace these changes in their future writing by composing their monographs in a way that accommodates the new methods of knowledge dissemination. Yet, at the same time, she remained mindful that the monograph of the future must also retain its academic rigour and avoid falling into the trap of eclecticism.

The last of the speakers to present was Dr. George Walkden, Lecturer in English Linguistics at the University of Manchester. A self-proclaimed Open Access activist, he claims that academic books should be free not only in terms of cost but also in terms of what readers can do with them. He lamented that, in such a climate of exhaustive copyright limitations, individuals are all too readily branded as pirates for attempting to disseminate knowledge publicly. Although he remains sceptical of what these individuals share with the “cannon-firing, hook-toting, parrot-bearing sailors of the seven seas”, he believes that such labelling elucidates the many issues that have constrained both historic and modern publication practices. In the first instance, publishers should value the transmission of knowledge over their own profit. But, perhaps more crucially, it must be acknowledged that the copyright of a work should remain with its author. He fundamentally contends that academics write predominantly to increase societal wealth and readership, a mission that can only really be achieved through the whole-hearted acceptance of Open Access.

Once all of the panellists had concluded their presentations, the floor was opened to the audience and a stimulating debate ensued. One of the most contested issues to arise from the discussion was the ownership of copyright. Many institutions actually hold the copyright of works produced by academics whose research they fund, although they do not always choose to exert this right. Walkden questioned to what extent this practice effectively safeguards the interests of academics, particularly since it is oftentimes too costly for them to even justify challenging this. He argued that academics should be granted the power to make decisions concerning their own intellectual property, particularly regarding the OA nature of their work.

Copyright issues were complicated further by a discussion of Art History monographs, particularly with regards to third party content. The case of Art History is particularly curious since, as Brennan highlighted, print runs have continued to remain reasonably high. After all, many art historians tend to opt for physical books over their digital counterparts since problems can often arise with their visual reproductions if, for instance, the screen is not calibrated to the original settings used in its creation. The discussion then turned to the resurgence of a material culture, whereby consumers are returning to physical artefacts. The increased popularity of vinyl records in today’s digital music society was used to illustrate this point. It was nevertheless argued that such a comparison was counterproductive since consumers ultimately have the freedom to decide the medium through which they access music but do not always have this choice with regards to books. Perhaps the academic book of the future will permit such freedom.

A member of the audience then identified the notable absence of the student in such a discussion. Academics, both on the panel and in the audience, expressed concerns that students were able to access information too easily by simply using the key word search function to find answers. Many felt that the somewhat lengthy process of physically searching out answers was more valuable to developing their research skills. Students within the audience said that while they do like the speed with which they are able to process information, they also value the experience of going to bookshelves, possibly finding other items they had not initially set out to obtain. An interesting discussion followed on whether technology might ever be able to replicate the experience of a physical library, and to what extent learning can be productive within a digital environment.

While the future of the academic book remains unclear, certain issues materialise as central topics of debate. Concerns for copyright, visual reproductions, and third party content, for instance, must necessarily form a basis of this future discussion. But more so than this, authors must begin to write within the context of a rapidly emergent digital world by ensuring that their academic outputs engage precisely with new technological formats and platforms. The opening of the book has only just begun, and perhaps it is only through investment and interdisciplinary collaboration that the academic monograph will have a future.

 

 

Melek Karatas, Lydia Leech, and Paul Clarke are postgraduate students in medieval and early modern languages at the University of Manchester, with research interests in manuscript and print cultures of the literary book.

Three hundred years of piracy: why academic books should be free

This is a repost from George Walkden’s personal blog about Open Access in the context of academic linguistics. The original post can be found here.

I think academic books should be free.

It’s not a radically new proposal, but I’d like to clarify what I mean by “free”. First, there’s the financial sense: books should be free in that there should be no cost to either the author or the reader. Secondly, and perhaps more importantly, books should be free in terms of what the reader can do with them: copying, sharing, creating derivative works, and more.

I’m not going to go down the murky road of what exactly a modern academic book actually is. I’m just going to take it for granted that there is such a thing, and that it will continue to have a niche in the scholarly ecosystem of the future, even if it doesn’t have the pre-eminent role it has at present in some disciplines, or even the same form and structure. (For instance, I’d be pretty keen to see an academic monograph written in Choose Your Own Adventure style.)

Another thing I’ll be assuming is that technology does change things, even if we’d rather it didn’t. If you’re reluctant to accept that, I’d like to point you to what happened with yellow pages. Or take a look at the University of Manchester’s premier catering space, Christie’s Bistro. Formerly a science library, this imposing chamber retains its bookshelves, which are all packed full of books that have no conceivable use to man or beast: multi-volume indexes of mid-20th-century scientific periodicals, for instance. In this day and age, print is still very much alive, but at the same time the effects of technological change aren’t hard to spot.

With those assumptions in place, then, let’s move on to thinking about the academic book of the future. To do that I’m going to start with the academic book of the past, so let’s rewind time by three centuries. In 1710, the world’s first copyright law, the UK’s Statute of Anne, was passed. This law was a direct consequence of the introduction and spread of the printing press, and the businesses that had sprung up around it. Publishers such as the rapacious Andrew Millar had taken to seizing on texts that, even now, could hardly be argued to be anything other than public-domain: for instance, Livy’s History of Rome. (Titus Livius died in AD 17.) What’s more, they then claimed an exclusive right to publish such texts – a right that extended into perpetuity. This perpetual version of copyright was based on the philosopher John Locke’s theory of property as a natural right. Locke himself was fiercely opposed to this interpretation of his work, but that didn’t dissuade the publishers, who saw the opportunity to make a quick buck (as well as a slow one).

Fortunately, the idea of perpetual copyright was defeated in the courts in 1774, in the landmark Donaldson v. Becket case. It has reared its ugly head since, of course: when the US was preparing its 1998 Copyright Term Extension Act, for instance, it was mentioned that the musician Sonny Bono believed that copyright should last forever (see also this execrable New York Times op-ed). What’s interesting is that the original perpetual-copyright claim was challenged at the time by the Edinburgh-based publisher Alexander Donaldson – and, for his efforts to make knowledge more widely available, Donaldson was labelled a “pirate”. The term has survived, and is now used – for instance – to describe those scientists who try to access paywalled research articles using the hashtag #ICanHazPDF, and those scientists who help them. What these people have in common with the cannon-firing, hook-toting, parrot-bearing sailors of the seven seas is not particularly clear, but it’s clearly high time that the term was reclaimed.

If you’re interested in the 18th century and its copyright trials and tribulations, I’d encourage you to take a look at Yamada Shōji’s excellent 2012 book “Pirate” Publishing: The Battle over Perpetual Copyright in Eighteenth-Century Britain, which, appropriately, is available online under a CC-BY-NC-ND license. And lest you think that this is a Whiggish interpretation of history, let me point out that contemporaries saw things in exactly the same way. The political economist Adam Smith, in his seminal work The Wealth of Nations, pointed out that, before the invention of printing, the goal of an academic writer was simply “communicating to other people the curious and useful knowledge which he had acquired himself”. Printing changed things.

Let’s come back to the present. In the present, academic authors make almost nothing from their work: royalties from monographs are a pittance. Meanwhile, it’s an economic truism that each electronic copy made of a work – at a cost of essentially nothing – increases total societal wealth. (This is one of the reasons that intellectual property is not real property.) What academic authors want is readership and recognition: they aren’t after the money, and don’t, for the most part, care about sales. The bizarre part is that they’re punished for trying to increase wealth and readership by the very organizations that supposedly exist to help them do exactly that. Elsevier, for instance, filed a complaint earlier this year against the knowledge-sharing site Sci-Hub.org, demanding compensation. It beggars belief that they have the audacity to do this, especially given their insane 37% profit margin in 2014.

So we can see that publishers, when profit-motivated, have interests that run counter to those of academics themselves. And, when we look at the actions of eighteenth-century publishers such as Millar, we can see that this is nothing new. Where does this leave us for the future? Here’s a brief sketch:

  • Publishers should be mission-oriented, and that mission should be the transmission of knowledge.
  • Funding should come neither from authors nor from readers. There are a great many business models compatible with this.
  • Copyright should remain with the author: it’s the only way of preventing exploitation. In practice, this means a CC-BY license, or something like it. Certain humanities academics claim that CC-BY licenses allow plagiarism. This is nonsense: CC-BY makes attribution a condition of the license, so passing a work off as one’s own remains both a license violation and academic misconduct.

How far are we down this road? Not far enough; but if you’re a linguist, take a look at Language Science Press, if you haven’t already.

In conclusion, then, for-profit publishers should be afraid. If they can’t do their job, then academics will. Libraries will. Mission-oriented publishers will. Pirates will.

It’s sometimes said that “information wants to be free”. This is false: information doesn’t have agency. But if we want information to be free, and take steps in that direction… well, it’s a start.


Note: this post is a written-up version of a talk I gave on 11th Nov 2015 at the John Rylands Library, as part of a debate on “Opening the Book: the Future of the Academic Monograph”. Thanks to the audience, organizers and other panel members for their feedback.