#AcBookWeek 2015: Publisher Workshop at Stationers Hall

To celebrate the recent announcement of the next Academic Book Week (23-28 January 2017), we’re revisiting some highlights from last year’s #AcBookWeek! The first post considers the gathering of academic publishers at the historic Stationers Hall to discuss some of the challenges and opportunities facing the industry. There were 25 individuals representing seven academic publishers, all of which publish books in print and/or digital format. The participants were asked to work in groups and address some of the core questions first posed at the launch of The Academic Book of the Future project. Project co-investigator Nick Canty (UCL) reflects on the event.

The questions and issues we put to the assembled publishers spanned three main areas, as follows:

 

1. Changes in the nature of research, the research environment and the research process

What do academic books do?

We started by asking publishers what purposes they think academic books fulfil. Answers were varied, with some participants asking how we define which books relate to research and which are for reference. This point was picked up by another participant, who argued that publishers’ categories (reference or textbook) don’t matter – what matters is the prestige of where you find your content and being provided with trusted, credible content. There is a glut of information today, with undergraduate students and researchers drawing on a broader pool of resources than in the past (including Wikipedia), a shift partly enabled by digital technologies, although it was questioned whether the structures were in place for interdisciplinary research.

Additional purposes for the academic book were offered, for instance: for academics to achieve tenure, or to publish their PhD thesis; another participant observed that academic books are now required as a tool for metrics to help define impact, and also serve libraries as a way to gauge interest through bibliographic data. A more apt starting point might be to ask what the book is doing: proving a hypothesis, making an argument, or communicating an idea – but this doesn’t answer whether textbooks, reference works, and professional books should be considered academic books, too. Our seemingly simple question clearly has several possible complex and multi-faceted answers.

 

What changes have taken place in the research environment?

Moving on, we looked at how research is changing in academia. This shook out some fascinating points. As well as comments about the REF (Research Excellence Framework), several participants mentioned the pressure to produce research outputs and the ‘need for speed’, which was pushing researchers towards journals and away from books (presumably because of books’ longer production times). The pressure to publish quickly has driven big changes in the production process, and there have been advances on this side of publishing. However, the sales cycle with library wholesalers hasn’t moved as quickly, and advance notice to market is still at least six months. As someone else said, the rate of change is quite slow.

Alternative modes of research were also picked up, including real-time feedback and peer review, crowdfunding, and the Knowledge Unlatched publishing model, along with a question about whether Amazon’s classifications are becoming more important – presumably for discoverability.

 

New forms of books

We wanted to find out how books might change because of new technologies and Open Access (OA). There was agreement that OA is having the greatest influence on journals, with books following more slowly behind. Several participants remarked that OA and new media offer more opportunity for collaboration, with peer-adopted books carrying extra resources such as data and video. Shorter book formats, such as Palgrave’s Pivot series, are also a response to a changing environment. New media might herald new virtual collections, such as chapters and articles led by XML and metrics, although other participants sounded a note of caution: books are still books and they are not changing – they are still driven by market demand, and the activity of publishers still follows the traditional model of print with some digital offerings.

There were observations that, with booksellers increasingly resistant to stocking niche books and the academic book more challenged in terms of sales, it is now hard to find academic books in bookstores; they are mostly found in libraries, although authors still want print copies. This reflects broader concerns about the visibility of books in bricks-and-mortar stores as the online space expands.

 

2. How are the processes through which books are commissioned, approved or accepted, edited, produced, published, marketed, distributed, made accessible, and preserved changing, and what are the implications for the following?

Publishers

Needless to say, this elicited lots of responses, with publishers seen as moving from B2B operations to B2C, and more functions being outsourced in an attempt to lower costs. While some participants didn’t think marketing had changed much over the last decade, others saw changes to staff recruitment, as new skillsets are needed and consumer marketing becomes more important. Clearly there are differences between publishers here. There was a comment that nowadays publishers have to do more direct marketing and rely less on channel marketing.

Authors were seen as becoming more ‘savvy’, more demanding, and more knowledgeable on all aspects of publishing – particularly marketing, where, for example, they understand the importance of Amazon profiles. However, there was very little change to the commissioning process, which was still based on a conversation, a campus visit, or a meeting at a conference. Academics are therefore still ‘student intermediaries’. There is a need to make books available everywhere, but it is difficult to push every channel, and there is therefore more pressure on authors to help with marketing via their profile in academia. The publishing industry increasingly values media skills, and as a consequence there is a convergence of academic and trade publishing at this point.

The publisher brand and the website are important but editors still need to actively reach out in the commissioning process. Editors need usage data to inform commissioning decisions but they aren’t getting this at the moment.

In terms of the publishing process, as well as new distribution formats (XML, video), reference works can be published in stages with no single publication date, raising the question: what is ‘enough’ content to launch with? Finally, there was general agreement that while there are experiments with peer review, it is ‘here to stay’ and ‘still central’ to academic publishing.

 

Aggregators

Pressures and tensions were noted here. These revolve around how sustainable the aggregator business model is, with publishers improving discoverability themselves and Google offering free search. There is also tension in that libraries still want aggregators and value their services, and small publishers need aggregators (‘in thrall to them’), but publishers are selling complete books – not bits of content. The situation is made more complicated by centralisation and mergers in the sector.

 

Booksellers

In addition to the points about booksellers above, participants noted the disappearance of campus bookstores and the emphasis on stocking high sales books rather than niche ones, therefore questioning the value of bookstores to publishers today.

 

Libraries 

The issue of preservation came through here, in addition to comments about squeezed library budgets (although new models such as just-in-time purchasing and PDA were mentioned as solutions). There was concern about what happens when publishers merge and features of online access are no longer available with the new company (the example cited related to viewing PDFs after a merger). A further concern: although libraries keep digital archives, what happens when formats change? This has implications for future access and preservation.

 

How might the relationships between the different kinds of agents in the publishing supply chain develop in the future?

The last question looked at the supply chain and how publishers and other intermediaries might work together in the future. Once again, some tensions were noted. Libraries are concerned about the power of aggregators, but they choose to work with them rather than with individual publishers. This makes it hard to resolve problems, as it is unclear who is ultimately responsible: the aggregator or the publisher? One group suggested we need to ask what an intermediary is in the supply chain; can we consider the library as an aggregator today? Another group defined intermediaries as ‘anyone/thing that intervenes between point of production and point of use/reading.’

Publishers increasingly want direct access to end-user data from aggregators to drive usage to their online collections to improve renewals, but this desire to drive users to their sites puts them in conflict with aggregators, who provide little information to publishers. Open Access is a possible way to sidestep aggregators, but it then needs something like Amazon or Google for users to discover the books.

 

Conclusion

The workshop was an opportunity for the publishing industry to engage with some of the key issues the project has sought to address. While there were bound to be contradictions among participants, what came through were questions about the future role of aggregators in the supply chain, changes in the research environment and, perhaps as a consequence, changes in how authors work with publishers, and changes in the way publishers operate. There was agreement, however, that the book, whether print or digital, was here to stay.

Creative writing theses: guidelines on discoverability and open access

On 5th May 2016, the Project attended a meeting at the British Library to discuss the issue of discoverability of creative writing theses. The meeting was organised by Dr Susan L. Greenberg (Senior Lecturer in the University of Roehampton’s Department of English and Creative Writing). She acted on behalf of the National Association of Writers in Education (NAWE) whose remit includes supporting the work of creative writing academics in the UK. The meeting brought together leading academics in the field of creative writing, as well as library staff from the British Library and university libraries. Discussions expanded well beyond the initial topic of discoverability, touching upon a wide range of issues. This blog post is a summary of the discussions that took place, and includes some important advice for those submitting creative writing PhD theses.

Discoverability

The initial topic of conversation was discoverability. A core concern is that it is difficult for researchers to find creative writing theses, particularly without an author name, and it is also difficult to advise students on how to find them. Dr Greenberg outlined this in an earlier blog post, but the conversation at the British Library meeting extended the scope of debate. The following issues may hamper the discoverability of creative writing theses:

  • The title of the thesis is often metaphorical, and may not be explicit.
  • Often there are no abstracts.
  • Accompanying metadata is often unclear, or even missing altogether.
  • The thesis can be in two parts – creative work and critical analysis – but this is not always the case. How are the different parts catalogued and searched for?
  • At an institutional level, the forms that must be filled in by PhD students are designed for other disciplines, and may not contain the fields required to make creative writing theses discoverable.
  • Creative theses that incorporate a media element cannot currently be deposited in EThOS.
  • International barriers exist: for example, a UK researcher faces difficulties finding and accessing theses from Australia.
  • There is a lack of consensus across institutions about terminology: creative writing PhDs are catalogued and described on EThOS in different ways, for instance:
    • PhD in Creative Writing
    • PhD in English Literature
    • PhD in English with Creative Writing
    • PhD in Critical and Creative Writing

EThOS does not have an option to catalogue a thesis under ‘creative writing’, so it must be included in the abstract/keywords if it is to appear.

In the meeting it became clear that there are numerous reasons for the difficulties outlined above, including a lack of clarity about who is responsible for training students in the use of electronic repositories. Should this be the role of specialist subject supervisors, graduate schools, or research training departments? As increasing technical demands are made on researchers, it is an issue that must be resolved.

Although the day was ostensibly about discoverability, it soon emerged that there were several other interconnected issues around creative writing theses in current and emerging academic and publishing contexts, which are described in the rest of this post.

Open Access mandates and institutional repositories

The major issues seemed to hinge on Open Access. UK universities now mandate their researchers to deposit their work in Open Access repositories, which has specific implications for creative writing researchers, as outlined below.

Intellectual Property

When EThOS was established, research by Charles Oppenheim on Intellectual Property Rights (IPR) concluded that publishing theses in repositories posed a very low risk to the rights of authors. But this is not the case for creative writing theses. While academic publishers are by and large prepared to publish a thesis available on a repository as long as it has been substantially revised, trade publishers may refuse publication of a creative writing thesis in a similar position. Greenberg summarised the issue: ‘Having a pre-existing version anywhere, on any conditions, seems to be anathema.’

Version control

Creative writing theses that are later developed by publishers may be amended, ranging from the correction of minor typos to the incorporation of major plot changes. As one writer-academic stated at the meeting: ‘I’d much rather people accessed the revised, published version than the legally available version in a repository.’

Piracy

There is a major issue with piracy; one academic reported the example of a novel that became available as a free Torrent download within weeks of publication.

Embargoes

Researchers have the option to place their thesis under embargo for a fixed period – usually three to five years. This action can help with some of the issues discussed above, but prompts questions of its own. The first concerns knowledge: do all PhD students know that this option is open to them? If not, whose responsibility is it to make them aware? The second is the fixed-term nature of the embargo: can “never” be an option? And whose responsibility is it to renew embargoes once they expire – the library or the author? Libraries will probably not have current contact details for authors after five years, and authors may forget.

From the non-author point of view, embargoes can have an adverse effect on the dissemination of research, impacting for example on individual scholars who would like to access the thesis to inform their own work. How is this overcome?

Policies on embargoes currently operate on a university-by-university level: perhaps national guidance on policy for creative writing theses is required.

Ethics

Creative writing theses that involve nonfiction accounts of living subjects raise specific issues. One participant described the case of a PhD supervisee writing a memoir which included anecdotes gathered from family funerals and other events. In the social sciences, the default assumption is that all identities are anonymised before thesis submission, but in the case of creative nonfiction (as with journalism) full anonymity is not always possible or desirable. This can create difficulties with ethics committees, because the projects do not fit into standard models built with other disciplines in mind. A different form and different process is required, but how will this be brought about?

Clearly, there are many complex issues and questions to be addressed:

  • Who should be the gatekeepers for creative writing theses: libraries and institutional repositories, or the authors?
  • How should this gatekeeping be managed so that creative writing theses are available for research, but not so publicly available that they hinder trade publication?
  • How are creative writing PhD students being trained in writing abstracts and metadata; using repositories; copyright? Who should deliver and teach this training?

All of the issues boil down to the fact that creative writing is a very distinct discipline with unique requirements. As Greenberg stated: ‘Creative writing as a relatively new discipline has had to constantly negotiate its way through the academic system in order to be recognised.’ These issues are highlighted anew by the mandate to move towards Open Access. Creative writing academics present at the meeting agreed that now is the time to address them.

Practical Guidance for Creative Writing PhD Theses

One immediate practical outcome of the meeting is the launch of a new one-page document, backed by NAWE and the British Library, which gives staff and students advice on how to submit the electronic copy of their PhD thesis. The document has a Creative Commons license, allowing universities and other organisations to share it freely. You can download the document using the link below and share it freely.

NAWE-BL-General-Guidelines (pdf)

The Project would like to extend its thanks to all attendees of the meeting, in particular Dr Susan Greenberg for organising it, and Dr Ros Barber for creating the initial draft of the guidelines document.

Musical Scholarship and the Future of Academic Publishing

This guest post was written by Richard Lewis (Goldsmiths) of the AHRC Transforming Musicology project. It outlines a workshop on ‘Musical Scholarship and the Future of Academic Publishing’, sponsored by The Academic Book of the Future project, and held at Goldsmiths, University of London on Monday 11th April 2016. This post first appeared on the Transforming Musicology project website, and is reproduced here with kind permission from Richard.

A couple of months ago Marilyn Deegan, who is emeritus professor at King’s College London, approached Tim Crawford asking him to put together a workshop as part of their Academic Book of the Future project (2014-2016, PI: Samantha Rayner). The project is a partnership between King’s and the UCL Centre for Publishing, and is funded by the British Library and the AHRC. The project has included a lot of work with practising scholars but Marilyn was keen to engage the musical community so we accepted her invitation.

The workshop was held at Goldsmiths on Monday 11 April and attracted just under 40 delegates. The programme comprised six invited presentations and a roundtable discussion with a mixture of scholars, musicians, and library professionals. This post is a report on the proceedings of the day.

The day began with an introduction to The Academic Book of the Future project from Rebecca Lyons (UCL) who is the research associate on the project. Bex described the background of the project and some of its activities so far, including the inaugural Academic Book Week in November 2015. She described how much of their early work has been involved with forming a community coalition by consulting with publishers, academics, and other stakeholders in the academic book, and attempting to address fundamental questions around the nature of academic publishing. Bex outlined some of their future plans, which include an online modular publication, called a BOOC, which will gather together content from a variety of sources including audio, essays, blog posts, and Storifies.

Mark Everist’s (Southampton) presentation was pitched as a warning against the apparent benefits of Open Access publishing. Mark spoke from three different perspectives: as president of the RMA, as head of a research-intensive music department, and as a publishing academic. He worked through some of the hypothetical implications for the RMA of going fully Open Access. The RMA runs three publications – the Journal of the RMA, the RMA Research Chronicle, and a monograph series – and publishes with Routledge. Mark described some of the benefits of digital documents over paper, including convenience of access and searchability. But he argued that online publication of scholarship does not involve any less work than paper publication: authoring and review are carried out by academics as part of their contractual responsibilities, but copy editing (including fact checking and typesetting), maintenance and sustainability, and promotion and marketing are carried out by professional publishers, and these cost money. Mark argued that if scholarship were to go online and be Open Access, none of these processes could be avoided and so the costs would still need to be covered. Mark summarised by arguing that the biggest question around going Open Access is: who takes the risk? Currently it’s a commercial publisher, but if the RMA were to move completely to Open Access it would have to absorb that risk itself.

Following his presentation, Mark answered questions on alternative business models for publishing, including that of the Open Library of Humanities, which is funded by the Mellon Foundation and by library subscriptions. Another question concerned the practice in science publishing of requiring authors to produce so-called camera-ready copy using a template. Mark responded that science articles are normally short, so proof-reading and fact-checking are much more tractable for authors or reviewers, whereas humanities articles tend to be much longer, so these copy editing tasks are better handled by specialist professionals. Mark also noted his belief that, because of the relative ease of science publication, the drive for Open Access is coming from the sciences.

Tim Crawford and I gave a presentation of our work on the plans for the final publication of the Transforming Musicology project. We described our original plan to publish a book which collects together the work of the project and which has a significant online component, but said that now we are intending instead to produce a fully-online publication with a possible future print version. We described how our work so far on the project has successfully led to the creation of a number of Linked Data resources which will feed directly into the publication. We reported that we now have a good idea of the expected content of the publication. Now we are in the position where we need to make plans about the required information architecture for the publication. It needs an authoring and editing strategy which will result in high quality hypertext. We are looking for a publication platform that is based on sound Web architecture principles. We hope to be able to include features such as embedded – but also interactive – music notation examples; Tim gave a demonstration of some of the work we have done on providing such features for lute tablature. We described our intention to curate dynamic reading paths through the publication’s content. While we are expecting authors to produce essentially prose chapters, we intend to edit them into re-combinable chunks, each bearing semantics describing how it may be related to other content chunks from the publication. As editors, we will then define a number of reading paths that address the needs and interests of different audiences, such as:

  • A research findings report on Transforming Musicology
  • A handbook on digital musicology methods
  • Reading paths on particular digital methods (MIR, Linked Data)
  • A reviews and comments reading path
  • Authorial/editorial reading path (i.e. conventional book)

We described our intention to make use of the affordances of the Web to help widen access to our research, in particular by allowing commenting, custom citation, and reader contributions (especially contributing to our data sets such as leitmotive identification or optical music recognition correction). Similarly, we outlined our intentions to use the publication as an access point for researchers who may want to make use of our data sets in their own research.
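The chunk-and-reading-path idea described above can be sketched in miniature. This is an invented illustration only – the data model, tags, and function names here are assumptions for the sketch, not the Transforming Musicology project’s actual architecture: content chunks carry descriptive tags, and a reading path is simply an editorially chosen ordering over the chunks that match an audience’s interest.

```python
# Hypothetical sketch: tagged content chunks, plus editorially defined
# reading paths that select and order them for different audiences.
CHUNKS = [
    {"id": "intro",      "tags": {"findings", "handbook"}},
    {"id": "mir-method", "tags": {"handbook", "mir"}},
    {"id": "ld-method",  "tags": {"handbook", "linked-data"}},
    {"id": "results",    "tags": {"findings"}},
]

def reading_path(required_tag, order):
    """Return the ids of chunks carrying required_tag, in the editor's order."""
    selected = {chunk["id"] for chunk in CHUNKS if required_tag in chunk["tags"]}
    return [cid for cid in order if cid in selected]

# A 'research findings' path and a 'handbook' path over the same chunk pool:
print(reading_path("findings", ["intro", "mir-method", "results"]))
print(reading_path("handbook", ["intro", "mir-method", "ld-method"]))
```

The same chunks thus serve multiple audiences without duplication; only the selection and ordering differ per path.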

John Baily (Goldsmiths) began his presentation by mentioning his recently published book, War, Exile, and the Music of Afghanistan (Ashgate), which includes a DVD of films which John described as integral to the text, going on to argue for the complementary properties of text, sound, and video. He gave an account of his extensive use of film-making technology over the course of his career as an ethnographer and observational film-maker, arguing that technological developments have had a significant impact on the practice of ethnography. Following John’s presentation there was some discussion on the relation of the DVD to the text of his book and whether a digital publication may have provided richer opportunities for integrating the two. John partly answered this by demonstrating his online Afghan rubab tutor which mixes text, music notation, and three-camera videos.

Laurent Pugin (RISM) spoke about the initial meeting of a new NEH-funded project, Music Scholarship Online (MuSO). The project may become part of ARC (which backs other online projects including NINES and 18thconnect) and make use of the Collex (COLLections and EXhibits) Semantic Web archive management system. Laurent described several other tools published by ARC including TypeWright for correcting optical recognition output and BigDIVA for making visualisations from large data sets. Laurent argued that it’s not yet clear how MuSO may fit into the Collex system as that system’s affordances for text and metadata may not serve musical content so well. He gave the example of Collex’s full-text search system arguing that it wouldn’t be applicable for searching in music notation collections. Similarly, he argued that the FRBR concepts used in Collex are not necessarily suitable for music sources. Laurent went on to describe RISM’s intention to work with the other so-called “R projects”: RILM, RIdIM, and RIPM to build bibliographic research tools for music scholars. He demonstrated how the traditional RISM and RILM referencing schemes may be updated for online usage. For RISM, this is now largely completed in the shape of their Linked Data interface. Laurent reported that RISM and RILM are in active negotiation over improving their inter-resource hyperlinking.

Yun Fan/樊昀 (RILM) reported on some early-stage work at RILM in producing a Semantic Web ontology for musical concepts to help them develop their database of music literature. As motivation for this work, Yun gave the example of answering a natural language query about music – who composed the music for Star Wars? – and showed how the search engine Google is already able to deal with this. She argued that Google is effectively using something like an ontology to help make this query possible. She described some of the key properties of Semantic Web ontologies and the benefits they can bring. She mentioned Yves Raimond’s Music Ontology, arguing that it was too focused on recorded music production to be suitable for RILM’s needs. She described how RILM’s increasing internationalisation requires updates to their indexing and cross-search to allow them to relate concepts in different languages; they hope that developing an ontology will assist in this aim. Yun gave some examples of RILM’s existing hierarchical subject headings, demonstrating how they are heavily biased towards European art music. She spoke about some of the difficulties in formalising musical concepts, giving the example of an encyclopedia definition of gospel music which is richly detailed, and argued that it is difficult to pick out the precise concepts embedded in such prose knowledge. Following her presentation, there was discussion about the importance of re-use in ontology design: where suitable concepts already exist in other ontologies, it is best practice to point to them rather than replace them. There was also discussion about how RILM, which is a closed access resource, will actually make its ontology public.
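To make the ontology idea concrete, here is a toy sketch (not RILM’s actual ontology, tooling, or vocabulary – all names are invented for illustration). Semantic Web knowledge is encoded as subject–predicate–object triples, and a query like ‘who composed the music for Star Wars?’ becomes a chain of pattern matches over those triples:

```python
# A toy triple store: each fact is a (subject, predicate, object) tuple,
# loosely mimicking how a Semantic Web ontology encodes knowledge.
TRIPLES = [
    ("StarWars", "type", "Film"),
    ("StarWars", "hasScore", "StarWarsScore"),
    ("StarWarsScore", "composedBy", "JohnWilliams"),
    ("JohnWilliams", "type", "Composer"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        (s, p, o) for (s, p, o) in TRIPLES
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

def composer_of_film_score(film):
    """Chain two patterns: film -> its score -> the score's composer."""
    for _, _, score in query(subject=film, predicate="hasScore"):
        for _, _, composer in query(subject=score, predicate="composedBy"):
            return composer
    return None

print(composer_of_film_score("StarWars"))  # JohnWilliams
```

In a real system the triples would live in an RDF store and the chained pattern match would be expressed as a SPARQL query, but the principle – structured relations enabling questions that keyword search cannot answer – is the same.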

Zoltán Kőmíves‘s (Tido Music) presentation was centred around Tido Music’s vision for the future of music publishing. He argued that print music publishing is not going to provide value in the long term and outlined their goals to create enriched and connected musical objects, musical objects as “living creatures”. He showed some examples of the iOS software they are developing for displaying musical scores in a dynamic and responsive way and for integrating extra-musical content into scores. Zoltán argued that academic and what he called “trade” publication needs are quite different (although individuals can be and often are members of both audiences). He gave the example of “preserving uncertainty”, describing how academic audiences often want to know about the uncertainties in musical sources, whereas trade audiences (especially performers) instead want to be presented with a single editorial selection in such cases. As illustrations of this he showed the Online Chopin Variorum Edition and the Lost Voices project. Following his presentation, Zoltán answered questions on the future publication strategy of Tido explaining that their next publications will be piano works for beginners. Discussion also covered the current restriction of Tido’s software to iOS and how this is not good for long-term sustainability.

Following the presentations there was a round table discussion chaired by Simon McVeigh (Goldsmiths). The speakers were joined by: Paul Cassidy, Sarah Westwood, and James Bulley (all PhD students in Music), Jonathan Clinch (Research Associate at Cambridge), and Richard Chesser (head of music at the British Library).

Following introductions, Richard Chesser began the discussion, arguing that everything that had been presented during the day was vital to the work of the British Library. He mentioned that digital publications already come under the rules of legal deposit and questioned how the restrictions of legal deposit will interact with the rights afforded to users of resources that are also open access. He also argued that legal deposit may help to address some of the sustainability issues of digital resources.

Mark Everist next raised a topic that had been introduced earlier – prestige and open access publication, suggesting it’s going to be somewhat of an obstacle or milestone. He argued that most academics know the value of a particular journal or publisher and will want to profit from that as much as possible and that therefore open access publications need to retain the brand of the publisher. Tim Crawford mentioned that prestige and quality are not necessarily correlated with impact, pointing out that it’s possible to perform well under various publication metrics – especially on the Web – without necessarily producing high quality work. Mark argued that impact factors are currently more significant in the sciences than they are in the humanities but that a move to online publication may alter this.

Laurent Pugin described the patchy uptake of digital techniques in publishing and libraries. He noted how libraries are now often digitising books that were actually digitally printed and argued that it would be better for libraries to be allowed to archive the original digital versions. Richard Chesser mentioned that under legal deposit legislation libraries are entitled to the best version available.

A question from the audience asked how people make use of Tido’s scores – in particular, whether they know of performers playing from tablet computers, and whether their software is useful for ensemble performance. Zoltán Kőmíves argued that print music publications may still have their place in performance situations, but also mentioned possible future display technologies that may be more suitable for performance. Tim Crawford and Jonathan Clinch discussed potential problems such as computers crashing or malfunctioning during a performance, or systems where the conductor gets to dictate the page turns. Zoltán suggested that a potentially useful feature would be to allow annotations to be shared between performers.

Another question from the audience addressed the topic of reading habits and what the reading of the future may be like. One member of the audience responded that Amazon have done some research based on the data they can retrieve from Kindle devices about how people read their eBooks, including where they start and stop. One of Amazon’s findings is that non-academics read books more closely.

From the day’s discussions it seems that there is a strong drive for increasing open access, but there are numerous serious issues that need to be resolved before it can become more widespread. It also seems that digital publication (whether open or closed) is not likely to replace print entirely in the near future, especially for music publication, but innovations will continue to push the boundaries.

The academic book in Chile: present and future contexts

Today’s guest blog post considers the academic book from a Chilean perspective. The author, Manuel Loyola, is an academic and scientific editor at the Universidad de Santiago de Chile and director of the open-access publisher Ariadna Editions (http://ariadnaediciones.cl/), as well as editor of the peer-reviewed journal Izquierdas: http://www.izquierdas.cl/.

Manuel Loyola

According to ISBN records, the academic book in Chile has had little relevance during the last decade in terms of titles published every year. In fact, the books published by all 57 of the country’s universities represent just 11% of the roughly 5,500 books published here each year. In addition to university publications, there are also many small and medium publishing houses focused on academic content, which may increase the figure for academic books from 11% to around 20%.

Behind these numbers, the Chilean academic book is subject to varied and usually problematic realities. For example, production is far from homogeneous across the country: the capital, Santiago, is responsible for more than 60% of the output. Additionally, within this geographical area just a few higher education institutions concentrate most of the production, especially the University of Chile, the Pontifical Catholic University, and the University of Santiago – all in Santiago.

The distribution and use of academic books also presents some interesting considerations. They have a low circulation – usually they do not have their own distribution channels because there is not a proper business model defined according to formative and educational goals. Often academic books depend on the mechanisms and strategies of private firms that are usually not interested in these kinds of books. These issues hinder the already precarious life of academic publishing, combining with a lack of collaboration and common strategies.

Why is the Chilean academic book (published by universities in particular) in this situation? I believe that the answer lies in the lack of effective and coherent publishing policies. The university authorities, as well as those of the country’s other scientific bodies, know little or nothing about publishing activity. Perhaps this would not be a problem if these authorities promoted the development and growth of this area, but unfortunately this is not the case. Academic publishing work is in the hands of people with good intentions, but who may be inexperienced. This causes frustration.

However, the goal of this post is not to offer a dramatic and pessimistic forecast. Despite what I have stated above, our field offers many possibilities for improving the performance of the local academic book. In the short run, we must take advantage of the state’s role in providing human and financial resources for academic production. Related to that is the increasing support for open access publishing, providing easier access to research. Additionally, there have been advances in scientific publishing and richer discussions for those working in this field, establishing relationships with foreign academic and publishing organisations and with the scientific community. Finally, the continued development of academic journals offers hope for a similarly favourable change with books, showing the potential for improvement.


Manuel Loyola, PhD

Scientific editor

Universidad de Santiago de Chile

 

#AcBookWeek: The Academic Book of the Future: Evolution or Revolution?

 

This post reflects on one of the events that took place during Academic Book Week in Cambridge. A colloquium brought together booksellers, librarians, and academics to air their views on where the future of the academic book lies.

During the week of the 9th November the Centre for Material Texts (CMT) convened a one-day colloquium entitled ‘The Academic Book of the Future: Evolution or Revolution?’ The colloquium was part of Cambridge’s contribution to a host of events being held across the UK in celebration of the first ever Academic Book Week, which is itself an offshoot of the AHRC-funded ‘Academic Book of the Future’ project. The aim of that project is both to raise awareness of academic publishing and to explore how it might change in response to new digital technologies and changing academic cultures. We were delighted to have Samantha Rayner, the PI on the project, introduce the event.

 

The first session kicked off with a talk from Rupert Gatti, Fellow in Economics at Trinity and one of the founders of Open Book Publishers, explaining ‘Why the Future is Open Access’. Gatti contrasted OA publishing with ‘legacy’ publishing and emphasized the different orders of magnitude of the audience for these models. Academic books published through the usual channels were, he contended, failing to reach 99% of their potential audience. They were also failing to take account of the possibilities opened up by digital media for embedding research materials and for turning the book  into an ongoing project rather than a finished article. The second speaker in this session, Alison Wood, a Mellon/Newton postdoctoral fellow at the Centre for Research in the Arts, Social Sciences and Humanities in Cambridge, reflected on the relationship between academic publishing and the changing institutional structures of the university. She urged us to look for historical precedents to help us cope with current upheavals, and called in the historian Anthony Grafton to testify to the importance of intellectual communities and institutions to the seemingly solitary labour of the academic monograph. In Wood’s analysis, we need to draw upon our knowledge of the changing shape of the university as a collective (far more postdocs, far more adjunct teachers, far more globalization) when thinking about how academic publishing might develop. We can expect scholarly books of the future to take some unusual forms in response to shifting material circumstances.

 

The day was punctuated by a series of ‘views’ from different Cambridge institutions. The first was offered by David Robinson, the Managing Director of Heffers, which has been selling books in Cambridge since 1876. Robinson focused on the extraordinary difference between his earlier job, in a university campus bookshop, and his current role. In the former post, in the heyday of the course textbook, before the demise of the net book agreement and the rise of the internet, selling books had felt a little like ‘playing shops’. Now that the textbook era is over, bookshops are less tightly bound into the warp and weft of universities, and academic books are becoming less and less visible on the shelves even of a bookshop like Heffers. Robinson pointed to the ‘crossover’ book, the academic book that achieves a large readership, as a crucial category in the current bookselling landscape. He cited Thomas Piketty’s Capital as a recent example of the genre.

 

Our second panel was devoted to thinking about the ‘Academic Book of the Near-Future’, and our speakers offered a series of reflections on the current state of play. The first speaker, Samantha Rayner (Senior Lecturer in the Department of Information Studies at UCL and ‘Academic Book of the Future’ PI) described the progress of the project to date. The first phase had involved starting conversations with numerous stakeholders at every point in the production process, to understand the nature of the systems in which the academic book is enmeshed. Rayner called attention to the volatility of the situation in which the project is unfolding—every new development in government higher education policy forces a rethink of possible futures. She also stressed the need for early-career scholars to receive training in the variety of publishing avenues that are open to them. Richard Fisher, former Managing Director of Academic Publishing at CUP, took up the baton with a talk about the ‘invisibles’ of traditional academic publishing—all the work that goes into making the reputation of an academic publisher that never gets seen by authors and readers. Those invisibles had in the past created certain kinds of stability—‘lines’ that libraries would need to subscribe to, periodicals whose names would be a byword for quality, reliable metadata for hard-pressed cataloguers. And the nature of these existing arrangements is having a powerful effect on the ways in which digital technology is (or is not) being adopted by particular publishing sectors. Peter Mandler, Professor of Modern Cultural History at Cambridge and President of the Royal Historical Society, began by singing the praises of the academic monograph; he saw considerable opportunities for evolutionary rather than revolutionary change in this format thanks to the move to digital. The threat to the monograph came, in his view, mostly from government-induced productivism. 
The scramble to publish for the REF as it is currently configured leads to a lower-quality product, and threatens to marginalize the book altogether. Danny Kingsley, Head of Scholarly Communication at Cambridge, discussed the failure of the academic community to embrace Open Access, and its unpreparedness for the imposition of OA by governments. She outlined Australian Open Access models that had given academic work a far greater impact, putting an end to the world in which intellectual prestige stood in inverse proportion to numbers of readers.

 

In the questions following this panel, some anxieties were aired about the extent to which the digital transition might encourage academic publishers to further devolve labour and costs to their authors, and to weaken processes of peer review. How can we ensure that any innovations bring us the best of academic life, rather than taking us on a race to the bottom? There was also discussion about the difficulties of tailoring Open Access to humanities disciplines that relied on images, given the current costs of digital licences; it was suggested that the use of lower-resolution (72 dpi) images might offer a way round the problem, but there was some vociferous dissent from this view.

 

After lunch, the University Librarian Anne Jarvis offered us ‘The View from the UL’. The remit of the UL, to safeguard the book’s past for future generations and to make it available to researchers, remains unchanged. But a great deal is changing. Readers no longer perceive the boundaries between different kinds of content (books, articles, websites), and the library is less concerned with drawing in readers and more concerned with pushing out content. The curation and preservation of digital materials, including materials that fall under the rules for legal deposit, has created a set of new challenges. Meanwhile the UL has been increasingly concerned about working with academics in order to understand how they are using old and new technologies in their day-to-day lives, and to ensure that it provides a service tailored to real rather than imagined needs.

 

The third panel session of the day brought together four academics from different humanities disciplines to discuss the publishing landscape as they perceive it. Abigail Brundin, from the Department of Italian, insisted that the future is collaborative; collaboration offers an immediate way out of the often closed-off worlds of our specialisms, fosters interdisciplinary exchanges and allows access to serious funding opportunities. She took issue with any idea that the initiative in pioneering new forms of academic writing should come from early-career academics; it is those who are safely tenured who have a responsibility to blaze a trail. Matthew Champion, a Research Fellow in History, drew attention to the care that has traditionally gone into the production of academic books—care over the quality of the finished product and over its physical appearance, down to details such as the font it is printed in. He wondered whether the move to digital and to a higher speed of publication would entail a kind of flattening of perspectives and an increased sense of alienation on all sides. Should we care how many people see our work? Champion thought not: what we want is not 50,000 careless clicks but the sustained attention of deeply-engaged readers. Our third speaker, Liana Chua reported on the situation in Anthropology, where conservative publishing imperatives are being challenged by digital communications. Anthropologists usually write about living subjects, and increasingly those subjects are able to answer back. This means that the ‘finished-product’ model of the book is starting to die off, with more fluid forms taking its place. Such forms (including film-making) are also better-suited to capturing the experience of fieldwork, which the book does a great deal to efface. Finally Orietta da Rold, from the Faculty of English, questioned the dominance of the book in academia. 
Digital projects that she had been involved in had been obliged, absurdly, to dress themselves up as books, with introductions and prefaces and conclusions. And collections of articles that might better be published as individual interventions were obliged to repackage themselves as books. The oppressive desire for the ‘big thing’ obscures the important work that is being done in a plethora of forms.

 

In discussion it was suggested that the book form was a valuable identifier, allowing unusual objects like CD-ROMs or databases to be recognized and catalogued and found (the book, in this view, provides the metadata or the paratextual information that gives an artefact a place in the world). There was perhaps a division between those who saw the book as giving ideas a compelling physical presence and those who were worried about the versions of authority at stake in the monograph. The monograph model perhaps discourages people from talking back; this will inevitably come under pressure in a more ‘oral’ digital economy.

 

Our final ‘view’ of the day was ‘The View from Plurabelle Books’, offered by Michael Cahn but read in his absence by Gemma Savage. Plurabelle is a second-hand academic bookseller based in Cambridge; it was founded in 1996. Cahn’s talk focused on a different kind of ‘future’ of the academic book—the future in which the book ages and its owner dies. The books that may have marked out a mental universe need to be treated with appropriate respect and offered the chance of a new lease of life. Sometimes they carry with them a rich sense of their past histories.

 

A concluding discussion drew out several themes from the day:

 

(1) A particular concern had been where the impetus for change would and should come from—from individual academics, from funding bodies, or from government. The conservatism and two-sizes-fit-almost-all nature of the REF act as a brake on innovation and experiment, although the rising significance of ‘impact’ might allow these to re-enter by the back door. The fact that North America has remained impervious to many of the pressures that are affecting British academics was noted with interest.

 

(2) The pros and cons of peer review were a subject of discussion—was it the key to scholarly integrity or a highly unreliable form of gatekeeping that would naturally wither in an online environment?

 

(3) Questions of value were raised—what would determine academic value in an Open Access world? The day’s discussions had veered between notions of value/prestige that were based on numbers of readers and those that were not. Where is the appropriate balance?

 

(4) A broad historical and technological question: are we entering a phase of perpetual change, or do we expect that the digital domain will eventually slow down, developing protocols that seem as secure as those that we used to have for print? (And would that be a good or a bad thing?) Just as paper had to be engineered over centuries in order to become a reliable communications medium (or the basis for numerous media), so too the digital domain may take a long time to find any kind of settled form. It was also pointed out that the academic monograph as we know it today was a comparatively short-lived, post-World War II phenomenon.

 

(5) As befits a conference held under the aegis of the Centre for Material Texts, the physical form of the book was a matter of concern. Can lengthy digital books be made a pleasure to read? Can the book online ever substitute for the ‘theatres of memory’ that we have built in print? Is the very restrictiveness of print a source of strength?

 

(6) In the meantime, the one thing that all of the participants could agree on was that we will need to learn to live with (sometimes extreme) diversity.

 

With many thanks to our sponsors, Cambridge University Press, the Academic Book of the Future Project, and the Centre for Material Texts. The lead organizer of the day was Jason Scott-Warren (jes1003@cam.ac.uk); he was very grateful for the copious assistance of Sam Rayner, Rebecca Lyons, and Richard Fisher; for the help of the staff at the Pitt Building, where the colloquium took place; and for the contributions of all of our speakers.

 

#AcBookWeek: The Manchester Great Debate

On Wednesday 11th November, the John Rylands Library, Manchester, played host to the Manchester Great Debate, a panel discussion dedicated to addressing the future of the academic monograph. The event was one of over sixty organised during Academic Book Week to celebrate the diversity, innovation, and influence of the academic monograph. While opinions remained varied, with panel representatives from both sides of the fence, the discussion always seemed to return to a few key thematic strands. How are people using books? How are people encountering books? And what future lies ahead for the academic book? Melek Karatas, Lydia Leech, and Paul Clarke (University of Manchester) report here on the event.

The reading room of the John Rylands Library


It was in the stunning Christie Room of the Rylands Library that Dr. Guyda Armstrong, Lead Academic for Digital Humanities at the University of Manchester, welcomed the audience of publishers, academics, librarians, early career researchers, and students with a shared concern for the future of how the humanities might be produced, read, and preserved over the coming years. Five panellists were invited to present their case to the group before the floor was opened and the debate got into full swing.

The session was chaired by Professor Marilyn Deegan, Co-Investigator of The Academic Book of the Future project. She began by outlining the project’s main objectives and its future activities, details of which can be found at www.academicbookfuture.org. Before commencing with the presentations she asked the audience to quite simply reflect on what they conceived of when trying to define the book. It was this question, with its rather complex and capacious ramifications, that was the fundamental core of the Manchester Great Debate.

The first of the panellists to present was Frances Pinter, CEO of Manchester University Press. Since print runs of academic books have decreased in volume and their prices increased beyond inflation, she firmly believes that the future of the academic monograph will be governed by the principles of Open Access (OA). She contends that although the journey to OA will be difficult, drawing on the Crossick report to highlight such obstacles as the lack of a skilled workforce and the high cost of publication, it is impossible to deny the potential of the digital age to advance knowledge and maximise discovery. She identified Knowledge Unlatched, a not-for-profit organisation dedicated to assisting libraries to co-ordinate the purchase of monographs, as a pioneer in overcoming some of these obstacles. Under the scheme, the basic cost of publication is shared, and the works are made readily available as a PDF with an OA license via OAPEN. An initial pilot project saw the publication of twenty-eight new books at the cost of just $1,120 per library. The digital copies of these books were also downloaded in a staggering 167 countries worldwide, a true testament to the benefits of the OA monograph.
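The cost-sharing arithmetic behind the Knowledge Unlatched model is simple to sketch: the fixed publication costs of a package of titles are split equally among the pledging libraries. The figures for combined title costs and library numbers below are illustrative assumptions only; the one number taken from the text is the $1,120-per-library outcome for the twenty-eight-title pilot.

```python
# A minimal sketch of the Knowledge Unlatched cost-sharing arithmetic.
# ASSUMPTIONS: the combined fixed cost ($336,000) and the number of
# pledging libraries (300) are hypothetical values chosen so that the
# division reproduces the $1,120-per-library figure quoted above.

def per_library_cost(total_title_fees: float, num_libraries: int) -> float:
    """Each pledging library pays an equal share of the fixed publication costs."""
    return total_title_fees / num_libraries

share = per_library_cost(total_title_fees=336_000, num_libraries=300)
print(f"${share:,.0f} per library")  # → $1,120 per library
```

The key design point is that the per-library price falls as more libraries pledge, which is why the scheme depends on co-ordinating purchases across many institutions.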

Emma Brennan, Editorial Director and Commissioning Editor at Manchester University Press, followed with a convincing critique of the financial constraints of contemporary academic book publishing. The system, she claims, is fundamentally broken, favouring the short-form sciences over the humanities. A key part of this problem lies in the steep rises in purchase prices over recent years, with the result that a monograph, which once sold for around £50 in a run of five hundred copies, is now sold for upwards of £70 and oftentimes on a print-on-demand basis. However, more crucial still is the disparity between authorial costs and corporate profits. After all, typical profit margins for article processing charges (APCs) reached an astonishing 37% in 2014, undeniably privileging shareholders over authors. Under the current system, university presses are only ever able to operate on a not-for-profit basis whereby surplus funds must necessarily be reinvested to cover the costs of future APCs. Such a fragile structure can only continue in the short term, and so the need for a drastic upheaval is undeniable.

Next to present their case was Sandra Bracegirdle, Head of Collection Management at the University of Manchester Library. Through a variety of diagrams she was able to highlight a number of curious trends in the reading habits of library users. A particularly interesting point of discussion was the usability of electronic and print resources amongst student readers. While those who tended to prefer the former valued the mobility of text and the equality of access, readers of the latter valued the readability of the physical text. Interestingly, 50% of the students questioned said that they were more likely to read a book if it were available digitally, suggesting that “access trumps readability”. The decreased popularity of physical books is reflected further still by the fact that 27% of the books held within the library have not been borrowed for some ten years. She went on to suggest that the increased popularity of electronic formats, on the other hand, might be the result of a change in the way people use and encounter information, arguing that different book forms engender different cognitive styles. While she did not appear to have a strong preference one way or the other, she did point out the “emotional presence” of a physical book by concluding with a note from Cicero: “a room without books is like a body without a soul”.

She was followed by Dr. Francesca Billiani, Director of the Centre for Interdisciplinary Research in Arts and Languages (CIDRAL) at the University of Manchester. She fundamentally contends that the materialization of knowledge has changed beyond recognition in recent years, and that the academic monograph must adapt accordingly. After all, the book is no longer a stand-alone piece of writing, for it is firmly rooted within a digital “galaxy of artefacts” comprising blog posts, photos, and videos. Many readers no longer rely exclusively on the academic book itself in their reading of a subject, nor will they necessarily read the work in its entirety, choosing instead to read what they consider to be the most relevant fragments. Academics need to embrace these changes in their future writing by composing their monographs in a way that accommodates the new methods of knowledge dissemination. Yet, at the same time, she remained mindful of the fact that the monograph of the future must also retain its academic rigour and avoid falling into the trap of eclecticism.

The last of the speakers to present was Dr. George Walkden, Lecturer in English Linguistics at the University of Manchester. A self-proclaimed Open Access activist, he claims that academic books should not only be free in terms of cost but also in terms of what readers can do with them. He lamented that, in such a climate of exhaustive copyright limitations, individuals are all too readily branded as pirates for attempting to disseminate knowledge publicly. Although he remains sceptical of what these individuals share with the “cannon-firing, hook-toting, parrot-bearing sailors of the seven seas”, he believes that such labelling elucidates the many issues that have constrained both historic and modern publication practices. In the first instance, publishers should value the transmission of knowledge over their own profit. But, perhaps more crucially, it must be acknowledged that the copyright of a work should remain with its author. He fundamentally contends that academics predominantly write to increase societal wealth and readership, a mission that can only ever really be achieved through the whole-hearted acceptance of Open Access.

Once all of the panellists had concluded their presentations, the floor was opened to the audience and a stimulating debate ensued. One of the most contested issues to arise from the discussion was the ownership of copyright. Many institutions actually hold the copyright of works produced by academics whose research they fund, although they do not always choose to exert this right. Walkden questioned to what extent this practice effectively safeguards the interests of academics, particularly since it is oftentimes too costly for them to even justify challenging this. He argued that academics should be granted the power to make decisions concerning their own intellectual property, particularly regarding the OA nature of their work.

Copyright issues were complicated further by a discussion of Art History monographs, particularly with regards to third party content. The case of Art History is particularly curious since, as Brennan highlighted, print runs have continued to remain reasonably high. After all, many art historians tend to opt for physical books over their digital counterparts since problems can often arise with their visual reproductions if, for instance, the screen is not calibrated to the original settings used in its creation. The discussion then turned to the resurgence of a material culture, whereby consumers are returning to physical artefacts. The increased popularity of vinyl records in today’s digital music society was used to illustrate this point. It was nevertheless argued that such a comparison was counterproductive since consumers ultimately have the freedom to decide the medium through which they access music but do not always have this choice with regards to books. Perhaps the academic book of the future will permit such freedom.

A member of the audience then identified the notable absence of the student in such a discussion. Academics, both on the panel and in the audience, expressed concerns that students were able to access information too easily by simply using the key word search function to find answers. Many felt that the somewhat lengthy process of physically searching out answers was more valuable to developing their research skills. Students within the audience said that while they do like the speed with which they are able to process information, they also value the experience of going to bookshelves, possibly finding other items they had not initially set out to obtain. An interesting discussion followed on whether technology might ever be able to replicate the experience of a physical library, and to what extent learning can be productive within a digital environment.

While the future of the academic book remains unclear, certain issues materialise as central topics of debate. Concerns for copyright, visual reproductions, and third party content, for instance, must necessarily form a basis of this future discussion. But more so than this, authors must begin to write within the context of a rapidly emergent digital world by ensuring that their academic outputs engage precisely with new technological formats and platforms. The opening of the book has only just begun, and perhaps it is only through investment and interdisciplinary collaboration that the academic monograph will have a future.

 

 

Melek Karatas, Lydia Leech, and Paul Clarke are postgraduate students in medieval and early modern languages at the University of Manchester, with research interests in manuscript and print cultures of the literary book.

Three hundred years of piracy: why academic books should be free

This is a repost from George Walkden’s personal blog about Open Access in the context of academic linguistics. The original post can be found here.

I think academic books should be free.

It’s not a radically new proposal, but I’d like to clarify what I mean by “free”. First, there’s the financial sense: books should be free in that there should be no cost to either the author or the reader. Secondly, and perhaps more importantly, books should be free in terms of what the reader can do with them: copying, sharing, creating derivative works, and more.

I’m not going to go down the murky road of what exactly a modern academic book actually is. I’m just going to take it for granted that there is such a thing, and that it will continue to have a niche in the scholarly ecosystem of the future, even if it doesn’t have the pre-eminent role it has at present in some disciplines, or even the same form and structure. (For instance, I’d be pretty keen to see an academic monograph written in Choose Your Own Adventure style.)

Another thing I’ll be assuming is that technology does change things, even if we’d rather it didn’t. If you’re reluctant to accept that, I’d like to point you to what happened with yellow pages. Or take a look at the University of Manchester’s premier catering space, Christie’s Bistro. Formerly a science library, this imposing chamber retains its bookshelves, which are all packed full of books that have no conceivable use to man or beast: multi-volume indexes of mid-20th-century scientific periodicals, for instance. In this day and age, print is still very much alive, but at the same time the effects of technological change aren’t hard to spot.

With those assumptions in place, then, let’s move on to thinking about the academic book of the future. To do that I’m going to start with the academic book of the past, so let’s rewind time by three centuries. In 1710, the world’s first copyright law, the UK’s Statute of Anne, was passed. This law was a direct consequence of the introduction and spread of the printing press, and the businesses that had sprung up around it. Publishers such as the rapacious Andrew Millar had taken to seizing on texts that, even now, could hardly be argued to be anything other than public-domain: for instance, Livy’s History of Rome. (Titus Livius died in AD 17.) What’s more, they then claimed an exclusive right to publish such texts – a right that extended into perpetuity. This perpetual version of copyright was based on the philosopher John Locke’s theory of property as a natural right. Locke himself was fiercely opposed to this interpretation of his work, but that didn’t dissuade the publishers, who saw the opportunity to make a quick buck (as well as a slow one).

Fortunately, the idea of perpetual copyright was defeated in the courts in 1774, in the landmark Donaldson v. Becket case. It’s reared its ugly head since, of course, for instance when the US was preparing its 1998 Copyright Term Extension Act: it was mentioned that the musician Sonny Bono believed that copyright should last forever (see also this execrable New York Times op-ed). What’s interesting is that the original proposal was challenged back in the eighteenth century by Edinburgh-based publisher Alexander Donaldson – and, for his efforts to make knowledge more widely available, Donaldson was labelled a “pirate”. The term has survived, and is now used – for instance – to describe those scientists who try to access paywalled research articles using the hashtag #ICanHazPDF, and those scientists who help them. What these people have in common with the cannon-firing, hook-toting, parrot-bearing sailors of the seven seas is not particularly clear, but it’s clearly high time that the term was reclaimed.

If you’re interested in the 18th century and its copyright trials and tribulations, I’d encourage you to take a look at Yamada Shōji’s excellent 2012 book “Pirate” Publishing: The Battle over Perpetual Copyright in Eighteenth-Century Britain, which, appropriately, is available online under a CC-BY-NC-ND license. And lest you think that this is a Whiggish interpretation of history, let me point out that contemporaries saw things in exactly the same way. The political economist Adam Smith, in his seminal work The Wealth of Nations, pointed out that, before the invention of printing, the goal of an academic writer was simply “communicating to other people the curious and useful knowledge which he had acquired himself”. Printing changed things.

Let’s come back to the present. In the present, academic authors make almost nothing from their work: royalties from monographs are a pittance. Meanwhile, it’s an economic truism that each electronic copy made of a work – at a cost of essentially nothing – increases total societal wealth. (This is one of the reasons that intellectual property is not real property.) What academic authors want is readership and recognition: they aren’t after the money, and don’t, for the most part, care about sales. The bizarre part is that they’re punished for trying to increase wealth and readership by the very organizations that supposedly exist to help them increase wealth and readership. Elsevier, for instance, filed a complaint earlier this year against the knowledge sharing site Sci-Hub.org, demanding compensation. It beggars belief that they have the audacity to do this, especially given their insane 37% profit margin in 2014.

So we can see that publishers, when profit-motivated, have interests that run counter to those of academics themselves. And, when we look at the actions of eighteenth-century publishers such as Millar, we can see that this is nothing new. Where does this leave us for the future? Here’s a brief sketch:

  • Publishers should be mission-oriented, and that mission should be the transmission of knowledge.
  • Funding should come neither from authors nor from readers. There are a great many business models compatible with this.
  • Copyright should remain with the author: it’s the only way of preventing exploitation. In practice, this means a CC-BY license, or something like it. Certain humanities academics claim that CC-BY licenses allow plagiarism. This is nonsense.

How far are we down this road? Not far enough; but if you’re a linguist, take a look at Language Science Press, if you haven’t already.

In conclusion, then, for-profit publishers should be afraid. If they can’t do their job, then academics will. Libraries will. Mission-oriented publishers will. Pirates will.

It’s sometimes said that “information wants to be free”. This is false: information doesn’t have agency. But if we want information to be free, and take steps in that direction… well, it’s a start.


Note: this post is a written-up version of a talk I gave on 11th Nov 2015 at the John Rylands Library, as part of a debate on “Opening the Book: the Future of the Academic Monograph”. Thanks to the audience, organizers and other panel members for their feedback.

#AcBookWeek Events!

Academic Book Week (9-16 Nov) is next week! With a constellation of events being showcased all around the UK from Sussex to Edinburgh, this week highlights the wonderful work done by booksellers, libraries, academics, and publishers, and discusses the academic book across a spectrum of perspectives. Here we have collected events by location, so scroll through to see what is happening near you!

We have also just announced some competitions and offers that will be happening during the week! Including but not limited to winning an #AcBookWeek tote bag, winning a special leather-bound edition of “The Complete Works of Shakespeare”, and 50% off all academic books and classics at Southcart Books. Find out more about them on this page and keep checking because more are being added all the time!

Cambridge

Cambridge will be hosting an exhibit for the entire week at the University Library, presenting a selection of books showing examples of the way readers have interacted with their textbooks from the fifteenth to twentieth centuries. And on the 9th November Dr Rosalind Grooms and Kevin Taylor explore how Cambridge has shaped the world of academic publishing, starting way back in 1534.

Oxford

There are four events taking place in Oxford throughout the week. On the 9th November Frank Furedi, Professor of Sociology at University of Kent, discusses his new book “The Power of Reading”. Furedi has constructed an eclectic and entirely original history of reading, and will deliver a similarly exciting discussion on the historical relevance of the reader. Peter Lang Oxford are showcasing a book exhibit presenting the past and present of the academic book from the 9th-16th November; this event requires no registration, so just drop in anytime to have a look! On the 11th November Peter Lang again presents J. Khalfa and I. Chol who have recently published “Spaces of the Book”, exploring the life of books ‘beyond the page’. This launch will be followed by a drinks reception and discussion of the aforementioned week-long exhibit. The Oxford events culminate on the 12th November with a panel discussion on The Future of the Academic Monograph; four panelists and two respondents will address issues from their personal perspectives including academic librarianship, academic publishing, and academic bookselling.

Edinburgh

Edinburgh plays host to a series of debates around digital text during the week. The first is on the 9th November and the debate will cover online text and learning, the second on the 10th covers digital text and publishing, the third on the 11th covers open access textbooks, and the fourth on the 12th covers online learning. With speakers from eclectic backgrounds and unique perspectives these offer informative and insightful discussions. The week in Edinburgh finishes up on the 13th with a debate on the subject Is the Book Dead? This promises to be an interesting event with speakers from the Bookseller’s Association and Scottish Publishing covering issues about the future of books and reading.

Liverpool

Liverpool launches their Academic Book Week events with a talk at the University of Liverpool with Simon Tanner, from King’s College London, and member of the project team, as keynote speaker, and a subsequent overview of the week’s events. Simon will speak on ‘The Academic Book of the Future and Communities of Practice’ with Charles Forsdick and Claire Taylor responding from the perspectives of Translating Cultures and Digital Transformations, respectively. On the 10th November Claire Hooper of Liverpool University Press and Charlie Rapple from Kudos present ideas on how to promote your academic book via Kudos and social media, a fitting topic when thinking specifically about the future of the academic book. On the 11th Gina D’Oca of Palgrave Macmillan will speak about open access monographs and a representative from Liverpool University Press will give their perspective. The last event in Liverpool takes place on the 12th and will focus on the academic book as a freely available resource for students. Academics, librarians, and university presses should work together to create free open access sources for students, but how? Find out here!

Glasgow

John Smith’s Glasgow hosts all of the events taking place in the city throughout the week. The first night on the 9th the bookstore will stay open late and from 5:30-7:30pm all customers will receive special one-night-only discounts on items not already discounted! There will also be refreshments so there’s no excuse not to come and celebrate the longstanding partnership between John Smith’s and the University of Glasgow. On the 10th the bookstore hosts the launch of Iain Macwhirter’s new book, “Tsunami: Scotland’s Democratic Revolution”. On the 11th – purposefully coinciding with Armistice Day – John Smith’s hosts an evening of discussion and readings exploring Edwin Morgan’s unique contribution to Scotland’s poetry in response to war. With readings and contributions from friends and trustees of Edwin Morgan this evening will be a personal and creative contribution to the week. John Smith’s unique events don’t stop there! On the 12th Louise Welsh, Professor of Creative Writing at Glasgow University, discusses her recent novels and the editing of a new anthology of supernatural stories – perfectly atmospheric for the cold autumn evenings. John Smith’s last event takes place on the 13th, when author and astronomer (what a combination!) Dr. Pippa Goldschmidt discusses co-editing a new collection “I Am Because You Are”. She will be joined by contributor Neil Williamson as they talk science and fiction.

London

London has a large number of events happening, starting with a debate focusing on how the evolving technologies of the book have changed the way we read at The School of Advanced Study. The 10th sees two other events: Blackwell’s at UCL hosts the book launch of Shirley Simon’s “Narratives of Doctoral Studies in Science Education” and Rowman & Littlefield International offer a panel event on interdisciplinary publishing and research. The question being asked is how academics and publishers can reach a diverse, multidisciplinary audience; the panel will be followed by a Q&A session. The 11th plays host to two events: Palgrave Macmillan’s premier academic series in the history of the book is being launched and elsewhere Charlotte Frost outlines the future of the art history book. She asks ‘what should the art history book of the future look like and what should it do differently for the discipline to evolve?’ Since 2015 marks the 400th anniversary of Richard Baxter’s birth, a symposium to honour his life and assess his significance takes place on the 13th, as well as a panel discussion at the Wellcome Collection specifically targeting questions related to STM publishing and issues facing humanities research. The 12th November also sees The Independent Publishers Guild Autumn Conference with representatives from Academic Book Week, Dr Samantha Rayner, Richard Fisher, Eben Muse and Peter Lake, speaking on a panel.

Hertfordshire & Cardiff

We have one event happening in Hertfordshire in conjunction with the University of Hertfordshire Press and Hertfordshire Archives and Local Studies! This is for anyone considering getting their research published: it combines local history and publishing in an effort to help and advise on writing book proposals and approaching publishers. Similarly, in Cardiff there is an event on the 11th November utilising a forum to discuss innovative Open Access academic publishing ventures.

Manchester & Bristol

In Manchester on the 11th there will be a panel discussion presented by Digital Humanities Manchester and the University of Manchester Library as they get to the root of the issues presented in academic publishing. Multiple perspectives will make this a fascinating event as panelists attempt to answer questions such as what is the future of the academic long-form publication in the evolving publishing landscape? And is there still a future for the physical book? And in Bristol on the 10th November a panel tackles the questions facing the academic book from the perspective of the panel and the audience.

Sussex & Sheffield

In Sussex on the 11th there is a similar panel discussion; three speakers from different backgrounds grapple with the transformation of the academic book and what that will mean for the future. On the 11th in Sheffield an important and fascinating question is asked: Should we trust Wikipedia? Librarians and scholars from a range of backgrounds discuss the validity of the information there and address questions of integrity surrounding digital publishing. Sheffield finishes off Academic Book Week on the 13th with an open afternoon in the University of Sheffield Library’s Special Collections, introducing visitors to treasures from their collection.

Dundee & Stirling

Dundee and Stirling partake in the excitement of the week also! On the 11th November Dundee presents the Wikipedia Edit-A-Thon as part of the NEoN Digital Arts Festival and on the 13th a mini-symposium focused on the intersection of tradition and craft with the digital transformation of art and design. In Stirling on the 12th John Watson, Commissioning Editor for Law, Scottish Studies & Scottish History at Edinburgh University Press will be speaking to students of the Stirling Centre for International Publishing and Communication about academic publishing and his role as a commissioning editor.

Leicester & Nottingham

When we mentioned that events were happening all over the country we really meant it! If you are near the Midlands, De Montfort University is bringing together PhD students to think about the future of the English PhD and the future training of English academics. And last but not least on the 12th in Nottingham Sprinting to the Open FuTure takes place – a panel discussion event bringing together those who interact with academic books to explore questions about how students and staff publish, and the challenges they face.

With so much happening, it will be hard to choose – we know we are already having trouble deciding between events. Come to as many as you can, and help support the future of the academic book!

Open Access and Academic Publishing

Independent information services professional Ian Lovecy suggests that there are a number of questions – philosophical and practical – which need to be answered before open access could be a sound and sustainable method of academic publishing. This post makes no attempt to answer them, but rather to identify them and perhaps open up some of the issues involved to discussion.

What do we mean by “open access”?

Time was, I could walk into my public library, ask for a book or a journal article, and if they didn’t have it they would obtain it for me through inter-library loan; that was open access to information, and it died in the ’70s and ’80s. In those decades, access remained open, but subject increasingly to charges, primarily to cover the administrative costs of the service. Requests also became subject to a form of censorship, requiring proof of need or (in Universities) a tutor’s signature.

Today we have the Internet, and access to much of the information on it is available to anyone with access to a computer. (This is theoretically anyone in the UK since computers are available in public libraries and Internet cafés, although opening hours, location, costs, line speed and computer literacy may all impose limitations.) Not all the information is available free of charge, but subject to questions of privacy and confidentiality, public interest, security and government policy on access, it is available to all.

Two questions relating to academic information immediately become apparent:

  • Do we mean free open access?
  • Do we mean open access to the entire world?

Equally, in the case of inter-library loans, it was understood that the material was governed by copyright legislation; frequently, especially in cases where material was provided as a photocopy, recipients had to sign a declaration that they would observe copyright. Items published on the Internet are, or at least can be declared to be, subject to the same legislation, but the enforcement is even harder than it is with library books (and I am sure many lecturers have used the occasional copyright photograph in their lectures without seeking permission). In theory, enforcement should be easier in the case of electronic access, since such access can be traced; in practice, with multiple access by people in several different jurisdictions control is effectively impossible. A further question is therefore:

  • Do we want to put restrictions on the use of the information?

 

What are the reasons for open access publishing?

A frequently-heard justification is that since public funding pays for the research the results should be publicly available. This is at best a slightly tenuous argument – even after the passing of the Freedom of Information Act there is still a great deal of publicly-funded information to which the public most decidedly do not have access. It can, in any case, apply only to a subset of research, primarily that funded wholly by the Research Councils. However, the current intention is that all material, if it is to be included in the REF, must be available on open access.

In the past, there has been an underlying assumption that all research undertaken in Universities is publicly funded; this is no longer tenable. Even ignoring the existence of entirely privately-funded Universities, much research – particularly in medicine, biochemistry and the social sciences – is jointly funded by research councils and either charities or business (or sometimes both); there may be restrictions on the amount of information which can be published because of commercial considerations. Many academic posts in the Humanities are now funded entirely by student fees – surely that cannot count as public funding?

It should not be forgotten that there exists also a group of independent researchers – retired academics, former students who have gone into non-academic work and self-taught members of the public with a keen interest in a specific topic. None of these is likely to be submitting material to the REF (with the possible exception of the first group) and they are not therefore under pressure to use open access publishing; they will, however, be affected by some of the consequences of it considered below.

There can be few researchers who do not wish their work to be read, appreciated and cited by others, and for many who publish in the form of journal articles this is indeed the only reward they have. It is understandable that they may feel exploited when they see the price charged for the journals in which they publish; it is even more understandable that institutions resent paying a high price to buy back the results of work which they feel they have funded. Is the correct answer to this problem making the information available to all? What about monographs? – in this case the authors may receive a (small) financial reward in the form of royalties. Are they to be denied this? After deciding what we mean by open access, the next question to answer is:

  • Is there any moral or philosophical justification for insisting on open access publishing?

What might be the practical effects of open access publishing?

The practical effects can be considered under five headings: the value of information, effects on conventional publishing, location and language of publication, universality of access and costs.

The value of information

A professor (of English literature, no less!) once told me there was no need for subject librarians because “all students had to do was use the Internet to find things”. I put the following fairly specific search into Bing: “studies in Shakespeare’s Henry VIII”. That is, of course, one of the most minor of the plays; the search returned 23,500,000 hits. The first 20 included a Wikipedia entry, several references to SparkNotes, summaries and quizzes, one text, one (Spanish) production, and several references to A study of Shakespeare’s Henry VIII by Cumberland Clark. Which is doubtless an excellent book; but a similar search in Birmingham University Library’s catalogue shows in addition, in the first 10 items, books by Larry Champion, Alan Young, Sir Edward German, Tom Merriam, Maurice Hunt and Albert Cook, a text with a preface by Israel Gollancz, and a production by the Royal Shakespeare Company. Some of the books are on detailed aspects of the play or its authorship. It is a manageable list, and represents the selection (you could call it censorship) by a group of scholars over a number of years of books which say something worth reading about the play.

That selection is made in a number of ways, such as the reputation or place of work of the author, the reputation of the publisher, reviews in newspapers and professional journals. There can be dangers in all of these: an author may have a reputation as a maverick and be scorned by established academics; an academic who doesn’t work in a Russell Group university may nonetheless be very good; Mills and Boon might publish a scholarly book; reviewers may have personal axes to grind. However, behind all of this is the publisher: it is the publisher who publicises the book, sends around lists of forthcoming volumes to libraries and academics, sends out review copies. Going back one step, publishers’ editors decide which books to take on, and there can be problems here for those with radically new ideas; the existence of a flourishing, competitive industry is one way of minimising the risk of censorship.

In an open access world, the radical and the maverick are in less danger of being stifled by the establishment; but they have an even greater risk of being lost in the mass of irrelevance which comes pouring out of a search. Only their institution might help to refine the search, and even this might not assist given the lack of sophistication of most search engines: adding “published by Universities” to my search had some effect – it reduced it to a mere 9,300,000 hits. So a vital question in relation to open access is:

  • How do we sort the wheat from the chaff?

Effects on conventional publishing.

If open access publishing of monographs became the default option – as it might if open access became a requirement of the REF – the effects on the academic publishing industry could range from severe to catastrophic. Much would depend on a question asked above, and explored further below: is open access to be free access? Electronic publication is not necessarily free – e-books are often cheaper than printed copies, but librarians would question whether even this is true of e-journals – but payment is made by somebody in some way. If, however, open access were to mean free or cheap access, academic publishing could become unsustainable; even today margins are small and there is often cross-subsidy within major publishers from more lucrative parts of the list. University presses are often subsidised by parent institutions, usually as part of institutional marketing.

A significant decline in the number of academic publishers would (as indicated above) greatly affect the way in which published research was publicised. It would also leave independent scholars outside the university system with little or no choice of where to submit a manuscript, thus potentially reducing the amount of information and scholarship to which the world has access.

However, despite talk of “webs” and “clouds”, it must be remembered that the Internet is a very physical thing at heart: it needs servers which hold the information. Storage of digitised material is becoming ever cheaper; costs of maintenance of equipment are not. Servers sometimes go down – ask any customer of the Royal Bank of Scotland! – and the more information on a single server the more inconvenience caused when this happens. One way of minimising this problem is to scatter the information on a number of machines; another is to duplicate it on more than one server. Might publishers become involved in this? Would every university want to dedicate machines and staff time to such an operation? Who would publicise new monographs, or persuade people to review them? These questions could be summed up as:

  • Would there be a place for academic publishers in an open access system?

Location and language of publication

In the age of the Internet, research collaboration across national borders is common; however, with the important exception of the United States, commitment to open access publication is not. For institutions and scholars in many countries, publication in respected journals which are not open access may be important for prestige or career purposes. Hitherto in the UK, this conundrum has usually been solved by the open access “green” version of a paper (the penultimate draft), leaving the final version to be published normally; the “green” version is acceptable to the Research Councils (and so far to the REF) as satisfying their conditions.

If it is decided that all material for submission to the REF must be available as open access, a further problem arises. Researchers in linguistics or the literature of other languages and cultures frequently publish in non-English languages in journals published in the relevant country. Open access journals in, for example, Mandarin or Sanskrit, Latin or even French, may be hard to find! Open access publication of monographs might be possible, but probably only through a UK publisher – depending on the answers to the questions above. This could affect the breadth of the item’s reception, which, as well as diminishing any royalties that might still be available, could significantly reduce its impact in respect of a REF submission.

An important question to be considered if open access academic publishing is to become the default expectation is:

  • Are foreign language publications to be exempted, and if not what provision is to be made for them?

Access to “Open access” and its costs

As suggested above, “open access” is usually interpreted as free access, but this is not without cost. Universities have so far been willing to place science articles on local servers at marginal cost; if humanities publishing and monographs are added, the costs of maintenance over the next fifty years will probably be rather more than marginal, even in research-intensive universities. Moreover, there will be a need for more sophisticated search software, akin to that in use by libraries – and as librarians will confirm, such software is not cost-free.

Moreover, the costs of indexing may be increased. If articles are not collected into journals, indexers will have to search over a hundred sites for potential material. This could be carried out by software, but again such software would have a cost; and there would be the added problem that software working by gleaning key words from titles or full text may not take account of the context. (It sometimes happens with human cataloguing – I have seen a book on Keats entitled The mirror and the lamp classified as optics!)

Alternatively, material (at least articles, although not monographs) could be collected into online journals. This could ease the problems of refereeing and therefore selection of useful material, although it would bring back the possible problems with the current system of refereeing – which have recently included costs in terms of time, if not of money. But online journals would need editors and some level of administrative staff – publishers, in other words – and there would be costs involved. Who would pay them? If it is expected to be users, we are back to the question of whether open access is to be free; and if it is paid for by institutions we are likely to find those who do not belong to such an institution disenfranchised.

There are also hidden costs in terms of the use of materials. Screens and readers are improving all the time (although that is also a cost – I don’t need equipment to read a book) but many people still find prolonged use uncomfortable. Hyperlinks can facilitate the movement from index to relevant page, but activities which require having more than one volume open at a time – comparing two editions, for example, or reading a critical work in conjunction with a text – can be awkward.

A book published 400 years ago is (generally) as easy to read as one published four days ago; computer software is upgraded frequently, and although upward compatibility is often included, there are sometimes step changes – Windows 10 has provided examples, and many word processing systems confine upward compatibility to perhaps the last five versions. In my research I used a number of books and articles published 100 years previously, and probably little-used in between; how accessible will material published today be in 100 years, and what will be the costs of keeping it accessible?

There are a number of questions arising under this heading:

  • Will there be a need for new indexing and/or searching software, and if so who will pay?
  • Will in-built upward compatibility in software cope with material published a century earlier, and if not how will upgrading be managed?
  • If there are costs in respect of open access which are borne collectively by institutions rather than by the end-user, will some potential end-users find themselves without access?
  • How can the problems related to potential inconvenience of use be overcome?

Ian Lovecy MA, PhD, Hon FCLIP, FCLIP, MAUA

What do you think of the issues and questions raised in this post?

Are there others?

Get in touch below!

Specialist perspectives: the Project works with the Miltonists

The Project was recently invited to speak at the Eleventh International Milton Symposium (University of Exeter, 20-24 July) by Professor Thomas Corns. Prof. Corns is a member of the Project’s Advisory Board as well as an eminent Milton scholar – he was recently awarded a British Academy Fellowship in recognition of his contribution to Milton studies – and is therefore ideally situated to channel (and provoke!) conversation between the Project and this group of specialist researchers. This post is a summary of the issues, thoughts, concerns, and ideas that arose during this session.

Thanks to @RichardACarter for live-tweeting the session! Credit: @RichardACarter


After a brief presentation from Rebecca Lyons to introduce the Project, outline its aims, summarise progress to date, and explain why the Project was at a symposium on Milton, Prof. Corns took over. He started off with a quotation:

‘The monograph is something that every academic wants to write, few academics want to read, and no academic wants to buy’, as a distinguished commissioning editor once provocatively remarked.

Prof. Corns then put into play the view that the monograph constitutes the ‘gold standard’ for arts and humanities scholars, a view that certainly shaped institutional thinking across the sector in preparation for the recent REF, but he asked: if very few people want to read these books, and even fewer are buying them – what is the rationale behind this status? Why is the monograph still supreme?

A member of the audience responded, considering disciplines besides those in the arts and humanities:

 

‘I frequently work with colleagues in STEM (science, technology, engineering, and mathematics), and when you ask them to read a book, they’re reluctant because they work in articles. I love the book, but insisting on the monograph as the gold standard keeps the arts and humanities segregated from these other areas, and therefore somewhat limited.’

 

The issue of ‘monograph vs journal article’ has cropped up fairly regularly in Project conversations with other stakeholder groups and communities, from a range of angles – including the idea of ‘thesis-by-articles’ as an alternative to the 80-100,000 word monograph that has hitherto been the standard model. Responses to this proposal have ranged from the enthusiastic to the horrified, so this was a pertinent point.

Another participant offered an alternative response:

 

‘If we bow to pressure to exclusively publish articles rather than books, then we will lose what we do really well in the arts and humanities. Yes, we can write very good articles too, and yes, it is a very good idea to engage with our counterparts in science and engineering – but it is not necessary to give up the long form monograph in order to do these things.’

 

The conversation shifted slightly, considering the implications of monographs and journals, hard copy and digital, for libraries and their expenditure on research resources. A Miltonist working in the US stated:

 

‘There is a huge crisis in library funding. My institution’s library has been cut so far to the bone that we don’t even automatically buy books published by the big university presses anymore like we used to. More and more we are relying on digital resources. Articles provide a much more accessible and immediate resource.’

 

But again, there was an alternative view (from another US-based scholar):

 

‘We have the opposite situation – my institution’s library doesn’t automatically buy all books but will buy all books on reading lists made by academics. It does not, however, subscribe to all the online journals as this is too expensive for our budgets.’

 

He went on to make the point that some universities feel “walled out” by subscription prices combined with restricted budgets:

 

‘$100 for one academic book is still cheaper than a $1000 journal subscription that expires within a year. And at least you get to keep the book! Digital, online content is not this egalitarian utopia it’s sometimes made out to be.’

 

Another comment on this came from another scholar, citing the need to distinguish long-term and short-term consultation of material:

 

‘There are several examples of texts that I’d want to access for five minutes, just to check something, but only a few where I’d actually want to own them.’

 

The subject of available institutional funding for the purchase of books and subscriptions seemed to be a pivotal concern. The conversation continued with a suggestion:

 

‘How about the interlibrary loan of digital texts? It’s what happens with physical books – why not digital ones?’

 

Here the conversation turned to other digital matters – starting with Open Access (OA). One scholar condemned OA in no uncertain terms:

 

‘It is the spume of the devil.’

 

Others had questions:

 

‘At places like the British Library or Library of Congress is there, or will there be, an obligation for digital books to be made available, as physical ones are?’

 

Or concerns, about the present state of things:

 

‘Intellectual property is an issue: if one of your books is available digitally – what is to stop it being misused? Many of us have seen agreements violated, for instance, and PhD theses sold immediately, despite an embargo. The more we move into the digital, the more likely this is to be a problem. We must be aware of how our work makes it into the public sphere – it has become necessary to Google ourselves and check regularly what is out there.’

 

As well as the future:

 

‘In the 2020 REF monographs will be excluded from the obligation to be OA, whereas articles won’t be – what will be the implications of this?’

 

Other concerns centred upon business models:

 

‘I work for a journal and if we are made to open up our content for free then we will disappear.’

 

Or career issues:

 

‘If your thesis is OA then it can be problematic to have it published. It becomes an issue of hiring and tenure. The American Historical Association advised all graduate students not to make their theses OA.’

 

There were also suggestions:

 

‘Could University Presses create a consortium to open books up for a small subscription fee, like Spotify for books?’

 

Here the conversation shifted to the authors and how the drive towards OA affects them:

 

‘Academics as authors are increasingly threatened by these forces – we need better rights to protect the authors.’

 

Another scholar also commented on these ‘forces’, using the analogy of airlines in the US that are conglomerating:

 

‘You get less and less choice for more and more money. I am worried that this is happening with publishing and platforms. In terms of authors and editors, our individuality and choice is being taken away.’

 

I couldn’t help but think of huge supermarkets here, in response to which small organic grocers have sprung up. Or instances where people start to grow vegetables themselves instead. Will people publish themselves in the future?

Some attendees wondered about teaching in a digital world: how do students use books and create their own content, and what other resources do they draw on, such as the excellent Milton Reading Room hosted by Dartmouth College (http://www.dartmouth.edu/~milton/reading_room/contents/text.shtml)? How is teaching going to be affected by these new books, materials, and new contexts?

 

One scholar commented:

 

‘I work at an institution that has a footprint in one place but also has commitments in education elsewhere (Palestine), so the digital content that we subscribe to has a great reach, and is really valued by these students who wouldn’t be able to access this content otherwise.’

 

Prof. Corns was forced to draw the conversation to a close due to time constraints, but it was clear that we had only just started to scratch the surface. One final closing comment from an attendee resonated, not only with the aims and scope of the Project, but with the rest of the scholars in the room, and probably beyond:

 

‘The questions and comments are all too small. This is not about the Future of the Academic Book. This is about the Future of the Humanities.’

 

Do these points resonate in your discipline?

Are there others for you and your colleagues?

Do you vehemently disagree with any of the above?

Get in touch using the comments below!

 

Note: The views given above are not necessarily those of the Project or its partners, or Milton scholars en masse! The Project has attempted, insofar as possible, to accurately capture the views and opinions expressed at this event. All opinions and comments have been anonymised.