Quadrivium XI – Day Two

Day Two of Quadrivium XI at De Montfort University highlighted the past, present and future of academic books for medievalists.

We started with hands-on workshops on ‘The Making of a Book in the Pre-Digital Age’: how was a book “created” before digital technologies were introduced to the world of publishing? Participants made quill pens and wrote with them in the Trinity House Scriptorium, and experienced type-setting and hand-pressing in the printing workshop at the Centre for Textual Studies.

One thing is certain: we, as medievalists, appreciate handwriting and printing technologies, but we cannot ignore the impact of digital technologies either.

For much of the 20th century, an academic book for medievalists was relatively easy to identify. It often embodied at least 20 years of rigorous scholarship. It was typically a thick hardcover volume published by a reputable publisher. It was often expensive, but that was acceptable: the book was meant to be bought by university libraries and kept on their shelves for hundreds of years. It was a big, significant and eye-opening book, which would be read, cited and used over and over by all scholars in the field. Digital technologies, however, have changed the methodologies for researching, producing and delivering scholarship, and the impact of digital environments on scholarly publishing is now self-evident.

Prof. Wendy Scase remembers the days when she was a student. A computer back then was a huge machine, which filled up an entire room in a university. Since then, things have changed rapidly. In 2012, she and her team published a facsimile of the Vernon manuscript — one of the largest surviving medieval manuscripts, 22 kg, 350 leaves, 544mm x 393mm — on a single DVD-ROM.

For Dr Ryan Perry, the key academic book was, and still is, A Manual of the Writings in Middle English. He has, however, also been involved in many manuscript-based online projects (Imagining History, Geographies of Orthodoxy), and is now planning a new project with ambitious digital aspects.

Dr Orietta Da Rold’s career as a medievalist also started with a multi-volume hardcover academic book: Manly and Rickert’s The Text of the Canterbury Tales. Scrutinising a catalogue description in this book made her think further about the use of paper in Medieval England, and she is now working on a digital project, The Mapping Paper in Medieval England.

Dr Hollie Morgan is probably one of the first medieval scholars to use “word clouds” in a PhD; hers was ‘Between the Sheets: Reading Beds and Chambers in Late-Medieval England’. She is now working on the Imprint project, where medieval texts meet material culture and cutting-edge digital technologies.

Dr Takako Kato asked the participants to come up with their own ideas of how they would tackle the challenges and difficulties they might encounter, should they start digital projects now. Using The Production and Use of English Manuscripts 1060 to 1220 as a springboard, the participants discussed topics such as:

  • The longevity of research data, and how to keep the data updated.
  • Regular updating of the online framework to incorporate new technologies, such as apps for reading on hand-held devices.
  • An option to print websites as books on demand.
  • The significance of sophisticated search engines.
  • The possibility of charging subscription fees to maintain the website.
  • Creating collaborative working environments using social media.
  • Interactive resources, such as a pronunciation guide.
  • The use of manuscript images online.
  • The use of word clouds.
  • Collaboration with other digital projects.

After two days of intensive discussions, QuadXI concluded with food for thought:

  • Do we read differently in print and on screen? Some of us do, some don’t; it depends on the nature of the texts too.
  • What are the perceptions of digital books? Are we happy to publish digital-only monographs? Or, do we still consider print books to be “better”?
  • Are current PhD students better equipped and trained to work in digital environments than PhD students 10–20 years ago? Not necessarily! We found that current PhD students strongly feel the need for training in how to ask the right questions using digital technologies.
  • Digital technologies bring medievalists into conversation with specialists from other disciplines, like Dr Morgan, who now regularly discusses taxonomy with a forensics team.
  • If you work as a team member in a digital project, how is your work recognised?

We hope to see you at Quadrivium XII in Glasgow to continue these discussions!


#AcBookWeek: The Academic Book of the Future: Evolution or Revolution?


This post reflects on one of the events held during Academic Book Week in Cambridge: a colloquium that aired multiple viewpoints on where the future of the academic book lies, from the perspectives of booksellers, librarians, and academics.

During the week of the 9th of November, the Centre for Material Texts (CMT) convened a one-day colloquium entitled ‘The Academic Book of the Future: Evolution or Revolution?’ The colloquium was part of Cambridge’s contribution to a host of events held across the UK in celebration of the first ever Academic Book Week, itself an offshoot of the AHRC-funded ‘Academic Book of the Future’ project. The aim of that project is both to raise awareness of academic publishing and to explore how it might change in response to new digital technologies and changing academic cultures. We were delighted to have Samantha Rayner, the PI on the project, introduce the event.


The first session kicked off with a talk from Rupert Gatti, Fellow in Economics at Trinity and one of the founders of Open Book Publishers, explaining ‘Why the Future is Open Access’. Gatti contrasted OA publishing with ‘legacy’ publishing and emphasized the different orders of magnitude of the audience for these models. Academic books published through the usual channels were, he contended, failing to reach 99% of their potential audience. They were also failing to take account of the possibilities opened up by digital media for embedding research materials and for turning the book into an ongoing project rather than a finished article. The second speaker in this session, Alison Wood, a Mellon/Newton postdoctoral fellow at the Centre for Research in the Arts, Social Sciences and Humanities in Cambridge, reflected on the relationship between academic publishing and the changing institutional structures of the university. She urged us to look for historical precedents to help us cope with current upheavals, and called in the historian Anthony Grafton to testify to the importance of intellectual communities and institutions to the seemingly solitary labour of the academic monograph. In Wood’s analysis, we need to draw upon our knowledge of the changing shape of the university as a collective (far more postdocs, far more adjunct teachers, far more globalization) when thinking about how academic publishing might develop. We can expect scholarly books of the future to take some unusual forms in response to shifting material circumstances.


The day was punctuated by a series of ‘views’ from different Cambridge institutions. The first was offered by David Robinson, the Managing Director of Heffers, which has been selling books in Cambridge since 1876. Robinson focused on the extraordinary difference between his earlier job, in a university campus bookshop, and his current role. In the former post, in the heyday of the course textbook, before the demise of the Net Book Agreement and the rise of the internet, selling books had felt a little like ‘playing shops’. Now that the textbook era is over, bookshops are less tightly bound into the warp and weft of universities, and academic books are becoming less and less visible on the shelves even of a bookshop like Heffers. Robinson pointed to the ‘crossover’ book, the academic book that achieves a large readership, as a crucial category in the current bookselling landscape. He cited Thomas Piketty’s Capital as a recent example of the genre.


Our second panel was devoted to thinking about the ‘Academic Book of the Near-Future’, and our speakers offered a series of reflections on the current state of play. The first speaker, Samantha Rayner (Senior Lecturer in the Department of Information Studies at UCL and ‘Academic Book of the Future’ PI) described the progress of the project to date. The first phase had involved starting conversations with numerous stakeholders at every point in the production process, to understand the nature of the systems in which the academic book is enmeshed. Rayner called attention to the volatility of the situation in which the project is unfolding—every new development in government higher education policy forces a rethink of possible futures. She also stressed the need for early-career scholars to receive training in the variety of publishing avenues that are open to them. Richard Fisher, former Managing Director of Academic Publishing at CUP, took up the baton with a talk about the ‘invisibles’ of traditional academic publishing—all the work that goes into making the reputation of an academic publisher that never gets seen by authors and readers. Those invisibles had in the past created certain kinds of stability—‘lines’ that libraries would need to subscribe to, periodicals whose names would be a byword for quality, reliable metadata for hard-pressed cataloguers. And the nature of these existing arrangements is having a powerful effect on the ways in which digital technology is (or is not) being adopted by particular publishing sectors. Peter Mandler, Professor of Modern Cultural History at Cambridge and President of the Royal Historical Society, began by singing the praises of the academic monograph; he saw considerable opportunities for evolutionary rather than revolutionary change in this format thanks to the move to digital. The threat to the monograph came, in his view, mostly from government-induced productivism. 
The scramble to publish for the REF as it is currently configured leads to a lower-quality product, and threatens to marginalize the book altogether. Danny Kingsley, Head of Scholarly Communication at Cambridge, discussed the failure of the academic community to embrace Open Access, and its unpreparedness for the imposition of OA by governments. She outlined Australian Open Access models that had given academic work a far greater impact, putting an end to the world in which intellectual prestige stood in inverse proportion to numbers of readers.


In the questions following this panel, some anxieties were aired about the extent to which the digital transition might encourage academic publishers to further devolve labour and costs to their authors, and to weaken processes of peer review. How can we ensure that any innovations bring us the best of academic life, rather than taking us on a race to the bottom? There was also discussion about the difficulties of tailoring Open Access to humanities disciplines that relied on images, given the current costs of digital licences; it was suggested that the use of lower-resolution (72 dpi) images might offer a way round the problem, but there was some vociferous dissent from this view.


After lunch, the University Librarian Anne Jarvis offered us ‘The View from the UL’. The remit of the UL, to safeguard the book’s past for future generations and to make it available to researchers, remains unchanged. But a great deal is changing. Readers no longer perceive the boundaries between different kinds of content (books, articles, websites), and the library is less concerned with drawing in readers and more concerned with pushing out content. The curation and preservation of digital materials, including materials that fall under the rules for legal deposit, has created a set of new challenges. Meanwhile the UL has been increasingly concerned with working with academics, in order to understand how they are using old and new technologies in their day-to-day lives, and to ensure that it provides a service tailored to real rather than imagined needs.


The third panel session of the day brought together four academics from different humanities disciplines to discuss the publishing landscape as they perceive it. Abigail Brundin, from the Department of Italian, insisted that the future is collaborative; collaboration offers an immediate way out of the often closed-off worlds of our specialisms, fosters interdisciplinary exchanges and allows access to serious funding opportunities. She took issue with any idea that the initiative in pioneering new forms of academic writing should come from early-career academics; it is those who are safely tenured who have a responsibility to blaze a trail. Matthew Champion, a Research Fellow in History, drew attention to the care that has traditionally gone into the production of academic books—care over the quality of the finished product and over its physical appearance, down to details such as the font it is printed in. He wondered whether the move to digital and to a higher speed of publication would entail a kind of flattening of perspectives and an increased sense of alienation on all sides. Should we care how many people see our work? Champion thought not: what we want is not 50,000 careless clicks but the sustained attention of deeply-engaged readers. Our third speaker, Liana Chua, reported on the situation in Anthropology, where conservative publishing imperatives are being challenged by digital communications. Anthropologists usually write about living subjects, and increasingly those subjects are able to answer back. This means that the ‘finished-product’ model of the book is starting to die off, with more fluid forms taking its place. Such forms (including film-making) are also better-suited to capturing the experience of fieldwork, which the book does a great deal to efface. Finally, Orietta da Rold, from the Faculty of English, questioned the dominance of the book in academia.
Digital projects that she had been involved in had been obliged, absurdly, to dress themselves up as books, with introductions and prefaces and conclusions. And collections of articles that might better be published as individual interventions were obliged to repackage themselves as books. The oppressive desire for the ‘big thing’ obscures the important work that is being done in a plethora of forms.


In discussion it was suggested that the book form was a valuable identifier, allowing unusual objects like CD-ROMs or databases to be recognized and catalogued and found (the book, in this view, provides the metadata or the paratextual information that gives an artefact a place in the world). There was perhaps a division between those who saw the book as giving ideas a compelling physical presence and those who were worried about the versions of authority at stake in the monograph. The monograph model perhaps discourages people from talking back; this will inevitably come under pressure in a more ‘oral’ digital economy.


Our final ‘view’ of the day was ‘The View from Plurabelle Books’, offered by Michael Cahn but read in his absence by Gemma Savage. Plurabelle is a second-hand academic bookseller based in Cambridge; it was founded in 1996. Cahn’s talk focused on a different kind of ‘future’ of the academic book—the future in which the book ages and its owner dies. The books that may have marked out a mental universe need to be treated with appropriate respect and offered the chance of a new lease of life. Sometimes they carry with them a rich sense of their past histories.


A concluding discussion drew out several themes from the day:


(1) A particular concern had been where the impetus for change would and should come from—from individual academics, from funding bodies, or from government. The conservatism and two-sizes-fit-almost-all nature of the REF act as a brake on innovation and experiment, although the rising significance of ‘impact’ might allow these to re-enter by the back door. The fact that North America has remained impervious to many of the pressures that are affecting British academics was noted with interest.


(2) The pros and cons of peer review were a subject of discussion—was it the key to scholarly integrity or a highly unreliable form of gatekeeping that would naturally wither in an online environment?


(3) Questions of value were raised—what would determine academic value in an Open Access world? The day’s discussions had veered between notions of value/prestige that were based on numbers of readers and those that were not. Where is the appropriate balance?


(4) A broad historical and technological question: are we entering a phase of perpetual change, or do we expect that the digital domain will eventually slow down, developing protocols that seem as secure as those that we used to have for print? (And would that be a good or a bad thing?) Just as paper had to be engineered over centuries in order to become a reliable communications medium (or the basis for numerous media), so too the digital domain may take a long time to find any kind of settled form. It was also pointed out that the academic monograph as we know it today was a comparatively short-lived, post-World War II phenomenon.


(5) As befits a conference held under the aegis of the Centre for Material Texts, the physical form of the book was a matter of concern. Can lengthy digital books be made a pleasure to read? Can the book online ever substitute for the ‘theatres of memory’ that we have built in print? Is the very restrictiveness of print a source of strength?


(6) In the meantime, the one thing that all of the participants could agree on was that we will need to learn to live with (sometimes extreme) diversity.


With many thanks to our sponsors, Cambridge University Press, the Academic Book of the Future Project, and the Centre for Material Texts. The lead organizer of the day was Jason Scott-Warren (jes1003@cam.ac.uk); he was very grateful for the copious assistance of Sam Rayner, Rebecca Lyons, and Richard Fisher; for the help of the staff at the Pitt Building, where the colloquium took place; and for the contributions of all of our speakers.


Open Access: A Personal Take

In the second of our blogs this week on OA, following on from Open Access Week last week, Alastair Horne gives his personal reflection on the challenges ahead…


I have a few reservations about Open Access.

In some respects, that’s hardly surprising. After all, I work for a big publisher – not, admittedly, an Elsevier, but still one of the world’s largest university presses, one of those not-for-profit organisations whose deep differences from the likes of Elsevier are too commonly elided in the recurrent syllogism that ‘Elsevier is a publisher; Elsevier is a profiteer; publishers are profiteers.’

On the other hand, it’s also very surprising indeed. I’m an instinctive socialist who broadly supports concepts like Labour’s long-abandoned Clause Four, who still regards ‘the common ownership of the means of production, distribution and exchange’ as a laudable aspiration, and who would happily vote to renationalise the railways, for starters. On that basis, why wouldn’t I support a system that seeks to liberate scholarly research from private enterprise and make it freely available to those who need it?

A third factor in this complicated relationship with open access is that I’m also a humanities researcher manqué; an English graduate with an unfinished PhD thesis (which celebrated its twentieth anniversary last year; there wasn’t a party). As it happens, the debate on open access that I attended last Friday – the prompt for all this self-indulgent soul-searching – took place at Cambridge’s Divinity School, where I sat the last of my undergraduate exams in English twenty years ago, and made such a singularly bad fist of writing essays on twentieth century poetry that I imperilled my funding for that PhD.

But enough about me – for the moment, at least – and let’s focus on the debate itself, held under the auspices both of the global Open Access week and Cambridge’s own Festival of Ideas, an annual series of events ‘celebrating the arts, humanities, and social sciences’. Under the chairmanship of Stephen Curry, described as ‘the world’s most amiable open access advocate’, four academics debated whether ‘society can afford open access’. Representing the humanities (in practice, if not necessarily in theory) were Dr Daniel Allington, researcher in Digital Cultures at the University of the West of England, and Professor Peter Mandler, President of the Royal Historical Society; representing the sciences (again, in practice rather than in theory), were Dr Theo Bloom, Executive Editor at the BMJ, and Dr Danny Kingsley, Head of Scholarly Communications for Cambridge University.

Given the festival’s focus, it was perhaps unsurprising that the debate tended more effectively to question whether the humanities and social sciences, rather than society itself, can afford open access. Mandler’s key point – and one that I found largely persuasive – was that since the principles of open access weren’t designed for humanities research, the humanities should therefore not be bound by them. Open access was developed first to solve problems encountered by creative artists, and then by scientists; not those experienced by humanities researchers. The Finch report that informed subsequent UK government policies on open access, he told us, was drawn up by a committee that lacked any representation from the humanities. Any subsequent accommodations that policy-makers had ultimately made towards the humanities had been hard won through vigorous intervention.

One such accommodation could be found in politicians’ reluctant acceptance of Green open access as a legitimate alternative to Gold. Much humanities research is unfunded – Allington insisted that almost all of his own had been – and even the funded research was supported by budgets that were tiny compared to those supporting scientists. When Bloom pointed out that research conducted by an academic whose salary was paid by their university was still publicly-funded, even though it was not directly supported by a funding body, Allington responded that many academics in the humanities are either part-time or paid only for teaching, and as a result, have neither the cash nor the moral imperative to pay the article processing charges required to make their work available through Gold open access. Curry’s suggestion that making humanities research open access might somehow attract more funding seemed, to my mind, somewhat optimistic.

Allington and Mandler also raised concerns about the Creative Commons licences required by many funding bodies in order for researchers to comply with their open access policies. Allington pointedly described the author of these licences, Lawrence Lessig, as essentially a Google-funded advocate, and expressed strong objections to having his work remixed and reworked without his consent. Though Bloom insisted that CC licences’ requirement for attribution meant that Allington need not worry about being misrepresented, I found Kingsley’s response more persuasive: the open access movement needs to acknowledge that different disciplines have different requirements for CC licences, and – presumably – work with researchers to create the new licences needed. Mandler’s assertion that he’d been told by politicians that different disciplines could not have different licences was worrying.

Discussion turned to the possible impact on journals and societies – and specifically the good work they do in other areas – of losing the money they make from subscriptions. Bloom questioned why that work should be funded through the money they make from research, and was answered pragmatically by Mandler, who pointed out that that was where the money was. Asked by a member of the audience why journals even needed to exist, Kingsley responded that individual researchers tended not to be interested in self-organising (though the development of initiatives such as the Open Library of the Humanities by Caroline Edwards and Martin Eve suggests that this is thankfully by no means universal).

The attitude towards publishers was thankfully more nuanced than is sometimes the case, despite – in a statement whose subtleties I undoubtedly missed in the rush of live-tweeting – Kingsley at one point suggesting that large publishers belonged in the same category as tobacco companies. The panel agreed that with open access creating greater transparency over what publishing actually costs, it was harder now for publishers to justify profits of 30-40%. Bloom was happy with profit being reinvested by publishers, but not with it leaving the system to enrich shareholders. (And on this we were in rare agreement.)

So, where does all this leave me, and the concerns I expressed at somewhat self-indulgent length at the start of this piece? The debate rather brought them into focus: though I support Open Access in principle, I fear the consequences of it being over-rigorously applied to the humanities and social sciences. I’d have liked to hear more about some of the initiatives that – rather than insisting that the humanities and social sciences will be just fine under a model that ignores their particular requirements – are actually trying to find ways to make open access work for these disciplines. (Though the Open Library of the Humanities was briefly mentioned in passing early on, this could have been discussed at more length, and there was no mention made of, say, Knowledge Unlatched’s experiments in funding monographs, or UCL Press.)

I’m also still a little concerned about the zeal with which some advocates pursue open access. Perhaps I’m just over-sensitive, but even in the faultlessly polite debate I saw on Friday, there still seemed at times traces of an inflexible rigour that worried me: the belief, however civilly expressed, that the opponents of open access must be either misinformed or exhibiting bad faith. In his opening speech, moderator Stephen Curry asked whether publishers might be dressing up fears about profit margins as concerns for sustainability; in the discussion on funding, there seemed a marked reluctance to believe that the money just isn’t there in the humanities. More often, though, there was an open-mindedness that reassured me. Kingsley’s insistence on the diversity within the open access movement – that though many people round the world supported its ideals, they disagreed on how to achieve them – encouraged me to believe that ways will be found to find models that will work for the humanities and social sciences, and that publishers will have a role to play in them.

Alastair Horne runs webinars and a blog for language teachers at Cambridge University Press; he tweets as @pressfuturist, blogs occasionally at www.pressfuturist.com and is currently working on a novel set in a Parisian cemetery.

This blog post can also be found on Alastair’s own blog, here: http://pressfuturist.com/2015/10/25/open-access-a-personal-take/

Investigating the REF2014 as another means of understanding academic books

In this blog post, Simon Tanner reveals some of the early results of his research into the last REF, looking at Arts and Humanities panels and their submissions.

The recent Association of Learned and Professional Society Publishers (ALPSP) 2015 International Conference presented a session on the Academic Book of the Future, chaired by Richard Fisher. The session, Something Understood: Scholarly Communications, included a presentation by Simon Tanner from the ABoF project and also significant contributions from Professor John Holmwood (University of Nottingham and past President of the British Sociological Association) and Professor Peter Mandler (University of Cambridge and President of the Royal Historical Society).

You can find an ALPSP blog post covering the whole session here: http://blog.alpsp.org/2015/09/the-academic-book-of-future.html
The ALPSP have also provided a full video of the whole session here: https://www.youtube.com/watch?v=ALOS2G_PYpc

In this blog post we would like to focus upon the aspect of Simon’s presentation that considered the REF 2014 book submissions. You can find Simon’s ALPSP presentation slides here: http://www.slideshare.net/KDCS/the-academic-book-of-the-future-progress-ref2014-data

The REF 2014 submission data provides a rich data set that Simon is investigating as a means of finding out more about the academic books submitted in the last REF cycle. The analysis of the data will provide useful indicator data about academic book writing and publishing, and will further augment the analysis already provided by HEFCE.

The research focuses upon the Main Panel D for Arts and Humanities. Within this Panel, data can be investigated by Unit of Assessment Subject Area and by Research Output Type. A broad slice can be taken across the whole Panel or Output Type, then each Subject Area can be interrogated in detail, providing information about the publishing trends in these subjects, as well as the REF submission trends.
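The kind of slicing described above can be sketched in a few lines; the records below are hypothetical miniatures for illustration, not actual REF data:

```python
from collections import Counter

# Hypothetical miniature of the submission records described above:
# (Unit of Assessment subject area, research output type) pairs.
records = [
    ("History", "Authored Book"),
    ("History", "Journal Article"),
    ("History", "Authored Book"),
    ("Music", "Composition"),
    ("Music", "Journal Article"),
]

# A broad slice across the whole panel by output type...
by_type = Counter(output_type for _, output_type in records)

# ...then a specific subject area interrogated in detail.
history = Counter(t for subject, t in records if subject == "History")

print(by_type["Journal Article"])  # 2
print(history["Authored Book"])    # 2
```

The same two-level grouping scales directly to the full Panel D dataset once the submission records are loaded.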

One useful area of exploration is the identification of preferred publishers. These can be presented in terms of the actual numbers or proportions of books submitted, and will indicate for each Subject Area which publishers have precedence in the REF, while also identifying those specialist publishers whose books are submitted only a few times. This information may be surprising to academics, publishers and libraries alike – it would certainly provoke a debate in those communities. In this phase of the project this is one of our objectives – to raise evidence that will challenge or confirm received opinion and thus stimulate a community response.

Another possible avenue of investigation might be to correlate publishers’ lists of published monographs against those that were submitted to the REF, to find out why some books are submitted and others not, without making any assumptions about the quality of the books. Further, an investigation of whether and which books are cited in Impact Case Studies would provide an indication of how books connect to the impact factors described in the REF. A whole series of other queries can be made once we have the dataset: for instance, the gender of authors, book format and length, books per submitting institution, the number of open access books, and so on. Some of these measures may prove more achievable than others, given the available data, but all are worth considering.

Our goal in this phase of investigation is not to prove any particular point but to see where the data leads us and what discussions can thus be stimulated.

Figure 1 shows an initial investigation of the proportions of research output type by Subject Area. It throws up some interesting comparisons, and these are further explored in Figures 2 and 3. As can be seen, there are some strong similarities in the proportions of books, book chapters and journal submissions made across subject areas. But we also observe that certain subject areas, such as Music, Drama, Dance & Performing Arts or Art & Design, show enormous shifts in output types to include a broader range of research outputs, including Compositions, Exhibitions, Performance and Design for example.

Figure 1:

Tanner_ABoF_ALPSP_Presentation_2015-Figure001

Figure 2:

Tanner_ABoF_ALPSP_Presentation_2015-Figure002

Figure 3:

Tanner_ABoF_ALPSP_Presentation_2015-Figure003

Considering Publishing and History – REF 2014 Unit of Assessment 30

Having compared subject areas, we can now dive deep into a specific subject: in this case, History. It should be noted that extracting this data is time-consuming and relatively complex, due to the variations in the data provided by academics to the REF. We see books with no ISBN, books with publishers so obscure they did not appear in search engines, and variant uses of publisher names (such that Oxford University Press, for instance, is expressed in over a dozen different ways).
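One way to handle the variant-spelling problem is an alias table mapping each spelling to a canonical publisher name. A minimal sketch follows; the variant spellings here are hypothetical examples, not spellings taken from the REF data:

```python
# Map lower-cased variant spellings to a canonical publisher name.
# These aliases are illustrative; the real data needs many more entries.
ALIASES = {
    "oup": "Oxford University Press",
    "oxford univ. press": "Oxford University Press",
    "oxford university press (oup)": "Oxford University Press",
}

def normalise(name: str) -> str:
    """Collapse internal/leading whitespace, then look up the alias table."""
    cleaned = " ".join(name.split())
    return ALIASES.get(cleaned.lower(), cleaned)

print(normalise("OUP"))                    # Oxford University Press
print(normalise("  Oxford  Univ. Press"))  # Oxford University Press
print(normalise("Boydell & Brewer"))       # Boydell & Brewer (unchanged)
```

Unrecognised names pass through unchanged, so the table can be grown incrementally as new variants are spotted in the data.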

For History we found:

  • 1657 Books in the following output types
    • Authored Books (1320),
    • Edited Books (290) and
    • Scholarly Editions (47)
  • 295 unique Publishers were found
  • Top 10 most used Publishers = 930 books or 56%
  • 258 Publishers (87%) had 5 or fewer books submitted
  • 171 Publishers (57%) had one book submitted – mostly non-UK
  • 761 books submitted (46%) were from a University Press
    • Outside the top 5 these were mostly non-UK publishers
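
The variant-publisher-name problem described above is essentially a string-normalisation task. Below is a minimal, hypothetical sketch of how variant forms might be collapsed before counting; the sample strings and the alias table are illustrative inventions, not drawn from the REF data, and a real project would need a much larger mapping built by manual inspection.

```python
from collections import Counter

# Hypothetical sample of publisher strings as they might appear in
# REF submissions -- the same press expressed in several variant forms.
raw_publishers = [
    "Oxford University Press", "OUP", "O.U.P.", "Oxford Univ. Press",
    "Cambridge University Press", "CUP", "Palgrave Macmillan",
]

# Illustrative normalisation table mapping lower-cased variants to a
# canonical publisher name.
ALIASES = {
    "oup": "Oxford University Press",
    "o.u.p.": "Oxford University Press",
    "oxford univ. press": "Oxford University Press",
    "cup": "Cambridge University Press",
}

def normalise(name: str) -> str:
    """Collapse a variant publisher string to its canonical form."""
    key = name.strip().lower()
    return ALIASES.get(key, name.strip())

counts = Counter(normalise(p) for p in raw_publishers)
total = sum(counts.values())
top = counts.most_common(2)
share = sum(n for _, n in top) / total

print(counts["Oxford University Press"])  # 4
print(f"{share:.0%}")                     # 86%
```

Only once variants are collapsed in this way do per-publisher counts, and measures such as the top-ten share, become meaningful.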

We can also provide a list of all publishers with more than 10 books submitted for History in REF 2014.

  • 213 Oxford University Press
  • 162 Cambridge University Press
  • 143 Palgrave Macmillan
  • 98 Manchester University Press
  • 74 Ashgate
  • 70 Routledge
  • 52 Boydell & Brewer
  • 51 Yale University Press
  • 40 Brill Academic Publishers
  • 27 Continuum International Publishing
  • 27 Edinburgh University Press
  • 21 I B Tauris
  • 21 Pickering & Chatto
  • 20 Harvard University Press
  • 19 Bloomsbury Publishing
  • 16 Penguin
  • 14 Allen Lane
  • 14 British Academy/Oxford University Press
  • 14 Liverpool University Press
  • 14 University of Wales Press
  • 12 University of Chicago Press
  • 11 Reaktion Books
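
As a simple arithmetic check on the lists above, the per-publisher counts for the top ten presses do indeed sum to the 930 books (56% of 1657) reported earlier:

```python
# Top-ten publisher counts for History, as listed above.
top10 = [213, 162, 143, 98, 74, 70, 52, 51, 40, 27]
total_books = 1657  # authored books, edited books and scholarly editions

top10_total = sum(top10)
print(top10_total)                         # 930
print(f"{top10_total / total_books:.0%}")  # 56%
```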

Your Thoughts?

As we said earlier, our goal in this phase of investigation is not to set out to prove a point but to see where the data leads us and what discussions are thus stimulated. So please do get in touch and share your thoughts. Do these results confirm or contradict your expectations? Do you have further data you’d like to share with the project to augment the data presented here?

The Academic Book in Sudan

One of the sub-projects that is being carried out as part of The Academic Book of the Future is a piece of research into the academic book in the geographical south, in particular in Africa and India.  The researchers on this project are Dr Caroline Davis of Oxford Brookes University and Professor Marilyn Deegan of King’s College London. In this week’s post, Prof. Deegan talks about their recent trip to Sudan to discuss the Project.

I made a visit to Sudan in February 2015 as part of an ongoing project to digitise Sudanese cultural resources held in libraries, archives, museums and private collections throughout the country: Digital Sudan. This is something I have been working on for the last two years with a Sudanese cultural NGO: SUDAAK, the Sudanese Association for Archiving Knowledge. My visit to Sudan seemed an ideal opportunity to connect with colleagues for discussions on the academic book in the region, and so I was invited to give a paper on the project at Alzaim Alzazhari University in Khartoum North, organized by the Sudanese Library Association.

Pyramids at Meroë (Credit: Marilyn Deegan)

The lecture was attended by around 70 librarians and academics, who could not have been more enthusiastic about the project. There was a lively debate after the presentation, and they expressed a willingness to be involved. They are planning to set up a local Academic Book committee, co-ordinated by Fawzia Galeledin on behalf of SUDAAK, which will contact local publishers and academics and organise joint events. Most academic publishing in Sudan is in Arabic, but Sudanese scholars would like their work to be more widely known and accessible, so the possibility of translation into English was discussed. They have access to online books and journals in English through various international initiatives, but they were very interested in the possibility of a more two-way dialogue, which would only be possible if their work were more widely accessible, and that means publishing in English.

The committee will organise focus groups to debate a range of research questions that we can supply, though these will probably need to be amended for local use. They were also extremely excited by the idea of Academic Book Week and will arrange events to coincide with it. We also discussed the possibility of an exchange during Academic Book Week: perhaps someone from Sudan could come to London, and I could go to Sudan.

The reception of the project in a country so far removed from us was astonishing, and the opportunities our Sudanese colleagues could see in discussing the future of academic publishing with us were heartening.