April 7, 2026

To commemorate the 600th anniversary of the death of Church reformer Jan Hus (d.1415), the Centre for Research Collections is currently displaying the Bohemian Protest, a document surrounded by the seals of 100 members of the Czech nobility in protest at Hus’ treatment at the hands of the Council of Constance.
Hus had travelled to the Council to lobby for religious reform, protected by a safe-conduct pass from the Emperor – but on his arrival he was seized and executed as a heretic on 6th July 1415. This document is dated 2nd September 1415 and contains the names and seals of 100 members of the Bohemian nobility; it has come to represent the cause of religious freedom.
It was acquired by theologian William Guild (1586-1657) while he was on the continent during the British civil wars, and subsequently bequeathed to the University of Edinburgh. The manuscript was written on vellum and the weight of the 100 wax seals makes it very fragile. This is the first time it has been displayed since receiving modern conservation treatment and rehousing.
The document can be viewed in high resolution on the University of Edinburgh Image Collections.
The Bohemian Protest is one of the University of Edinburgh's Iconic items – you can learn more about this collection of objects on the Iconic webpages.
Fran Baseby, CRC Service Delivery Curator
Last week a few of us attended ReCon 2015 here in Edinburgh, which proved to be very interesting.
Session 1: Beyond the paper: publishing data, software and more
Attended by Scott Renton
Scott Edmunds from Gigascience works on a platform which attempts to keep the publication of research data up to speed with its mind-boggling rate of production. Now that we’re moving “beyond dead trees” into the digital realm, many problems are having to be tackled, such as meaningful peer review (which won’t result in retractions), skewed incentive systems (reward republication, not advertising; reward executable data) etc. He praised such tools as knitr, which effectively generate papers from data, and pointed out the importance of getting the research out there (citing ebola and climate change as examples where problems could be mitigated in this way).
Arfon Smith from GitHub linked the whole Research Data publication problem to his own speciality: the Git version control system. He is aware that Open is the new norm, that PDFs are unsatisfactory, and that we don’t share in a useful, creditable way. Communities form around a shared challenge/shared data, and this open source model can be utilised by academia, in a similar way to software development.
Stephanie Dawson from ScienceOpen spoke about aggregators, stating that these are the drivers of impact. She is very sceptical about any kind of journal-level impact rankings (“negotiated, irreproducible, and unsound”), but is far more optimistic about article-level tools where usage stats, citations etc. are meaningful.
Peter Burnhill from EDINA talked about “link rot” and “content drift”, and suggested that more thought in general needs to be given to the persistence of URLs and content so that the data remains available. He offered up, alongside some interesting case studies, some quite depressing statistics about how much of the PMC and Elsevier corpora are “rotten”, but he did suggest that the rates are steadily improving, so there is some good news out there!
Session 2: Digital communities and social networks for researchers by Steve Wheeler (@timbuckteeth)
Attended by Stephanie (Charlie) Farley (@Sfarley_Charlie)
Steve had a lot to say about the importance of open access to education in the digital age. We have the technology, but do we have the will to be open? When you publish research you’re educating your community; hoarding knowledge from those who desperately need it is almost perverse. Additionally, there is a payback to making your work open and available. Steve recommended using Google Scholar to compare your own closed vs. open journal citation counts and see for yourself.
He spoke of how open access works to serve the community of users. Communities are no longer related to or defined by locations, today’s definition of community is about a common interest. Communities celebrate, connect, communicate (online en masse), and in this way everything can be collaborative if we want it to be. Steve encouraged attendees to look up Dave Cormier’s research on the rhizomatic model of learning. Rhizomatic curriculum is not driven by predefined inputs from experts; it is constructed and negotiated in real time by the contributions of those engaged in the learning process. More of Cormier’s research on this can be found on his blog davecormier.com/edblog.
Some teachers don’t trust what the students are doing; they don’t like students sitting behind laptops with hidden windows and screens. But he again encouraged educators to take up this technology in an open and useful way. Create unique hashtags for your course/lectures and have a Twitter back channel during each lecture session. In one of his lectures students were using the Twitter back channel to discuss a book that was part of that week’s readings; he retweeted their question, tagging the author, and the author joined the conversation!
In addition to being useful in the classroom, Twitter can be an incredibly useful tool for academics, and the power of networking can greatly improve your likelihood of being cited. Note: I went and did a quick bit of research on this myself and found some interesting research showing that actually yes, ‘highly tweeted articles were 11 times more likely to be highly cited than less-tweeted articles’. (Eysenbach, ‘Can tweets predict citations?’, J Med Internet Res 2011, 13(4), DOI: 10.2196/jmir.2012)
The session was incredibly interesting and also provided my new word for the week: Darwikianism.
Session 3: Altmetrics, analytics and tracking engagement
Attended by Dominic Tate & Ianthe Sutherland
The first presentation after lunch was from Euan Adie, CEO of Altmetric. He spoke about the lessons that have been learned in trying to measure impact – the overall lesson is that you can’t measure impact! He spoke about how hard it is to engage academics with impact, as many feel that the quality of their research should speak for itself. Citation counts just indicate academic interest rather than the impact an article has made – impact should be academic attention as well as broader attention. Social media attention and blog comments don’t really speak to the quality of the research, but can show how much research is being talked about online. Academics don’t seem to be credited for policy decisions and documents, which could be a direct impact of their research. Euan also spoke about the altmetrics manifesto (altmetrics.org/manifesto) and the 2:AM conference (www.altmetricsconference.com) coming up in October.
Then Anna Clements – Assistant Director (Digital Research), St Andrews University Library – gave an excellent presentation highlighting the role of a university library in supporting the research undertaken at an institution. Anna made a series of good points about the importance of partnering with researchers during the research process rather than simply dealing with the outputs retrospectively. Anna also highlighted the changing role of the library as it seeks to support new areas such as research data management and the research impact agenda.
The final presentation was from Kaveh Bazargan, Director, River Valley Technologies. He spoke about how research is submitted in PDF form, and all the metadata about the research – such as the title, author, email, abstract and keywords – is then reverse-engineered from the PDF into XML, when this information should just be provided with the PDF in the first place. He gave a good analogy: the PDF is a smoothie, and we need to know what the fruit is (the XML). He then gave a live demo of River Valley’s authoring product, which automatically produces the PDF (typeset in TeX) and the associated metadata XML without any reverse engineering required, as the XML is produced at the source. It is easy to make updates and re-produce the PDF and XML as needed.
Dawsonera have now launched their new online reader. This includes a number of enhancements to improve the user experience of reading e-books on their platform.
The main enhancement we have all been waiting for, page-range printing, is now available.
The new reader includes the following enhancements:
– An expanded view of the e-book to give full screen reading as well as a zoom in/out feature. Accessibility has been audited by the RNIB (Royal National Institute for the Blind).
– Printing Improvements – you can now print page ranges rather than one page at a time. You have a print preview of the pages selected before you send them to the printer and a print allowance is shown for each book consisting of what you have already printed, and how many more pages you are allowed to print.*
– Copying Improvements – the copy button now shows allowance per book – copies used and remaining, there is a preview page before copying to your clipboard.*
– Searching within an e-book and hit highlighting.
– Improved page navigation using the table of contents, improvements to scrolling and intuitive icons.
– Private and shared notes. See the user guide below for full details on how this works.
– Improvements to the reader layout on a mobile device so that it has the same features as the desktop version. The PowerPoint link below shows screenshots of the new Dawsonera reader on a PC, mobile and tablet.
Some library staff met with Dawsonera recently to see a preview of the new reader. See the slides demonstrating the various enhancements at http://www.docs.is.ed.ac.uk/docs/Libraries/Main/E-Resources/E-Books/DawsonEraNewReader.pptx
A user guide for the new reader is now available at http://www.docs.is.ed.ac.uk/docs/Libraries/Main/E-Resources/E-Books/DawsoneraOnlineReader_UserGuide.pdf
Dawsonera have a list of FAQs here.
We welcome feedback on the new reader – please contact your academic support librarian to give your feedback.
*Can’t print or copy if offline, the reader needs online access for DRM management.
Last week I attended the CERN Workshop on Innovations in Scholarly Communication (OAI9) in Geneva, a workshop looking at developments in scholarly communication. It was a diverse programme and attracted people from all sectors of scholarly communication. Here are some of my highlights from each day.
Day One – Beginning the first day was a choice of tutorials; I chose “The Institution as Publisher: Getting Started”, presented by UCL, the first fully open access university press in the UK. This offers a real alternative to commercial publishers. At the moment the majority of UCL Press authors are internal; external authors are liable to pay an APC.
The keynote by Michael Nielsen, Beyond Open Access, looked at how open access policies should be crafted so they don’t inhibit innovation by constraining experimentation, new media forms and different types of publication. Following this was the session Looking at Barriers and Impact. Erin McKiernan, an early career researcher, talked about her own experience and the barriers she has faced in accessing research; she has made a pledge to be open, and her opinion was that early career researchers are in a position to be game changers in terms of making their research open. Finally, Joseph McArthur talked about the Open Access Button, which helps readers find open access versions of research outputs – a tool created by young people who frequently faced barriers to accessing research.
Day Two – The second day of the workshop was held at the Campus Biotech
The first session looked at Open Science Workflows, CHORUS and SHARE. Jeroen Bosman and Bianca Kramer from Utrecht gave a really interesting presentation on the changing workflow of research; they set out three goals for science and scholarship – Open, Efficient, Good. We should be supporting open science rather than just open access, which would lead to a diminishing role for traditional journals. This was highlighted using a diagram they created, 101 Innovations in Scholarly Communication, showing the patterns and processes of innovation in this field. It accompanies an ongoing survey of scholarly communication tool usage, part of an ongoing effort to chart the changing landscape of scholarly communication.
Following this session was Quality Assurance, focusing on researchers and reforming the peer review process. Janne-Tuomas Seppanen from Peerage of Science stated that some peer reviews are excellent and some are not. Peerage of Science tries to address this by scoring peer reviews: the idea is that peer reviewers are themselves peer reviewed, increasing and quantifying the quality of peer review. The service is free for academics; publishers pay. Andrew Preston from Publons is looking at speeding up science by making peer review faster, more efficient, and more effective. The incentive for reviewers? Making peer review a measurable research output.
I also attended the break out session on Copyright in Data and Text Mining which gave an overview of the legal framework and an introduction to The Hague Declaration on Knowledge Discovery in the Digital Age launched in May this year which ‘aims to foster agreement about how to best enable access to facts, data and ideas for knowledge discovery in the Digital Age. By removing barriers to accessing and analysing the wealth of data produced by society, we can find answers to great challenges such as climate change, depleting natural resources and globalisation.’
The second day of the workshop ended in style at the Ariana, the Swiss museum of glass and ceramics which opened its doors especially for attendees of OAI9.
Day Three – The focus of the first session was the Institution as Publisher, a new theme for OAI. Catriona Maccallum from PLOS focused on the need for transparency: publishing is a cycle, not just content provision. The services an institution can offer include open access, open access presses, transparency, assessment, and rewards and incentives; she went on to say the institution should be driving changes. Rupert Gatti of Open Book Publishers talked about bringing publishing to the research centre level: open access allows direct dissemination to a different audience and would allow authors to disseminate not just books and articles but other types of scholarly output. The final speaker in this session, Victoria Tsoukala from the National Documentation Centre, National Hellenic Research Foundation, talked specifically about open access publishing in the humanities and gave an overview of university-led publishing within her institution, looking at the various challenges (funding, outputs being perceived as poorer quality) and the opportunities (the ability to regain control, innovation, transparency and fairness, and new roles for libraries).
If you’re interested in finding out more, all the presentations are available online by clicking through the programme.
We are pleased to report that the County Surveys of Great Britain 1793 – 1817 project, which is related to the Statistical Accounts, has now released an online bibliographic search tool. This is a key output of this pilot project and will be of wide interest to historians and researchers in many fields.
Here we re-post the County Surveys blog announcement:
We are delighted to announce that our bibliographic search tool is now live and accessible from the ‘Search’ tab in the menu above.
Our demonstrator includes bibliographic data from some of the best collections of the surveys and, where possible, provides links to library catalogue entries and digital editions. Researchers can search by modern county name, by series, by county and by author. Results are presented in a new tab after each search, so that you can compare multiple search results by toggling between pages. There are also detailed analyses of collections, revealing the extent of holdings and coverage, and indicating which surveys would be needed to complete each collection.
We hope that the demonstrator will be a useful finding aid and discovery tool for those interested in the County Surveys, the history of statistical reporting and British history more broadly. We would welcome any feedback on the tool, and would be very keen to hear about how it is used or whether it could usefully offer other features and information. If you have ideas, please get in touch with us at edina@ed.ac.uk.
Blog post by University of Manchester project developer Tom Higgins:
Yesterday I gave a short presentation on the Data Vault project at an event in Lancaster:
https://www.eventbrite.co.uk/e/research-data-management-solutions-for-your-needs-tickets-17100593335
I based this on the original pitch with a few updates reflecting the work we’ve done over the last couple of months.
Here’s some of the feedback and questions from the event – I think a lot of these are more relevant for “phase 2 and beyond” than the current prototyping:
Here are some examples of “Active” and “Archive” systems which might be useful targets for integration:
The development model we chose for the Data Vault is to get us all in a room (Robin, Tom, Claire, Mary, Stuart) and to collaboratively develop the proof of concept system over a few days. We were kindly hosted by the University of Manchester IT services in their Sackville Street building.
We started by looking at the skeleton framework that Tom and Robin had worked on, and then assigned areas of code to each person to write. For example work was required on the user interface that the user sees, the broker in the middle that manages the system, and the backend workers that perform the archiving.
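The three-part split described above – a user-facing front end, a broker in the middle, and backend workers that do the archiving – can be illustrated with a minimal queue-based sketch. This is purely illustrative Python; the names and the queue mechanism here are my own, not taken from the actual Data Vault codebase.

```python
import queue
import threading

# Deposit jobs flow from the front end, through the broker's queue,
# to a backend worker that performs the (simulated) archiving.
jobs = queue.Queue()
results = []

def worker():
    """Backend worker: takes deposit jobs off the queue and 'archives' them."""
    while True:
        job = jobs.get()
        if job is None:          # sentinel value tells the worker to stop
            break
        results.append(f"archived:{job}")

def submit_deposit(dataset_id):
    """Front end / broker: accept a deposit request and queue it for a worker."""
    jobs.put(dataset_id)

t = threading.Thread(target=worker)
t.start()
for dataset in ["dataset-001", "dataset-002"]:
    submit_deposit(dataset)
jobs.put(None)   # shut the worker down once the queue drains
t.join()
print(results)   # → ['archived:dataset-001', 'archived:dataset-002']
```

The value of the broker/worker split is that the slow archiving step can be scaled out (more worker threads, or workers on other machines) without touching the user interface.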
All of the code is stored openly on GitHub and is open source under an MIT licence:

Work is now continuing following the hackathon to complete a few areas of remaining code before the next Jisc Data Spring programme meeting where we can share the system with others.
On the 10th of June, the Data Library team ran two workshops in association with the EU Horizon 2020 project, FOSTER (Facilitate Open Science Training for European Research), and the Scottish Graduate School of Social Science.
The aim of the morning workshop, “Good practice in data management & data sharing with social research,” was to provide new entrants into the Scottish Graduate School of Social Science with a grounding in research data management using our online interactive training resource MANTRA, which covers good practice in data management and issues associated with data sharing.
The morning started with a brief presentation by Robin Rice on ‘open science’ and its meaning for the social sciences. Pauline Ward then demonstrated the importance of data management plans to ensure work is safeguarded and that data sharing is made possible. I introduced MANTRA briefly, and then Laine Ruus assigned different MANTRA units to participants and asked them to go through the units, extract one or two key messages, and report back to the rest of the group. After the coffee break we had another presentation on ethics, informed consent and the barriers to sharing, and we finished the morning session with a ‘Do’s and Don’ts’ exercise where we asked participants to write on post-it notes the things they remembered, the things they were taking with them from the workshop: green for things they should DO, and pink for those they should NOT. Here are some of the points the learners posted:
DO
– consider your usernames & passwords
– read the Data Protection Act
– check funder/institution regulations/policies
– obtain informed consent
– design a clear consent form
– give participants info about the research
– inform participants of how we will manage data
– confidentiality
– label your data with enough info to retrieve it in future
– develop a data management plan
– follow the certain policies when you re-use dataset[s] created by others
– have a clear data storage plan
– think about how & how long you will store your data
– store data in at least 3 places, in at least 2 separate locations
– backup!
– consider how/where you back up your data
– delete or archive old versions
– data preservation
– keep your data safe and secure with the help of facilities of fund bodies or university
– think about sharing
– consider sharing at all stages. Think about who will use my data next
– share data (responsibly)
DON’T
– unclear informed consent
– a sense of forcing participants to be part of research
– do not store sensitive information unless necessary
– don’t staple consent forms to de-identified data records/store them together
– take information security for granted
– assume all software will be able to handle your data
– don’t assume you will remember stuff. Document your data
– assume people understand
– disclose participants’ identity
– leave computer on
– share confidential data
– leave your laptop on the bus!
– leave your laptop on the train!
– leave your files on a train!
– don’t forget it is not just my data, it is public data
– forget to future proof
Our message was that open science will thrive when researchers:
The afternoon workshop on “Overcoming obstacles to sharing data about human subjects” built on one of the main themes introduced in the morning, with a large overlap of attendees. The ethical and regulatory issues in this area can appear daunting. However, data created from research with human subjects are valuable, and therefore are worth sharing for all the same reasons as other research data (impact, transparency, validation etc). So it was heartening to find ourselves working with a group of mostly new PhD students, keen to find ways to anonymise, aggregate, or otherwise transform their data appropriately to allow sharing.
Robin Rice introduced the Data Protection Act, as it relates to research with human subjects, and ethical considerations. Naturally, we directed our participants to MANTRA, which has detailed information on the ethical and practical issues, with specific modules on “Data protection, rights & access” and “Sharing, preservation & licensing”. Of course not all data are suitable for sharing, and there are risks to be considered.
In many cases, data can be anonymised effectively, to allow the data to be shared. Richard Welpton from the UK Data Archive shared practical information on anonymisation approaches and tools for ‘statistical disclosure control’, recommending sdcMicroGUI (a graphical interface for carrying out anonymisation techniques, which is an R package, but should require no knowledge of the R language).
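To give a flavour of what statistical disclosure control involves, here is a toy sketch of k-anonymity-style suppression in Python: any combination of quasi-identifiers (age, postcode, etc.) that appears fewer than k times is masked, so no respondent sits in a group smaller than k. This is my own simplified illustration; the sdcMicro tools recommended above implement far more sophisticated methods (recoding, local suppression, perturbation) in R.

```python
from collections import Counter

def suppress_rare_combinations(records, quasi_identifiers, k=3):
    """Mask quasi-identifier values with '*' for any combination of values
    that appears fewer than k times across the dataset, so every remaining
    combination is shared by at least k respondents."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    out = []
    for r in records:
        combo = tuple(r[q] for q in quasi_identifiers)
        masked = dict(r)                 # copy; leave the input untouched
        if combos[combo] < k:
            for q in quasi_identifiers:
                masked[q] = "*"
        out.append(masked)
    return out

# Hypothetical survey responses: three respondents share age 34 / EH8,
# so they are retained; the unique 71 / EH1 respondent is masked.
records = [
    {"age": 34, "postcode": "EH8", "answer": "yes"},
    {"age": 34, "postcode": "EH8", "answer": "no"},
    {"age": 34, "postcode": "EH8", "answer": "yes"},
    {"age": 71, "postcode": "EH1", "answer": "no"},
]
print(suppress_rare_combinations(records, ["age", "postcode"], k=3))
```

The trade-off, as with all anonymisation, is between disclosure risk and data utility: the more aggressively you suppress, the safer but less useful the shared dataset becomes.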
Finally Dr Niamh Moore from University of Edinburgh shared her experiences of sharing qualitative data. She spoke about the need to respect the wishes of subjects, her research gathering oral history, and the enthusiasm of many of her human subjects to be named in her research outputs, in a sense to own their own story, their own words.
Links:
Rocio von Jungenfeld & Pauline Ward
EDINA and Data Library
Introducing ArchivesSpace for researchers and public users, as well as the administrative side for our Archives Team within the Centre for Research Collections, has been an ongoing project for the last 18 months. It has taken us a while to get the service live for a number of reasons and we have learnt lots along the way.
ArchivesSpace is free open source software and is easy to set up using Jetty and MySQL; however, some of our requirements have meant getting to grips with the underlying set-up and APIs of the system. We have also joined ArchivesSpace as paid members, as this enables us to get additional support through documentation and mailing lists.
Import of authority controls
We had an existing MySQL database containing thousands of authority terms collected by the Archives Team. It was very important for us to keep these and import them into our ArchivesSpace instance. We imported the subjects using the ArchivesSpace API. Learning how to use the API was made easier by the Hudson Molonglo YouTube videos. We have written simple PHP scripts that connect to the ArchivesSpace backend and import the subjects and agents from MySQL database exports of our existing authority terms. After some trial and error we imported 9275 subjects and 13703 agents into ArchivesSpace.
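Our scripts were PHP, but the same two-step pattern – authenticate against the backend to get a session token, then POST each record – can be sketched in Python using only the standard library. The host, credentials and example term below are placeholders; the routes (`POST /users/:username/login`, `POST /subjects`) and the `X-ArchivesSpace-Session` header are the standard ArchivesSpace backend API.

```python
import json
import urllib.parse
import urllib.request

BACKEND = "http://localhost:8089"  # default ArchivesSpace backend port

def login(username, password):
    """POST /users/:username/login; the backend returns a session token."""
    url = (f"{BACKEND}/users/{username}/login?"
           + urllib.parse.urlencode({"password": password}))
    with urllib.request.urlopen(urllib.request.Request(url, method="POST")) as r:
        return json.load(r)["session"]

def build_subject(term):
    """Minimal JSON body for POST /subjects: one topical term, local source."""
    return {
        "source": "local",
        "vocabulary": "/vocabularies/1",
        "terms": [{"term": term,
                   "term_type": "topical",
                   "vocabulary": "/vocabularies/1"}],
    }

def import_subject(session, term):
    """POST one subject with the session token in the expected header."""
    req = urllib.request.Request(
        f"{BACKEND}/subjects",
        data=json.dumps(build_subject(term)).encode(),
        headers={"X-ArchivesSpace-Session": session,
                 "Content-Type": "application/json"},
        method="POST")
    with urllib.request.urlopen(req) as r:
        return json.load(r)
```

A migration script would call `login(...)` once, then loop `import_subject(...)` over the rows exported from the legacy MySQL database.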
For a while the authorities were not linking with the resources migrated into ArchivesSpace by the Archives Team, via the EAD importer. To enable the authorities to link we had to make modifications to the EAD importer in the plugins. The changes are available to view on our Github code repository. We also made changes to the importer to allow us to get a greater understanding of why EAD imports were failing. The reasons why EAD failed to import have changed as new versions of ArchivesSpace were released and the EAD importer is quite strict. The Archives Team migrated 16836 resources (including components) for launch on 9th June.
API for other things
We have also used the API to run through all resources imported from EAD and publish them. By default they were not all published and a lot of the notes and details of the resources were hidden from the public interface. Therefore being able to script the publishing was a great time saver.
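That scripted publish step can be sketched like this: fetch every resource id in a repository, flip the `publish` flag, and save each record back. The routes (`?all_ids=true` to list ids, GET and POST on `/repositories/:repo_id/resources/:id`) are the standard ArchivesSpace backend endpoints; the host and repository id are placeholders, and this is a simplified reconstruction rather than our exact script.

```python
import json
import urllib.request

BACKEND = "http://localhost:8089"  # placeholder host/port

def _get(session, path):
    req = urllib.request.Request(BACKEND + path,
                                 headers={"X-ArchivesSpace-Session": session})
    with urllib.request.urlopen(req) as r:
        return json.load(r)

def _post(session, path, record):
    req = urllib.request.Request(
        BACKEND + path,
        data=json.dumps(record).encode(),
        headers={"X-ArchivesSpace-Session": session,
                 "Content-Type": "application/json"},
        method="POST")
    with urllib.request.urlopen(req) as r:
        return json.load(r)

def mark_published(record):
    """Return a copy of a resource record with the publish flag switched on."""
    published = dict(record)
    published["publish"] = True
    return published

def publish_all_resources(session, repo_id):
    """Fetch every resource id in a repository, flip publish, and save it back."""
    ids = _get(session, f"/repositories/{repo_id}/resources?all_ids=true")
    for rid in ids:
        record = _get(session, f"/repositories/{repo_id}/resources/{rid}")
        _post(session, f"/repositories/{repo_id}/resources/{rid}",
              mark_published(record))
```

Doing this by hand through the staff interface would have meant opening and saving thousands of records, which is why scripting it was such a time saver.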
Tomcat set-up
We decided to run ArchivesSpace under Tomcat as it is a web server that we have a lot of experience with. However, ArchivesSpace runs easily under Jetty, and running it under Tomcat has caused us some headaches, due to URL issues and the fact that the Tomcat installation script adds a lot of files to Tomcat and not just the web apps.
Customisation
We have customised the user interface for the administrative and public front ends of ArchivesSpace. These changes were made within the local plugin. The look and feel has been made to fit in with our other services, such as collections.ed, and the colour scheme of the University. This was relatively straightforward as the ArchivesSpace UI is based on Twitter Bootstrap. Unfortunately the public UI images displayed when running in Jetty but not in Tomcat; after some copying of files the images appeared.
The Public ArchivesSpace Portal http://archives.collections.ed.ac.uk
Early Adopters
It has taken longer than we initially hoped to launch ArchivesSpace, for a number of reasons – primarily that, as early adopters of the software, we hit issues we did not foresee when the initial version was made available. The ArchivesSpace members’ mailing list is very active; as it is a new system, there are lots of shared questions from those getting to grips with it and working through their implementations. ArchivesSpace, particularly Chris Fitzpatrick, have helped steer us in the right direction and shared code. The migration of EAD has been a huge task undertaken by Deputy Archives Manager Grant Buttars; it has been great to work with him and to gain a greater understanding of the EAD format when resolving issues with failing imports.
We still have lots to do with the system to leverage its full functionality and fully showcase our amazing archives collection through links to http://collections.ed.ac.uk and our image repository. So watch this space.
This post follows on from Grant’s post https://libraryblogs.is.ed.ac.uk/edinburghuniversityarchives/2015/06/22/implementing-archivesspace/
Claire Knowles
Library Digital Development Team
Last week saw the start of a new project: photographing many of the University’s musical instruments while they are in storage at the Library during the redevelopment of St. Cecilia’s Music Hall. These images are planned for use in the new museum space, in printed materials, on social media and in interactive apps. The only guidance we have been given is ‘coffee-table book’, which gives the DIU team huge scope for interpretation and creativity. As the project progresses we hope to bring 3D photography into the mix, but for starters, this week the musical instruments team brought me three items for some studio shots.
The first was a triple-fretted clavichord, possibly Flemish and c.1620 (ref. 4486). Although this piece was quite simple and unadorned, it did have a bright red ribbon woven through the strings, and the keys made a beautiful pattern, so I decided on a detail shot to highlight the mechanism.
The second item was a Rahab from Western Malaysia, c.1977 (ref. 2101). This was a far more ornate and colourful piece. In fact, I was torn: both the front and back of the instrument presented interesting features to photograph, but how to get both sides at once? While at the Rijksmuseum conference, Malcolm and I were impressed by their use of a black reflective surface in the photography of fashion accessories (see https://www.rijksmuseum.nl/formats/accessoires/index.jsp?lang=en). Malcolm suggested that we might be able to get a similar effect using a piece of black velvet and some glass, so I set up the studio to try it out. In the end I chose an angle looking down on the instrument that allowed details of both the strings and the red woollen back to be seen; the reflection adds further interest to the shot.
The final piece presented quite a different challenge. It is very rare that an object comes to us that leaves me scratching my head, but the ‘Jingling Johnny’ or Chapeau (ref. 6110) certainly did. A large, top-heavy, shiny brass instrument covered with dangling bells and fragile metalwork set atop a stick: how to keep it upright and perfectly still? The many shiny surfaces indicate that we will need to build a light tent to minimise reflections. This was clearly going to require some thought and planning, so we reluctantly decided to return this one to the store and reconvene another day!
In the coming months we will keep you posted on the project’s progress.
Susan Pettigrew, Photographer