All posts by Scott Renton

Museums sites go IIIF

The main portal into Library and University Collections’ Special Collections content, collections.ed.ac.uk, is changing. A design overhaul which will improve discovery both logically and aesthetically is coming very soon, but in advance, we’ve implemented an important element of functionality, namely the IIIF approach to images.

Two sites have been affected to that end: Art (https://collections.ed.ac.uk/art; 2859 IIIF images in 2433 manifests across 4715 items) and Musical Instruments (https://collections.ed.ac.uk/mimed; 8070 IIIF images in 4097 manifests across 5105 items) now feature direct IIIF thumbnails, embedded image zooming and manifest availability. A third site, the St Cecilia’s Hall collection (https://collections.ed.ac.uk/stcecilias), already had the first two elements, but manifests for its items are now available to the user.

What does this all mean? To take each element in turn:

Direct IIIF thumbnails

The search results pages on the site no longer reference images on the collections.ed servers directly, but bring in a LUNA URL using the IIIF Image API format, which offers the user flexibility over size, region, rotation and quality.
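Those Image API parameters can be illustrated with a small helper. This is just a sketch: the base URL and identifier below are placeholders, not our real LUNA paths.

```python
def iiif_image_url(base, identifier, region="full", size="full",
                   rotation="0", quality="default", fmt="jpg"):
    """Build a IIIF Image API URL:
    {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}"""
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# A 150px-wide thumbnail of the whole image (width fixed, height scaled)
thumb = iiif_image_url("https://images.example.ac.uk/iiif", "0070025",
                       size="150,")
print(thumb)
# https://images.example.ac.uk/iiif/0070025/full/150,/0/default.jpg
```

Changing only the `size` (or `region`, `rotation`, `quality`) segment is all it takes to get a different rendition of the same master image.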

Embedded image zooming

Using IIIF images served from the LUNA server and the OpenSeadragon viewer, images can now be zoomed directly on the page, where previously we needed an additional link out to the LUNA repository.

Manifest availability

Based on the images attached to the record in the Vernon CMS, we have built IIIF manifests and made them available, one per object. A manifest is a set of presentation instructions for rendering a set of images according to curatorial choice, and it can be dropped into any standard IIIF viewer. We have created a button to present them in Universal Viewer (UV), and will be adding another to bring in Mirador in due course.
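Under the hood, a (Presentation API 2.x) manifest is just JSON: a label plus a sequence of canvases, each carrying an image annotation. A minimal, hypothetical example of the shape involved (all identifiers invented for illustration, not our live URLs):

```python
# A stripped-down IIIF Presentation 2.x manifest (hypothetical identifiers)
manifest = {
    "@context": "http://iiif.io/api/presentation/2/context.json",
    "@id": "https://collections.example.ac.uk/manifests/obj-123.json",
    "@type": "sc:Manifest",
    "label": "Example object",
    "sequences": [{
        "@type": "sc:Sequence",
        "canvases": [{
            "@type": "sc:Canvas",
            "@id": "https://collections.example.ac.uk/canvas/1",
            "label": "front view",
            "images": [{
                "@type": "oa:Annotation",
                "resource": {
                    "@id": "https://images.example.ac.uk/iiif/0070025/full/full/0/default.jpg",
                    "@type": "dctypes:Image",
                },
            }],
        }],
    }],
}

# Walk the manifest to list the image URLs a viewer such as UV would load
urls = [img["resource"]["@id"]
        for seq in manifest["sequences"]
        for canvas in seq["canvases"]
        for img in canvas["images"]]
print(urls)
```

Because the structure is this regular, any conformant viewer can render the object without knowing anything about Vernon or LUNA.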

Watch this space for more development on these sites in the very near future. The look-and-feel will change significantly, but the task will be made easier with IIIF as a foundation.

Scott Renton, Digital Development

Library Digital Development investigate IIIF

Quick caveat: this post is a partner to the one Claire Knowles has written about our signing up to the IIIF Consortium, so the acronym will not be explained here!

The Library Digital Development team decided to investigate the standard due to its appearance at every Cultural Heritage-related conference we’d attended in 2015, and we thought it would be apposite to update everyone with our progress.

First things first: we have managed to make some progress on displaying IIIF formatting to show what it does. Essentially, the standard allows us to display a remotely served image on a web page, at our choice of size, rotation, mirroring and cropped section, without needing to write CSS or HTML, or use Photoshop to manipulate the image; everything is done through the URL. The Digilib IIIF server was very simple to get up and running (for those who are interested, it is distributed as a Java webapp that runs under Apache Tomcat), so here it is in action, using the standard IIIF URI syntax of http://[server domain]/[webapp location]/[image identifier]/[region]/[size]/[rotation]/[quality].[format], where the rotation segment may be prefixed with ! to mirror the image.

The URL for the following (image 0070025c.jpg/jp2) would be:

[domain]/0070025/full/full/0/default.jpg

[Image: poster]

This URL is saying, “give me image 0070025 (in this case an Art Collection poster), at full resolution, uncropped, unmirrored and unrotated: the standard image”.

[domain]/0070025/300,50,350,200/200,200/!236/default.jpg

[Image: cropped poster detail]

This URL says, “give me the same image, but this time crop to the 350px-wide by 200px-high region whose top-left corner sits 300px from the left and 50px down from the top of the original; return it at a resolution of 200px x 200px, rotate it by 236 degrees, and mirror it”. (In the IIIF region parameter, the four values are x, y, width and height, and the ! prefix on the rotation requests mirroring.)
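Going the other way, the parameter segments can be unpacked from such a URL with a small helper. This sketch assumes the standard segment order and nothing else:

```python
def parse_iiif_params(url):
    """Split a IIIF Image API URL into its parameter segments.
    Region is x,y,width,height; a leading ! on rotation means 'mirror first'."""
    identifier, region, size, rotation, quality_fmt = url.rsplit("/", 4)
    quality, fmt = quality_fmt.rsplit(".", 1)
    return {
        "region": region,
        "size": size,
        "rotation": rotation.lstrip("!"),
        "mirrored": rotation.startswith("!"),
        "quality": quality,
        "format": fmt,
    }

params = parse_iiif_params("[domain]/0070025/300,50,350,200/200,200/!236/default.jpg")
print(params["region"], params["rotation"], params["mirrored"])
# 300,50,350,200 236 True
```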

The server software is only one part of the IIIF Image API: the viewer is very important too. There are a number of different viewers around which will serve up high-resolution zooming of IIIF images, and we tried integrating OpenSeadragon with our Iconics collection to see how it could look when everything is up and running (this is not actually using IIIF interaction at this time, but Microsoft DeepZoom surrogates; it shows our intention, though). We cannot show you the test site, unfortunately, but our plan is that all our collections.ed.ac.uk sites which currently link out to the LUNA image platform, such as Art and Mimed, can have that link replaced with an embedded high-res image like this. At that point, we will be able to hide the LUNA collection from the main site, saving us from having to maintain metadata in two places.

[Image: DeepZoom test embed]

We have also met, as Claire says, with the National Library’s technical department to see how they are getting on with IIIF. They have implemented rather a lot using Klokan’s IIIFServer, and we have investigated using this, with its integrated viewer, on both Windows and Docker. We have only done this locally, so cannot show it here, but it is even easier to set up and configure than Digilib. Here’s a screenshot, to show we’re not lying.

[Image: IIIFServer screenshot]

Our plan to implement the IIIF Image API involves LUNA, though. We already pay them for support and have a good working relationship with them. They are introducing IIIF in their next release, so we intend to use that as an IIIF server. It makes sense: we use LUNA for all our image management, it saves us having to build new systems, and because the software generates JP2K zoomable images, we don’t need to buy anything else to do that (this process is not open, no matter how open source the main IIIF software may be!). We expect this to be available in the next month or so, and the above investigation has been really useful, as the experience with other servers will allow us to push back to LUNA and say “we think you need to implement this!”. Here’s a quick prospective screenshot of how to pick up an IIIF URL from the LUNA interface.

[Image: LUNA IIIF menu]

We still need to investigate more viewers (for practical use) and servers (for investigation), and we need to find out more about the Presentation API, annotations etc., but we feel we are making good progress nonetheless.

Scott Renton, Digital Developer

Bridging Gaps at the British Museum

The overwhelming setting of the British Museum played host to this year’s Museums Computer Group “Museums and the Web” conference, and as usual a big turnout from museum institutions all over the UK came, bursting with ideas and enthusiasm. The theme (“Bridging Gaps and Making Connections”) was intended to encourage thought about identifying the creative spaces between physical museum collections and digital developments, where such gaps are perhaps too big, and how they can be exploited. As usual, there was far too much interesting content to cover fully in a blog post; everything was thought-provoking, but I’ve picked out a few highlights.

Two projects highlighted collaboration between museums, which can be creatively explosive, and immediately improve engagement. Russell Dornan at The Wellcome Institute showed us #MuseumInstaSwap, where museums paired off and filled their social media feeds with the other museum’s content. Raphael Chanay at MuseoMix, meanwhile, arguably took this a step further by getting multiple institutions to bring their objects to a neutral location (Iron Bridge in Shropshire, Derby Silk Mill), and forming teams to build creative prototypes out of them across the digital and physical spaces. Could our museums collections be exploited in similar ways? Who could we partner up with?

I like to think that our “digital and physical” teams in L&UC collaborate very effectively. Keynote speaker John Coburn from TWAM (Tyne and Wear Archives and Museums) spoke of the importance of this intra-institution collaboration. You will (almost) never find a project that is run entirely from within the digital or the physical sphere (Fiona Talbott from the HLF confirmed this: 510 of 512 recent bids had digital outputs relating to physical content), and the ability of the digital area and the content providers to communicate and work together is key. One very good example of this was the Tributaries app, built with sound artists, the history team, archives and so on, to put together an immersive audio experience of lost Tyneside voices from World War I. He also spoke of their TNT (Try New Things) initiative (also creatively explosive!), where staff sign up to do innovation with the collections, effectively in their spare time. With the Innovation Fund encouraging creativity, how do we work this into our daily lives? Can we? If not, how do we incentivise people to do it outwith their spare time?

One of the gloomier observations of the day was that, with austerity, there is less and less money in the sector, which is likely to get worse after next month’s spending review. This austerity can breed creativity, though, and it’s good for digital, because people need to ‘work smarter’.

Another really interesting project is going on at the Tate, where they are combining their content with the Khan Academy learning platform. Rebecca Sinker and colleagues showed us how content can be leveraged and resurrected through a series of video tutorials around that content (be it archival, technical, biographical, etc.). Pushing the collaborative textual content from the comments area on the tutorials through to social media allows further engagement and new perspectives on the museum objects. Speaking personally, I have had little exposure to our VLE, but I’m quite sure that developing an interface between it and our collections sites could be highly beneficial.

That’s all the tip of the iceberg, though, so take a look at the programme link at the top to find out about lots of other interesting projects.

Outside the lecture theatre, I had some really interesting conversations with people who have exactly the same problems as ourselves: building image management workflows, incorporating technological enhancements into content-driven websites, and thinking about beacon technology (the sponsors, Beacontent, deserve top marks for the name at least). Additionally, a tour of the Samsung Digital Discovery Centre, where state-of-the-art technology meets British Museum content to improve the experience for children, teenagers and families, was highly informative.

Scott Renton, Digital Developer

Food for thought at Europeana Tech

[Image: space invader artwork]

While our main contribution to Europeana Tech revolved around the metadata games on this site, there was a veritable feast of things for us to consider in our future work.

In no particular order, I’d just like to say a little bit about the best of them, to focus us on where we could be improving processes.

  • Image strategy in general. It pains me to say this, as such a large proportion of my work in this job has been with the LUNA imaging system, but I can see the way the wind is blowing, and it would be churlish not to acknowledge it. IIIF, the International Image Interoperability Framework, is increasingly becoming the standard for open sharing and hosting of images. With a host of open-source tools for storage and discovery, such as OpenSeadragon, which zooms at least as well as LUNA does, we could be looking at options to have all of our collections in one application, instead of linking out. We could be sharing images to other tools without having to store so many derivatives. We would be in a position of confidence that everything is being done to a standard. It’s still in its infancy, but the Bodleian (who used to use LUNA) have moved over, and the National Library of Wales are using it too.
  • APIs for data. The Europeana APIs are there for our use, to let developers from contributing institutions just get in and build stuff. We could be employing them to pick up metadata for our LUNA images as an alternative to the LUNA API (which we will need to use when the database goes to v7), and thus could employ them in our Flickr integration, our metadata games, and our Google Analytics reporting. More than that, though, with a small tweak we could be pointing the metadata games at the WHOLE of Europeana, thus doing a service for other institutions: getting their data enriched, and supplying them with crowdsourced information. This would be great for our profile.
  • Using Linked Open Data. This comes up again and again, and would definitely come into play if we were to build an authorities repository. Architecturally, the approach is likely to involve RDF, although cataloguing can be done through CIDOC-CRM, from which RDF can be extracted. CIDOC-CRM is looking to have an extension for SPECTRUM, which Vernon uses, so there could be some interesting changes to how Vernon looks in the years ahead.
  • Alternatives to searching. One of the messages that rang out loud and clear at the conference is that people do not go to a museum to DO A SEARCH. Ways of presenting data without a search button as such are being looked at, and some sites which do this are here:
    V&A Spelunker
    Serendipomatic
    Netflixomatic
  • One other thing that occurred to me, thanks to Seb Chan at Cooper Hewitt, in relation to our work for St Cecilia’s: videos which show objects in the round, and 3-D versions. Is it enough to show a flat image, or a bit of audio, for something that sits in a display case?
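On the APIs point above, "just get in and build stuff" can start as small as composing a request URL for the Europeana Search API. A sketch follows; the endpoint is the publicly documented search one, but the API key and query below are placeholders, and no request is actually sent:

```python
from urllib.parse import urlencode

# Documented Europeana Search API endpoint; key and query are placeholders
ENDPOINT = "https://api.europeana.eu/record/v2/search.json"
params = {"wskey": "YOUR_API_KEY", "query": "harpsichord", "rows": 10}
request_url = f"{ENDPOINT}?{urlencode(params)}"
print(request_url)
# https://api.europeana.eu/record/v2/search.json?wskey=YOUR_API_KEY&query=harpsichord&rows=10
```

From there, fetching the JSON response and mapping its items into a metadata game is the same plumbing we already have for our own collections.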

In the tradition of Library Labs, this is a bit of a brain dump, and I will inevitably think of more content for this post over the next few days. It’s a start though!