IIIF Conference, Washington, May 2018

Washington Monument & Reflecting Pool

We, Joe Marshall (Head of Special Collections) and Scott Renton (Library Digital Development), visited Washington DC for the IIIF Conference from 21st to 25th May. This was a great opportunity for L&UC, not only to visit the Library of Congress (the mecca of our industry in some ways) but also to come back with a wealth of knowledge which we can use to inform how we operate.

Edinburgh gave two papers: the two of us delivered a talk on Special Collections discovery at the Library and how IIIF could make it all more comprehensible (including the Mahabharata Scroll), and Scott spoke with Terry Brady of Georgetown University on how IIIF has improved our respective repository workflows.

On a purely practical level, it was great to meet face to face with colleagues from across the world; we have a very real example of a problem solved with Drake from LUNA, which we hope to be able to show very soon. It was also interesting to see how the API specs are developing. The Presentation API will be enhanced with AV in version 3, and we can already see some use cases with which to try this out. Search and discovery are APIs we’ve done nothing with yet, but they will help with searching within and across items, which is essential to our estate of systems. 3D, while not having an API of its own, is also being addressed by IIIF, and it was fascinating to see the work that Universal Viewer and Sketchfab (which the DIU use) are doing to accommodate it.

The community groups are growing too, and we hope to increase our involvement with some of the less technical areas (Manuscripts, Museums, and the newly-proposed Archives group) in the near future.

Among a wealth of great presentations, we’ve each identified one as our favourite:

Scott: Chifumi Nishioka – Kyoto University, Kiyonori Nagasaki – The University of Tokyo: Visualizing which parts of IIIF images are looked by users

This fascinating talk highlighted IIIF’s ability to work out which parts of an image, when zoomed in, are most popular. Often this is done by installing special tools such as eye trackers, but because of the nature of IIIF, where the requested region forms part of the URL, the same information can be visualised simply by interrogating Apache access logs. Chifumi and Kiyonori have been able to generate heatmaps of the most interesting regions of an item, and the code can be reused wherever the logs can be supplied.
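As a flavour of the approach (a minimal sketch rather than their implementation), a Python script could pull the region parameter out of each logged IIIF Image API request and accumulate hit counts on a coarse grid; the log path, image identifier and cell size below are all assumptions:

```python
import re
from collections import defaultdict

# IIIF Image API request path: .../{identifier}/{x,y,w,h}/{size}/{rotation}/{quality}.{format}
# Only explicit x,y,w,h regions contribute to the heatmap; "full" requests are skipped.
REGION_RE = re.compile(
    r'GET \S+/(?P<id>[^/]+)/(?P<x>\d+),(?P<y>\d+),(?P<w>\d+),(?P<h>\d+)/'
)

def region_counts(log_path, identifier, cell=100):
    """Count how often each cell x cell pixel block of one image is requested."""
    counts = defaultdict(int)
    with open(log_path) as log:
        for line in log:
            m = REGION_RE.search(line)
            if not m or m.group('id') != identifier:
                continue
            x, y, w, h = (int(m.group(k)) for k in ('x', 'y', 'w', 'h'))
            for gx in range(x // cell, (x + w) // cell + 1):
                for gy in range(y // cell, (y + h) // cell + 1):
                    counts[(gx, gy)] += 1
    return counts  # plot with matplotlib (or overlay in a viewer) to get the heatmap

hits = region_counts('access.log', 'example-image-001')
```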

Joe: Kyle Rimkus – University of Illinois at Urbana-Champaign, Christopher J. Prom – University of Illinois at Urbana-Champaign: A Research Interface for Digital Records Using the IIIF Protocol

This talk showed the potential of IIIF in the context of digital preservation, providing large-scale public access to born-digital archive records without having to create exhaustive item-level metadata. The IIIF world is encouraging this kind of blue-sky thinking, which is going to challenge many of our traditional professional assumptions and allow us to be more creative with collections projects.

It was a terrific trip, which has filled us with enthusiasm for pushing on with IIIF beyond its already significant place in our set-up.

Joe Marshall & Scott Renton

Library Of Congress Exhibition

Museum sites go IIIF

The main portal into Library and University Collections’ Special Collections content, collections.ed.ac.uk, is changing. A design overhaul which will improve discovery both logically and aesthetically is coming very soon, but in advance, we’ve implemented an important element of functionality, namely the IIIF approach to images.

Two sites have been updated to that end: Art (https://collections.ed.ac.uk/art – 2859 IIIF images in 2433 manifests across 4715 items) and Musical Instruments (https://collections.ed.ac.uk/mimed – 8070 IIIF images in 4097 manifests across 5105 items) now feature direct IIIF thumbnails, embedded image zooming and manifest availability. A third site, the St Cecilia’s Hall collection (https://collections.ed.ac.uk/stcecilias), already had the first two elements, but manifests for its items are now available to the user.

What does this all mean? To take each element in turn:

Direct IIIF thumbnails

The search results pages on the site no longer directly reference images on the collections.ed servers, but bring in a LUNA URL using the IIIF image API format, which offers the user flexibility on size, region, rotation and quality.
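As an illustration (the identifier and size here are invented rather than copied from the live site), a 200px-wide thumbnail of the whole image can be requested with a URL along these lines, and changing the size segment is all it takes to ask for a different resolution:

[LUNA domain]/[image identifier]/full/200,/0/default.jpg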

Embedded image zooming

Using IIIF images served from the LUNA server and the OpenSeadragon viewer, images can now be zoomed directly on the page, where previously we needed an additional link out to the LUNA repository.

Manifest availability

Based on the images attached to the record in the Vernon CMS, we have built IIIF manifests and made them available, one per object. A manifest is a set of presentation instructions for rendering a group of images according to curatorial choice, and it can be dropped into any standard IIIF viewer. We have created a button to present them in Universal Viewer (UV), and will be adding another to bring in Mirador in due course.
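For anyone curious what one of these looks like under the hood, here is a very stripped-down sketch of a Presentation API 2.x manifest with a single canvas, built in Python and serialised to JSON; the identifiers, labels and dimensions are invented for illustration rather than taken from our Vernon-driven pipeline:

```python
import json

BASE = "https://example.org/iiif"  # placeholder, not our live service

manifest = {
    "@context": "http://iiif.io/api/presentation/2/context.json",
    "@id": f"{BASE}/manifest/example-object.json",
    "@type": "sc:Manifest",
    "label": "Example object",
    "sequences": [{
        "@type": "sc:Sequence",
        "canvases": [{
            "@id": f"{BASE}/canvas/example-object-1",
            "@type": "sc:Canvas",
            "label": "View 1",
            "height": 4000,
            "width": 3000,
            "images": [{
                "@type": "oa:Annotation",
                "motivation": "sc:painting",
                "on": f"{BASE}/canvas/example-object-1",
                "resource": {
                    "@id": f"{BASE}/image/example-object-1/full/full/0/default.jpg",
                    "@type": "dctypes:Image",
                    "service": {
                        "@context": "http://iiif.io/api/image/2/context.json",
                        "@id": f"{BASE}/image/example-object-1",
                        "profile": "http://iiif.io/api/image/2/level1.json",
                    },
                },
            }],
        }],
    }],
}

print(json.dumps(manifest, indent=2))  # drop the output into UV or Mirador to test
```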

Watch this space for more development on these sites in the very near future. The look-and-feel will change significantly, but the task will be made easier with IIIF as a foundation.

Scott Renton, Digital Development

Automated item data extraction from old documents

Overview

The Problem

We have a collection of historic papers from the Scottish Court of Session. These are collected into cases and bound together in large volumes, with no catalogue or item data other than a shelfmark. If you wish to find a particular case within the collection, you are restricted to a manual, physical search of likely volumes (if you’re lucky you might get an index at the start!).

Volumes of Session Papers in the Signet Library, Edinburgh

The Aim

I am hoping to use computer vision techniques, OCR, and intelligent text analysis to automatically extract and parse case-level data in order to create an indexed, searchable digital resource for these items. The Digital Imaging Unit have digitised a small selection of the papers, which we will use as a pilot to assess the viability of the above aim.

Stage One – Image preparation

Using Python and OpenCV to extract text blocks

I am indebted to Dan Vanderkam‘s work in this area, especially his blog post ‘Finding blocks of text in an image using Python, OpenCV and numpy’, upon which this work is largely based.

The items in the Scottish Session Papers collection differ from the images that Dan was processing: they are older works, printed with a letterpress rather than typewritten.

The Session Papers images are lacking a delineating border, backing paper, and other features that were used to ease the image processing. In addition, the amount, density and layout of text items is incredibly varied across the corpus, further complicating the task.

The initial task is to find a crop of the image to pass to the OCR engine. We want to give it as much text as possible in as few pixels as possible!

Due to the nature of the images, there is often a small amount of text from the opposite page visible (John’s blog explains why) and so to save some hassle later, we’re going to start by cropping 50px from each horizontal side of the image, hopefully eliminating these bits of page overspill.
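In OpenCV/numpy terms, that first crop is a simple slice of the image array (the file names here are placeholders):

```python
import cv2

img = cv2.imread("page_original.jpg")
w = img.shape[1]

# Trim 50px from the left and right edges to drop any overspill
# from the facing page caught at the edge of the photograph.
MARGIN = 50
cropped = img[:, MARGIN:w - MARGIN]
cv2.imwrite("page_cropped.jpg", cropped)
```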

A cropped version of the page

Now that we have the base image to work on, I’ve started with the simple steps of converting it to grayscale, and then applying an inverted binary threshold, turning everything above ~75% gray to white, and everything else to black. The inversion is to ease visual understanding of the process. You can view full size versions by clicking each image.

Grayscale
75% Threshold

The ideal outcome is that we eliminate smudges and speckles, leaving only the clear printed letters. This entailed some experimenting with the threshold level: as you can see in the image above, a lot of speckling remains. Dropping the threshold to only leave pixels above ~60% gray was a large improvement, and to ~45% even more so:

60% Threshold
45% Threshold

At a threshold of 45%, some of the letters are also beginning to fade, but this should not be an issue, as we have successfully eliminated almost all the noise, which was the aim here.
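For reference, the conversion and thresholding steps boil down to a couple of OpenCV calls; mapping a “45% gray” cutoff onto OpenCV’s 0-255 scale as 0.45 × 255 is my reading of it rather than the exact value used:

```python
import cv2

img = cv2.imread("page_cropped.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Inverted binary threshold: darker pixels (the printed letters) come out
# white, anything lighter than the cutoff comes out black.
# "45% gray" mapped to 0.45 * 255 here - an assumption, the polarity of the
# percentage may need flipping depending on how you read it.
cutoff = int(0.45 * 255)
_, binary = cv2.threshold(gray, cutoff, 255, cv2.THRESH_BINARY_INV)
cv2.imwrite("page_threshold_45.png", binary)
```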

We’re still left with a large block at the top, which was the black backing behind the edge of the original image. To eliminate this, I experimented with several approaches:

  • Also crop 50px from the top and bottom of the images – unfortunately this had too much “collateral damage”, as a large number of the images have text within this region.
  • Dynamic cropping based on removing any segments touching the top and bottom of the image – this was a more effective approach, but the logic for determining the crop became a bit convoluted.
  • Using Dan’s technique of applying Canny edge detection and then using a rank filter to remove ~1px edges (see the sketch below) – this was the most successful approach, although it still had some issues when the text had a non-standard layout.
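A rough sketch of that Canny-plus-rank-filter step is below; the window sizes and the rank value are assumptions that would need tuning, and the idea is simply that a ~1px border line has too few edge pixels across it to survive both filters, while blocks of text do:

```python
import cv2
import numpy as np
from scipy.ndimage import rank_filter

gray = cv2.imread("page_cropped.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, 100, 200)

# Keep an edge pixel only if several of its neighbours along the row AND
# along the column are also edges: thin straight border lines fail one of
# the two tests, dense text edges pass both.
maxed_rows = rank_filter(edges, -4, size=(1, 20))
maxed_cols = rank_filter(edges, -4, size=(20, 1))
debordered = np.minimum(np.minimum(edges, maxed_rows), maxed_cols)

cv2.imwrite("edges_debordered.png", debordered)
```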

I settled on the Canny/Rank filter approach to produce these results:

Result of Canny edge finder
With rank filter

Next up, we want to find a set of masks that covers the remaining white pixels on the page. This is achieved by repeatedly dilating the image until only a few connected components remain.
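Roughly, that loop looks like this; the kernel size and the stopping condition (a handful of remaining components) are assumptions to be tuned rather than the exact values used:

```python
import cv2
import numpy as np

binary = cv2.imread("edges_debordered.png", cv2.IMREAD_GRAYSCALE)

kernel = np.ones((3, 3), np.uint8)
dilated = binary.copy()
while True:
    dilated = cv2.dilate(dilated, kernel, iterations=1)
    # connectedComponents counts the background as label 0, hence the -1.
    n_components, _ = cv2.connectedComponents(dilated)
    if n_components - 1 <= 16:
        break

cv2.imwrite("page_dilated.png", dilated)
```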

You can see here that the “faded” letters from the thresholding above have enough presence to be captured by the dilation process. These white blocks now give us a pretty good record of where the text is on the page, so we now move onto cropping the image.

Dan’s blog has a good explanation of solving the Subset Sum problem for a dilated image, so I will apply his technique (start with the largest white block, and add more if they improve the amount of white pixels at a favourable increase in total area size, with some tweaking to the exact ratio):

With final bounding
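A sketch of that greedy selection follows; the improvement ratio is a tweakable guess, and the OpenCV 4.x findContours signature is assumed:

```python
import cv2
import numpy as np

def choose_crop(dilated, binary, ratio=0.15):
    """Grow the crop box out from the blob containing the most text pixels,
    adding further blobs only when the text they contribute justifies the
    extra area they bring in."""
    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]  # (x, y, w, h) per blob

    def white_in(box):
        x, y, w, h = box
        return int(np.count_nonzero(binary[y:y + h, x:x + w]))

    boxes.sort(key=white_in, reverse=True)
    x, y, w, h = boxes[0]
    crop = [x, y, x + w, y + h]
    for bx, by, bw, bh in boxes[1:]:
        new = [min(crop[0], bx), min(crop[1], by),
               max(crop[2], bx + bw), max(crop[3], by + bh)]
        gained = white_in((bx, by, bw, bh))
        extra = ((new[2] - new[0]) * (new[3] - new[1])
                 - (crop[2] - crop[0]) * (crop[3] - crop[1]))
        if extra == 0 or gained / extra > ratio:
            crop = new
    return crop  # (x0, y0, x1, y1) to slice from the original image
```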

So finally, we apply this crop to the original image:

Final cropped version

As you can see, we’ve now managed to accurately crop out the text from the image, helping to significantly reduce the work of the OCR engine.

My final modified version of Dan’s code can be found here: https://github.com/mbennett-uoe/sp-experiments/blob/master/sp_crop.py

In my next blog post, I’ll start to look at some OCR approaches and also go through some of the outliers and problem images and how I will look to tackle this.

Comments and questions are more than welcome 🙂

Mike Bennett – Digital Scholarship Developer


IIIF Technical Workshop and Showcase March 2017

Improving Access to Image Collections

On 16th and 17th March the University of Edinburgh and National Library of Scotland will be hosting two International Image Interoperability Framework events.

IIIF Showcase

The IIIF Showcase brings together developers and early adopters to explain the background and value of IIIF, its growing community, and the potential of the Framework and the innovative ways in which it can be used to present digital image collections. There will be presentations from Edinburgh University Library, National Library of Scotland, National Library of Wales, Durham University, University College Dublin, The Bodleian Library, Digirati, Cogapp and others.

Logistics

  • Registration: Registration is free but capacity is limited.
  • Date: Friday, March 17, 2017
  • Location: National Library of Scotland (NLS) Boardroom on George IV Bridge (see map)
  • Audience: Individuals and institutional representatives interested in learning more about IIIF
  • Code of Conduct: The IIIF Code of Conduct applies to all IIIF events and related activities.
  • Social Media: Tweets about the event should use #iiif and @iiif_io.

IIIF Technical Workshop

The IIIF Technical Workshop unconference, hosted by the University of Edinburgh at Argyle House, will bring together colleagues who have implemented IIIF services, are developing the Framework and associated tools, or are working on community initiatives. The workshop will provide opportunities to discuss implementations, issues, initiatives and developments, as well as the forthcoming Annual IIIF conference in June.

Logistics

  • Registration: Registration is free but capacity is limited.
  • Date: Thursday, March 16, 2017
  • Location: University of Edinburgh Argyle House (see map)
  • Audience: Developers already working with IIIF or considering an implementation
  • Code of Conduct: The IIIF Code of Conduct applies to all IIIF events and related activities.
  • Social Media: Tweets about the event should use #iiif and @iiif_io.

Library Digital Development investigate IIIF

Quick caveat: this post is a partner to the one Claire Knowles has written about our signing up to the IIIF Consortium, so the acronym will not be explained here!

The Library Digital Development team decided to investigate the standard due to its appearance at every Cultural Heritage-related conference we’d attended in 2015, and we thought it would be apposite to update everyone on our progress.

First things first: we have made some progress in getting IIIF up and running so that we can show what it does. Essentially, the standard allows us to display a remotely-served image on a web page, with our choice of size, rotation, mirroring and cropped section, without needing to write CSS or HTML, or use Photoshop to manipulate the image; everything is done through the URL. The Digilib IIIF Server was very simple to get up and running (for those that are interested, it is distributed as a Java webapp that runs under Apache Tomcat), so here it is in action, using the standard IIIF URI syntax: http://[server domain]/[webapp location]/[specific image identifier]/[region]/[size]/[mirror][rotation]/[quality].[format]

The URL for the following (image 0070025c.jpg/jp2) would be:

[domain]/0070025/full/full/0/default.jpg

Poster

This URL is saying, “give me image 0070025 (in this case an Art Collection poster), at full resolution, uncropped, unmirrored and unrotated: the standard image”.

[domain]/0070025/300,50,350,200/200,200/!236/default.jpg

posterbit

This URL says, “give me the same image, but this time show me the region starting 300px in from the left and 50px down from the top of the original, 350px wide and 200px high; return it at a size of 200px x 200px, mirror it, and rotate it by 236 degrees”.
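Because the parameters always appear in the same order, these URLs are easy to assemble programmatically. Here is a small illustrative Python helper (not part of Digilib or LUNA) that reproduces the two examples above:

```python
def iiif_url(domain, identifier, region="full", size="full",
             rotation=0, mirror=False, quality="default", fmt="jpg"):
    """Build a IIIF Image API URL: {region}/{size}/{rotation}/{quality}.{format}.
    A '!' in front of the rotation asks the server to mirror before rotating."""
    rot = ("!" if mirror else "") + str(rotation)
    return f"{domain}/{identifier}/{region}/{size}/{rot}/{quality}.{fmt}"

# The two examples above:
print(iiif_url("[domain]", "0070025"))
print(iiif_url("[domain]", "0070025", region="300,50,350,200",
               size="200,200", rotation=236, mirror=True))
```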

The server software is only one half of the picture: the viewer is very important too. There are a number of viewers around which will serve up high-resolution zooming of IIIF images, and we tried integrating OpenSeadragon with our Iconics collection to see how it could look when everything is up and running (this is not actually using IIIF interaction at this time, rather Microsoft DeepZoom surrogates, but it shows our intention). We cannot show you the test site, unfortunately, but our plan is that all our collections.ed.ac.uk sites, such as Art and Mimed, which currently link out to the LUNA image platform, will have that link replaced with an embedded high-res image like this. At that point, we will be able to hide the LUNA collection from the main site, saving us from having to maintain metadata in two places.

deepzoom

We have also met, as Claire says, with the National Library’s technical department to see how they are getting on with IIIF. They have implemented rather a lot using Klokan’s IIIFServer, and we have investigated using this, with its integrated viewer, on both Windows and Docker. We have only done this locally, so cannot show it here, but it is even easier to set up and configure than Digilib. Here’s a screenshot, to show we’re not lying.

eyes

Our plan to implement the IIIF Image API involves LUNA, though. We already pay them for support and have a good working relationship with them. They are introducing IIIF in their next release, so we intend to use that as our IIIF server. It makes sense: we use LUNA for all our image management, it saves us having to build new systems, and because the software generates JP2K zoomable images, we don’t need to buy anything else to do that (this process is not open, no matter how Open Source the main IIIF software may be!). We expect this to be available in the next month or so, and the above investigation has been really useful, as the experience with other servers will allow us to push back to LUNA and say “we think you need to implement this!”. Here’s a quick prospective screenshot of how to pick up a IIIF URL from the LUNA interface.

IIIFMenu

We still need to investigate more viewers (for practical use) and servers (for investigation), and we need to find out more about the Presentation API, annotations etc., but we feel we are making good progress nonetheless.

Scott Renton, Digital Developer

Board Game Jam: Creating Openly-Licensed Board Games

At Innovative Learning Week this year we worked with students to develop board games using images from the CRC Flickr account as inspiration. Their challenge was to design a game which used at least three images from Open Educational Resource sites, one of which had to come from the CRC collection. The games also had to include at least three different game mechanics, be openly licensed and have a full set of rules.

Our groups created four fantastic and diverse games and we filmed them explaining their games. Read more below and view the full playlist at https://www.youtube.com/playlist?list=PLwJ2VKmefmxqqLjTK3kQrsASfefaVWz_K.

Apocalypse Later

Apocalypse Later is a card game in which players cooperate to overcome challenges ranging from volcano eruptions through to a zombie apocalypse, drawing and playing cards to gain advantages and advance in the game. One character is secretly a ‘mole’, whose sole purpose is to prevent the team from winning the game! The game features images from Anton Koberger’s German bible, the seal of Robert the Bruce and a decorated page from the Hours of the Blessed Virgin Mary from the CRC image collection.

Game rules: bit.ly/1TgeKbf

Cultured Ai (Arts for Ai)

In this art-themed board game, players take control of larvae hunting for works of art in various locations across the University. The larvae are highly cultured beings and need inspiration from art works in order to stay alive! Players draw cards representing different types of art (e.g. painting, sculpture) and have to decide whether to play them immediately for in-game bonuses / penalties or retain them for scoring at the end of the game. The player with the highest art value at the end is the winner. Cultured Ai (Arts for Ai) uses CRC plans of McEwan Hall, the Medical School and Glencoe Ballachulish for the game board.

Game Rules: Bit.ly/1mwFqGk

The Mouse Hunt

In The Mouse Hunt, players compete in two teams vying for domination of an 18th century Edinburgh tenement! On one side, a team of mice attempts to drive the human inhabitants mad by digging tunnels and making a lot of noise. On the other side, humans set traps and try to rid the house of the rodent infestation! The house in which the game is set was inspired by historical images of Edinburgh from the CRC collection.

Game Rules: Bit.ly/1ox6G9y

Mythical Continents

In Mythical Continents, players sail the seven seas fighting monsters and collecting relics hidden across the globe. Movement is governed by a wind dial (modelled on the Kalendar and Astronomical Tables from the CRC collection) and players compete to bring all treasures back to Nessie, drawing event and monster cards along the way!

Game Rules: bit.ly/20Zi3os

We had great fun designing board games and would be very keen to run the session again – please do get in touch if you’d be interested in being involved!

More information on finding, creating, and sharing your own Open Educational Resources can be found on the Open.Ed website.

Gavin Willshaw and Stephanie (Charlie) Farley

(Thanks also to Danielle Howarth for all the pictures and videos!)

IIIF – International Image Interoperability Framework

The next big thing
Inspirational quote on the side of a University of Ghent building, St. Pietersnieuwstraat 33.

Adoption of IIIF (the International Image Interoperability Framework) for digitised images has been gaining momentum over the past few years. Serving images via IIIF allows users to rotate, zoom, crop, and compare images from different institutions side by side. Scott and I attended the IIIF conference in Ghent earlier this month to learn more, so that we can decide how to move forward with adopting IIIF for our images at the University of Edinburgh.

On the Monday we attended a technical meeting at the University of Ghent Library; this session really helped us to understand the architecture of the two IIIF APIs (Image and Presentation) and to speak to others who have implemented IIIF at their institutions.

The main event was on Tuesday at the beautiful Ghent Opera House, where there were lots of short presentations about different use-cases for IIIF adoption and the different applications that have been developed. If you are interested in adopting IIIF at your institution, I recommend looking at Glen Robson’s slides on how the National Library of Wales has implemented IIIF. I can see myself coming back to these slides again and again, along with those on the two APIs.

Whilst we were in Ghent there was a timely update from LUNA Imaging, whose application we use as an imaging repository, on their plans to support IIIF.

Thanks to everyone we met in Ghent who was willing to share with us their experiences of implementing IIIF and to the organisers for a great event in a beautiful city (and our stickers).

IIIF Meeting in Ghent Opera House

If you want to keep up to date with IIIF development please join the Google Group iiif-discuss@googlegroups.com

Claire Knowles and Scott Renton

Library Digital Development Team


Opening Doors with Bluetooth Beacons

Location-based intelligence is a growing area of importance in the academic library environment (as identified in the most recent NMC Horizon Report) and we’ve been exploring how Bluetooth beacons can be used to deliver information and content to users based on their location in the library space.

Earlier this year we used the technology with Google Glass to create an immersive visitor experience as part of the Something Blue exhibition: beacons were placed next to several exhibits in the gallery space and when users came within proximity, a video was activated on the Glass headset.

More recently, we’ve started to explore ways in which beacons can be used to provide tours of the library building itself. There are several potential use cases for this, such as a tour for new undergraduates showing them where key services are located, or a tour of the paintings on display in the main library for art enthusiasts, but we decided to create a tour of the building for the general public in order to tie in with Doors Open Day 2015. Our library was designed by the British architect Sir Basil Spence and A-listed in 2006: its history is of real interest to our visitors.

Working with colleagues from across Information Services, we developed a tour app (available from the Apple App Store and Google Play) which uses beacons to tell the story of the library building and service.

Doors Open Day App 1

Beacons were set up at seven locations and users who had installed the app on their phone were sent a notification whenever they came into proximity of one – tapping the notification provided the user with a short, 1-2 minute long, video about the area they were in, such as this general introduction to the building:

https://vimeo.com/145490614

We had originally hoped to use beacons to create a form of ‘internal GPS’ to show the user their location in the library space (much like the blue ‘you are here’ dot on Google Maps) but we found that their inaccuracy over three metres made it impossible to trilaterate location accurately enough.
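For anyone wondering what that involves, a least-squares trilateration sketch is below; the beacon coordinates and distances are invented, and in practice the distances estimated from beacon signal strength were out by several metres, which is what made the computed position wander too much to be useful:

```python
import numpy as np

def trilaterate(beacons, distances):
    """Least-squares 2D position estimate from known beacon positions and
    estimated ranges. Subtracting the first circle equation from the others
    linearises the problem into Ax = b."""
    (x0, y0), d0 = beacons[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(beacons[1:], distances[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos  # (x, y) in the same units as the beacon coordinates

# Three beacons at known points in a room, with noisy range estimates in metres:
print(trilaterate([(0, 0), (10, 0), (0, 8)], [5.0, 7.1, 6.3]))
```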

Doors-Open-Day-App-4

Around 50 people downloaded the tour over the weekend and the feedback was extremely positive. We learned some important lessons from this application, which will inform future uses of technology in this way.

  • Have a backup content delivery mechanism: make sure the content can be accessed manually through the app if the beacons don’t work. This also allows visitors to access the videos once they have left the building.
  • Have staff on hand to help people download the app: many visitors needed assistance to access the Wi-Fi network and download the app from the relevant online store.
  • Make sure Wi-Fi is available: provide Wi-Fi so that visitors don’t need to use their data connection to download the app, particularly as apps can be quite large (our app was 50MB). We set up a Wi-Fi hotspot for people who didn’t have access to the network.
  • Provide some basic signage in the physical space: let people know when they are in a beacon zone and provide QR codes linking to the app stores in order to assist with the download.
  • Bluetooth: make sure users switch it on!
  • The content is more important than the medium: we got good feedback for our experiment but, ultimately, a beacon is just a delivery mechanism and it was crucial that we provided high quality content. It took in the region of 50 staff hours to create seven two-minute videos.
  • Having to download an app is a huge barrier: the need to download an app prevented many visitors from engaging with the tour.

We’re continuing to explore the use of beacons in the library space and recently secured funding to see how Google’s new Eddystone beacon can be used to provide information and updates to library users throughout the building. We are especially keen on exploring the potential for Eddystone to bypass the need to download an app and will blog more as the project progresses!

Gavin Willshaw (Library & University Collections) , Ben Butchart (Edina), Sandy Buchanan (Edina), Claire Knowles (Library & University Collections)