A Stitch in Time: Mahābhārata Delivered Online

I don’t know if I have ever been more excited about a digitisation project going live: Edinburgh University’s 1795 copy of the Mahābhārata is now available online. This beautiful scroll contains one of the longest poems ever written: a staggering 200,000 verses spread along 72 metres of richly decorated silk-backed paper. As one of the iconic items in our collection it was marked as a digitisation priority, so when a customer requested the 78 miniatures back in April 2017 it seemed like a good opportunity to digitise it in its entirety. There was just one problem: it was set to go on display in the ‘Highlands to Hindustan’ exhibition, which opened at the Library in July. This left us with only a narrow window of opportunity for the first stages of the project: conservation and photography.

Before we could even start the photography, Conservator Emily Hick had to stabilise the scroll’s edges, which the wooden housing was causing to fray and snag. We had a plate made to insert under the scroll to provide a flat area for Emily to work on, which could also be used during photography.

By June, conservation had been completed and photography was underway. This original required a certain amount of problem-solving when planning the studio set-up. The scroll is elaborately decorated in gold and has dense, tiny text, so fine that it is difficult to read with the naked eye; it had to be photographed at roughly 2x magnification to be legible. Despite the housing and roller arrangement, the scroll moved laterally by around 1cm from side to side as it was wound on. It would also flex and ripple, and sometimes slide backwards as soon as the tension on the winding mechanism was released. We were using a Hasselblad H5D-100 multi-shot camera, which requires a perfectly still subject, so any movement was particularly problematic.

Furthermore, while the extensive gold is part of what makes this item so special, photographically it adds further layers of complexity. Gold illuminations can be hard to light: light from the side and it barely looks like gold; light from above and you risk vast areas of specular burn-out. The rippling also caused the gold to reflect light at different angles, complicating the next stage of the project: stitching the images back together. Because of the fragility of the surface and the critical light positioning, we were not able to use glass to steady the surface, instead opting for ‘snakes’, the strings of weights used in the Special Collections Reading Room.

The photographs below show the studio set-up used:

Both scroll and camera were levelled and made parallel to the frame. LED lighting was placed high and close in on either side, but not directly overhead, to allow good gold rendering, and I used white tissue over the top to bounce a little more light down.

A calibration was done to ensure the lighting was even and neutral. I allowed a large overlap between frames to aid stitching, as shown here by the horizontal guides. The vertical guides show that the box is parallel to the frame but that the scroll isn’t: this shows how it would meander from side to side. Focus was set manually, and I used a smaller-than-usual aperture of f/16 to give a little more depth of field to account for the ripples in the manuscript.

Undertaking a stitching job of this scale was a significant project for the DIU: even assuming I could successfully stitch together the full scroll, it would make for a file of almost unimaginable size. And how would we deliver that in a useful format online?

When we blogged about the scoping of this project back in 2016 (see http://libraryblogs.is.ed.ac.uk/diu/2016/05/24/a-lengthy-challenge-photographing-the-mahabharata/), a comment from Stuart M pointed us in the direction of a possible delivery method. My colleague Scott Renton from the Digital Library Team started doing some research and discovered that developments in IIIF (see http://iiif.io) would allow for scroll delivery using a tiling system, similar to how Google Earth works. Our image delivery system, LUNA, had just adopted IIIF, so I needed to join our images together, only to cut them back up again into seamless tiles!
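To give a sense of what ‘tiles’ means here: a IIIF image server exposes each image through a URL pattern of {region}/{size}/{rotation}/{quality}.{format}, and the viewer requests only the tiles it needs at the current zoom level. A minimal sketch, assuming a hypothetical server and identifier (the URL pattern itself is the IIIF Image API syntax):

```python
# Sketch: how a IIIF viewer fetches one tile of a zoomable image.
# The server and identifier below are hypothetical; the URL pattern
# ({region}/{size}/{rotation}/{quality}.{format}) is IIIF Image API 2.x syntax.
BASE = "https://images.example.ac.uk/iiif/mahabharata-section-01"

def tile_url(x, y, w, h, scale=1):
    """Request the region (x, y, w, h) of the full image, scaled down by `scale`."""
    region = f"{x},{y},{w},{h}"
    size = f"{w // scale},"  # width only; the server preserves aspect ratio
    return f"{BASE}/{region}/{size}/0/default.jpg"

# e.g. the 750px tile at the top-left corner, at full resolution:
print(tile_url(0, 0, 750, 750))
# -> https://images.example.ac.uk/iiif/mahabharata-section-01/0,0,750,750/750,/0/default.jpg
```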

In recent years there has been some development in the automation of image stitching, with improved algorithms in software such as Adobe Photoshop and PTGui. However, I found that the dense text, the variations in the gold rendering and the ripple movement of the scroll all conspired against successful automation. Stitching the images together manually took just under three months of work, so the completion of the scroll was a moment of celebration in the DIU!
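For anyone curious what an automated attempt looks like, here is a minimal sketch using OpenCV’s scan-mode stitcher; OpenCV stands in here for the commercial tools mentioned above and is not what we actually used, and the file names are hypothetical:

```python
# A minimal sketch of automated stitching, using OpenCV as a stand-in for
# the commercial tools mentioned above (not the software the DIU used).
import cv2

# Load a few consecutive, overlapping frames of the scroll (paths hypothetical).
frames = [cv2.imread(f"scroll_frame_{i:03d}.tif") for i in range(1, 4)]

# SCANS mode assumes a flat original and a translating camera rather than
# a rotating one, which matches a copy-stand set-up like ours.
stitcher = cv2.Stitcher.create(cv2.Stitcher_SCANS)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("scroll_stitched.tif", panorama)
else:
    # The outcome we kept hitting: repetitive text and shifting gold
    # reflections defeat the feature matching.
    print(f"Stitching failed with status {status}")
```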

You can hear more about this project from Dr Naomi Appleton, Senior Lecturer in Asian Religions, and Conservator Emily Hick in this video: https://youtu.be/spYehaknO_A

I’ll hand over to Scott now, who can tell you more about rebuilding the tiles for online delivery.

Susan Pettigrew, Photographer, Digital Imaging Unit

Warning: IIIF parlance may be used in this passage! Unfamiliar terms are beautifully defined at http://iiif.io.

Initially, we thought that delivering the scroll online through IIIF should be achievable, but we hadn’t really investigated that side of the framework, so it was only a feeling to begin with. Some conversations with Digirati (Tom Crane and Matt McGrattan) made it more concrete, suggesting that the code that drives the Universal Viewer (originally created for the Wellcome Trust (see https://wellcome.ac.uk/home), but now IIIF’s most popular read-only Presentation API viewer) had some special characteristics. One was that a viewingHint:continuous parameter on the manifest would force the viewer to present its sequence of canvases with no gaps between them, and a viewingDirection:top-to-bottom argument would tell it which way to build them up; in other words, it should be possible to display the scroll as a whole object. Essentially this, and Susan and Emily’s hard work, meant that the heavy lifting had really been done before I started to work with the images.
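For readers who haven’t met these properties before, this is roughly where they sit in a Presentation 2.x manifest, sketched here as a Python dict with the canvas list elided; viewingHint and viewingDirection are the real property names:

```python
# The relevant corner of a IIIF Presentation 2.x manifest, as a Python dict.
# "viewingHint" and "viewingDirection" are the real property names that make
# the Universal Viewer render the canvases as one continuous, top-to-bottom object.
sequence = {
    "@type": "sc:Sequence",
    "viewingHint": "continuous",          # no gaps between canvases
    "viewingDirection": "top-to-bottom",  # stack them downwards
    "canvases": [
        # ...one canvas per photographed section of the scroll...
    ],
}
```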

To start, I created a new collection in LUNA (so as not to cause any duplication with the nicely cropped highlights in Oriental Manuscripts), uploaded two images, and exported them using the front-end “share this” functionality to create a collection manifest of search results. This would not be usable by the Universal Viewer, though, so I wrote a Python script which would parse its way through the exported manifest and generate a new one based on the requirements for scroll creation. I then added in the relevant bits and pieces to allow it to show. It worked immediately.
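The gist of that script can be sketched in a few lines, assuming the LUNA export is a standard Presentation 2.x manifest (the file names here are hypothetical; the manifest URL is the real one we ended up with):

```python
# A minimal sketch of a manifest-rewriting script of the kind described above:
# it lifts the canvases out of LUNA's exported manifest and wraps them in a
# new sequence carrying the scroll-specific properties. File names are
# hypothetical; the structure assumes a standard IIIF Presentation 2.x export.
import json

with open("luna_export.json") as f:
    exported = json.load(f)

# Gather every canvas from the export, in order.
canvases = [c for seq in exported.get("sequences", [])
              for c in seq.get("canvases", [])]

scroll_manifest = {
    "@context": "http://iiif.io/api/presentation/2/context.json",
    "@id": "https://librarylabs.ed.ac.uk/iiif/manifest/mahabharataFinal.json",
    "@type": "sc:Manifest",
    "label": exported.get("label", "Mahabharata scroll"),
    "sequences": [{
        "@type": "sc:Sequence",
        "viewingHint": "continuous",
        "viewingDirection": "top-to-bottom",
        "canvases": canvases,
    }],
}

with open("mahabharata_scroll.json", "w") as f:
    json.dump(scroll_manifest, f, indent=2)
```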

However, there is a difference between two (of 350) small tiled sections being pulled across a network and 72 metres’ worth of them. As we started adding sections, it became apparent that browsers were struggling to cope. The first showable “fifth”, 77 sections, took a minute or so to come up on a standard browser over our generously quick ethernet, and this didn’t augur well for the public’s patience and slower connections once the entire object became available. Once all the photography and processing was done, we looked at increasing the sizes of the sections, meaning fewer images were required (we’d need as many tiles, but fewer individual places to hit), and it looked as though there was a slight improvement. Working on this rationale, we made the images as big as LUNA could handle (in practice a 750MB TIFF was the best we could do), taking us down to 41 sections. Still, though, we were looking at three minutes to load the whole item, and only on certain browsers, which generally had to be primed beforehand. It wasn’t unreasonable, as the full item is more than 1.6 million pixels long by 4,000 pixels wide: its total area of 6.8bn pixels suggested that, with 100 pixels of white space either side, we could give every man, woman and child on the planet a pixel of it!
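The back-of-envelope sums behind that claim, using the figures above:

```python
# Back-of-envelope sums for the figures quoted above.
length_px = 1_600_000  # long axis of the full stitched scroll
width_px = 4_000       # short axis
margin_px = 100        # white space on either side

total_px = length_px * (width_px + 2 * margin_px)
print(f"{total_px / 1e9:.2f} billion pixels")  # ~6.72bn, i.e. roughly 6.8bn
```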

Something did seem odd, though. Conversations with LUNA suggested that certain very wide items in the David Rumsey map collection were rendering instantly, and even a comparison with a single section of the scroll showed us the difference. It looked as though landscape and portrait items weren’t being processed the same way, but somehow phone calls did not get us to the bottom of it; it was only at the IIIF Conference at the Library of Congress, in a room with (among others) Drake Zabriskie, CTO of LUNA, that I was really able to explain the problem. Looking at the accompanying info.json, it became apparent that the Rumsey items scaled down by a factor of 128, meaning a 750-pixel tile could be rendered at 1/128 size if that was all that was needed: about 6 pixels. Our scroll only went to 8, meaning we were pulling in a chunk of nearly 100 pixels for our very smallest rendering (utterly unnecessary when the preview is so skinny it needs only a few pixels in width). I examined the widest item I could think of in our collection, Barker’s Panorama, and there was the proof: it was not a hugely high-resolution image, but it was scaling down to 32. It became obvious that LUNA was not serving up info.json calculations identically for landscape and portrait items. Drake was quickly able to confirm it, phoned his developers, and two days later provided us with a preview version of LUNA 7.4, which we patched in at the first possible opportunity.
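For the curious, the difference shows up in the “tiles” block of each image’s info.json. A sketch of the two cases, with values echoing the paragraph above (“tiles” and “scaleFactors” are real Image API 2.x properties; the exact value lists here are illustrative):

```python
# How the problem shows up in info.json. "tiles" and "scaleFactors" are
# real IIIF Image API 2.x properties; the values echo the cases above.
rumsey_like = {"tiles": [{"width": 750, "scaleFactors": [1, 2, 4, 8, 16, 32, 64, 128]}]}
our_scroll  = {"tiles": [{"width": 750, "scaleFactors": [1, 2, 4, 8]}]}

def smallest_tile(info):
    """Pixel width of a tile at the most zoomed-out level the server offers."""
    tile = info["tiles"][0]
    return tile["width"] / max(tile["scaleFactors"])

print(smallest_tile(rumsey_like))  # ~5.9px: a skinny preview costs almost nothing
print(smallest_tile(our_scroll))   # 93.75px: ~100px pulled in for the tiniest view
```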

I expected it to perform better, but probably still to suffer from network issues, particularly over wifi. When I loaded it up, though, I was really quite stunned to see how quickly it rendered, regardless of the device or internet speed. You can see it here:

https://librarylabs.ed.ac.uk/iiif/uv/?manifest=https://librarylabs.ed.ac.uk/iiif/manifest/mahabharataFinal.json

We’ve only really just started telling people about it, but it is already having an impact across the community. For example, I was very pleasantly surprised to see it picked up by Kanzaki, whose viewer does something out of the box that UV doesn’t seem to: navigation.

http://www.kanzaki.com/works/2016/pub/image-annotator?u=https://librarylabs.ed.ac.uk/iiif/manifest/mahabharataFinal.json

Where do we go from here? Obviously we will surface it through our iconics site once that goes live; we’ll make it prominent through collections.ed, and we will endeavour to make the Polyanno tool available to allow people to annotate the Sanskrit with correct translation and transcription. It’s shown what we’re capable of from a digitisation perspective, as well as being testament to hard work and good communication across a disparate team! Thanks again to Susan and Emily for providing Drake and me with the content.

Scott Renton, Library Digital Development
