End of an era – 2017-2020 RDM Roadmap Review (part 1)

Looking back on the three years that went into completing our RDM Roadmap, in this period of global pandemic and working from home, feels a bit anticlimactic. Nevertheless, those three years were an outstanding period of development for the University’s Research Data Service, and research culture has shifted considerably toward openness, with a clearer focus on research integrity. Synergies between ourselves as service providers and researchers seeking RDM support have never been stronger, laying a foundation for potential partnerships in the future.

FAIR Roadmap Review Poster

A complete review was written for the service steering group in October last year (available on the RDM wiki to University members). This was followed by a poster and lightning talk prepared for the FAIR Symposium in December, highlighting the aspects of the Roadmap that contributed to the FAIR principles for research data (findable, accessible, interoperable, reusable).

The Roadmap addressed not only the FAIR principles but other high-level goals such as interoperability, data protection and information security (both related to GDPR), long-term digital preservation, and research integrity and responsibility. The review examined where we had achieved SMART-style objectives and where we fell short, pointing to gaps either in provision or in take-up.

Highlights from the Roadmap Review

The 32 high-level objectives, each of which could have more than one deliverable, were grouped into five categories. In terms of Unification of the Service there were a number of early wins, including a professionally produced short video introducing the service to new users; a well-designed brochure serving the same purpose; case study interviews with our researchers, also in video format – a product of a local Innovation Grant project; and having our service components well represented in the holistic presentation of the Digital Research Services website.

Gaps include the continuing confusion about service components whose names start with ‘Data’ (DataStore, DataSync, DataShare, DataVault); the delay of an overarching service level definition covering all components; and the ten-year-old Research Data Policy. (The policy is currently being refreshed for consultation – watch this space.)

A number of Data Management Planning goals were in the Roadmap, from increasing uptake, to building capacity for rapid support, to increasing the number of fully costed plans, and ensuring templates in DMPOnline were well tended. This was a category of mixed success. Certainly the number of people seeking feedback on plans increased over time, and we were able to satisfy all requests and update the University template in DMPOnline. The message on cost recovery in data management plans was amplified by others such as the Research Office and school-based IT support teams; however, many research projects are still not passing on RDM costs to funders as needed.

Few schools or centres have yet created DMP templates tailored to their own communities, with the Roslin Institute being an impressive exception; the large majority of schools still do not mandate a DMP with PhD research proposals, though GeoSciences and the Business School have taken this very seriously. The DMP training our team developed and gave as part of scheduled sessions (now delivered virtually) was well taken up, more by research students than staff. We managed to get software code management into the overall message, as well as the need for data protection impact assessments (DPIAs) for research involving human subjects, though a hurdle is the perceived burden of having to complete both a DPIA and a DMP for a single research project. A university-wide ethics working group has helped to make linkages to both through approval mechanisms, whilst streamlining approvals with a new tool.

In the category of Working with Active Data, both routine and extraordinary achievements were made, with fewer gaps on stated goals. Infrastructure refreshment has taken place on DataStore, for which cost recovery models have worked well. In some cases institutes have organised hardware purchases through the central service, providing economies of scale. DataSync (OwnCloud) was upgraded. GitLab was introduced, eventually to replace Subversion for code versioning and other aspects of code management. This fitted well with the Data and Software Carpentry training offered by colleagues within the University to modernise ways of coding and cleaning data (a small illustration of the latter is sketched below).
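
As a small illustration of the kind of data cleaning covered in Carpentry-style lessons, here is a minimal Python sketch using pandas. The file and column names are hypothetical examples rather than anything from a real University dataset.

# Minimal data-cleaning sketch in the spirit of Data Carpentry lessons.
# File and column names are hypothetical, not a real dataset.
import pandas as pd

# Load a raw CSV export
df = pd.read_csv("survey_raw.csv")

# Standardise column names: strip whitespace, lower-case, replace spaces
df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_")

# Drop exact duplicate rows and records missing a key identifier
df = df.drop_duplicates()
df = df.dropna(subset=["participant_id"])

# Write the cleaned version alongside the original; never overwrite raw data
df.to_csv("survey_clean.csv", index=False)

Keeping steps like these in a script (ideally under version control in GitLab) rather than editing files by hand is exactly the habit the Carpentry sessions try to instil.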

A number of incremental steps toward uptake of electronic notebooks were taken, with RSpace completing its two-year trial and enterprise subscriptions, useful for research groups (not just labs), now being managed by Software Services. Another enterprise tool, protocols.io, was introduced and its trial extended. EDINA’s Noteable service for Jupyter Notebooks was also showcased.

Far and away the most momentous achievement in this category was bringing into service the University Data Safe Haven, fulfilling the innocuous-sounding goal to “Provide secure setting for sensitive data and set up controls that meet ISO 27001 compliance and user needs.” An enormous effort from a very small team brought this trusted secure environment for research data to a soft launch at our annual Dealing with Data event in November 2018, with full ISO 27001 certification achieved by December 2019. The facility has been approved by a number of external data providers, including NHS bodies. Flexibility has been seen as a primary advantage, with individual builds for each research project and the ability for projects to define their own ‘gatekeeping’ procedures, depending on their requirements. Achieving complete sustainability on income from research grants, however, has not proven possible, given the expense and levels of expertise required to run this type of facility. Whether the University is prepared to continue to invest in the facility will likely depend on other options opening up to local researchers, such as the new DataLoch, which got its start from government funding in the Edinburgh and South East Scotland region ‘city deal’.

As for gaps in the Working with Active Data category, there were some expressions of dissatisfaction with pricing models for services offered under cost recovery, although our own investigation found them to be competitively priced. We found that researchers working with external partners, especially in countries with different data protection legislation, continue to struggle to find easy ways to collaborate with data. Centralised support for databases was never agreed on by the colleges, because some already have good local support. Encryption is an area that could benefit from a University key management system, but researchers are currently only offered advice and left to their own devices not to lose the keys to their research treasures; the pilot project that colleagues ran in this area was unfortunately not taken forward.
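
To illustrate why key management matters, here is a minimal sketch of symmetric file encryption using the Python ‘cryptography’ package (a common choice, not a University-endorsed tool; the file names are hypothetical). If the key file is lost, the encrypted data is gone for good – which is precisely the problem a key management service would address.

# Sketch of file encryption with the 'cryptography' package (pip install cryptography).
# File names are hypothetical; losing key.bin makes the data unrecoverable.
from cryptography.fernet import Fernet

# Generate a key once and store it safely, separately from the data
key = Fernet.generate_key()
with open("key.bin", "wb") as f:
    f.write(key)

# Encrypt a data file
fernet = Fernet(key)
with open("interviews.csv", "rb") as f:
    ciphertext = fernet.encrypt(f.read())
with open("interviews.csv.enc", "wb") as f:
    f.write(ciphertext)

# Decryption requires exactly the same key
plaintext = Fernet(key).decrypt(ciphertext)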

In part 2 of this blog post we will look at the remaining Roadmap categories of Data Stewardship and Research Data Support.

Robin Rice
Data Librarian and Head of Research Data Support
Library and University Collections

Dealing With Data 2018: Summary reflections

The annual Dealing With Data conference has become a staple of the University’s data-interest calendar. In this post, Martin Donnelly of the Research Data Service gives his reflections on this year’s event, which was held in the Playfair Library last week.

One of the main goals of open data and Open Science is reproducibility, and our excellent keynote speaker, Dr Emily Sena, highlighted the problem of translating research findings into real-world clinical interventions that can be relied upon to actually help humans. Further challenges were echoed by other participants over the course of the day, including the relative scarcity of negative results being reported – an effect of policy, and of well-established and probably outdated reward and recognition structures. Emily also gave us a useful slide on obstacles, which I will certainly want to revisit: examples cited included a lack of rigour in grant awards and a lack of incentives for doing anything different to the status quo. Indeed, Emily described some of what she called the “perverse incentives” associated with scholarship, such as publication, funding and promotion, which can draw researchers’ attention away from the quality of their work and its benefits to society.

However, Emily reminded us that the power to effect change does not just lie in the hands of the funders, governments, and at the highest levels. The journal of which she is Editor-in-Chief (BMJ Open Science) has a policy commitment to publish sound science regardless of positive or negative results, and we all have a part to play in seeking to counter this bias.

A collage of the event speakers, courtesy Robin Rice (CC-BY)

In terms of other challenges, Catriona Keerie talked about the problem of transferring and processing inconsistent file formats between health boards, causing me to wonder whether it was a question of open vs closed formats, and how such a situation might have been averted, e.g. via planning, training (and awareness raising, as Roxanne Guildford noted), adherence to the 5-star Open Data scheme (where the third star is awarded for using open formats – a toy conversion example is sketched below), or something else. Emily earlier noted confusion about which tools are useful – and this is a role for those of us who provide tools, and for people like myself and my colleague, Digital Research Services Lead Facilitator Lisa Otty, who seek to match researchers with the best tools for their needs. Catriona also reminded us that data workflow and governance are iterative processes: we should always be fine-tuning these and responding to new and changing needs.
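
To make the open-formats point concrete, converting a proprietary spreadsheet to plain CSV is a one-step job in most scripting languages. A minimal Python sketch follows, assuming pandas (with openpyxl) is installed; the file and sheet names are made up for illustration.

# Sketch: exporting a proprietary spreadsheet to an open, plain-text format.
# Assumes pandas and openpyxl are installed; file and sheet names are hypothetical.
import pandas as pd

# Read the Excel workbook (closed format)
df = pd.read_excel("results.xlsx", sheet_name="trial_data")

# Write it out as CSV (open format), explicitly UTF-8 encoded
df.to_csv("trial_data.csv", index=False, encoding="utf-8")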

Another theme of the first morning session was the question of achieving balances and trade-offs in protecting data while keeping it useful. A question from the floor noted the importance of recording and justifying how these balancing decisions are made. David Perry and Chris Tuck both highlighted the need to strike a balance, for example, between usability/convenience and data security. Chris spoke about dual testing of data: is it anonymous? is it useful? Ideally it will be both, but being both may not always be possible.
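
Chris’s dual test can be made concrete with a toy check: replace direct identifiers with salted hashes, then see whether combinations of quasi-identifiers still single individuals out. The sketch below is illustrative only – the column names and salt are hypothetical, and real anonymisation decisions need expert review.

# Toy illustration of the 'anonymous vs useful' dual test.
# Column names and the salt are hypothetical; not a substitute for expert review.
import hashlib
import pandas as pd

df = pd.read_csv("cohort.csv")

# Pseudonymise the direct identifier with a salted hash
SALT = "project-specific-secret"
df["pid"] = df["nhs_number"].astype(str).apply(
    lambda s: hashlib.sha256((SALT + s).encode()).hexdigest()
)
df = df.drop(columns=["nhs_number"])

# Crude anonymity check: do quasi-identifiers still single people out?
quasi = ["postcode_district", "year_of_birth", "sex"]
group_sizes = df.groupby(quasi).size()
print("Smallest group size (k):", group_sizes.min())  # small k means high re-identification risk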

This theme of data privacy balanced against openness was taken up in Simon Chapple’s presentation on the Internet of Things. I particularly liked the section on office temperature profiles, which was very relevant to those of us who spend a lot of time in Argyle House where – as in the Playfair Library – ambient conditions can leave something to be desired. I think Simon’s slides used the phrase “Unusual extremes of temperatures in micro-locations.” Many of us know from bitter experience what he meant!

There is of course a spectrum of openness, just as there are grades of abstraction between the thing we are observing or measuring and the data that represents it. Bert Remijsen’s demonstration showed that access to sound recordings – which, compared with transcriptions and phonetic renderings, are much closer to the data source (what Kant would call the thing-in-itself, das Ding an sich, as opposed to the phenomenon, the thing as it appears to an observer) – is hugely beneficial to linguistic scholarship. Reducing such layers of separation or removal is both a subsidiary benefit of, and a rationale for, openness.

What it boils down to is the old storytelling adage: “Don’t tell, show.” And as Ros Attenborough pointed out, openness in science isn’t new – it’s just a new term, and a formalisation of something intrinsic to Science: transparency, reproducibility, and scepticism. By providing access to our workings and the evidence behind publications, and by joining these things up – as Ewan McAndrew described, linked data is key (this is the fifth star in the aforementioned 5-star Open Data scheme) – Open Science, and all its various constituent parts, support this goal, which is after all one of the goals of research and of scholarship. The presentations showed that openness is good for Science; our shared challenge now is to make it good for scientists and other kinds of researchers. Because, as Peter Bankhead says, Open Source can be transformative – and so can Open Data and Open Science. I fear that we don’t emphasise these opportunities enough, and we should seek to provide compelling evidence for them via real-world examples. Opportunities like the annual Dealing With Data event make a very welcome contribution in this regard.

PDFs of the presentations are now available in the Edinburgh Research Archive (ERA). Videos from the day are published on MediaHopper.


Martin Donnelly
Research Data Support Manager
Library and University Collections
University of Edinburgh

New video: the benefits of RDM training

A big part of the Research Data Service’s role is to provide a mixture of online and in-person training courses (both general and tailored) on Research Data Management (RDM) to all University research staff and students.

In this video, PhD student Lis talks about her experiences of accessing both our online training and attending some of our face-to-face courses. Lis emphasises how valuable both of these can be to new PhD candidates, who may well be applying RDM good practice for the first time in their career.

[youtube]https://youtu.be/ycCiXoJw1MY[/youtube]

It is interesting to see Lis reflect on how these training opportunities made her think about how she handles data on a daily basis, bringing a realisation that much of her data was sensitive and therefore needed to be safeguarded in an appropriate manner.

Our regularly scheduled face-to-face training courses are run through both Digital Skills and the Institute for Academic Development and are open to all research staff and students. In addition, we create and provide bespoke training courses for schools and research groups based on their specific needs. Online training is delivered via MANTRA and the Research Data Management MOOC, which we developed in collaboration with the University of North Carolina.

In the video Lis also discusses her experiences using some RDS tools and services, such as DataStore for storing and backing up her research data to prevent data loss, and contacting our team for timely support in writing a Data Management Plan for her project.

If you would like to learn more about any of the things Lis mentions in her interview, visit the RDS website; to discuss bespoke training for your school or research centre/group, please contact us via data-support@ed.ac.uk.

Kerry Miller
Research Data Support Officer
Library and University Collections
The University of Edinburgh

New team members, new team!

Time has passed, so inevitably we have said goodbye to some and hello to others on the Research Data Support team. Amongst other changes, all of us are now based together in Library & University Collections – organisationally, that is, while remaining located in Argyle House with the rest of the Research Data Service providers such as IT Infrastructure. (For an interview with the newest team member there, David Fergusson, Head of Research Services, see this month’s issue of BITS.)

So two teams have come together under Research Data Support as part of Library Research Support, headed by Dominic Tate in L&UC. Those of us leaving EDINA and Data Library look back on a rich legacy dating back to the early 1980s when the Data Library was set up as a specialist function within computing services. We are happy to become ‘mainstreamed’ within the Library going forward, as research data support becomes an essential function of academic librarianship all over the world*. Of course we will continue to collaborate with EDINA for software engineering requirements and new projects.

Introducing –

Jennifer Daub has worked in a range of research roles, from lab-based parasite genomics at the University of Edinburgh to bioinformatics at the Wellcome Trust Sanger Institute. Prior to joining the team, Jennifer provided data management support to users of clinical trials management software across the UK and is experienced in managing sensitive data.

As Research Data Service Assistant, Jennifer has joined veterans Pauline Ward and Bob Sanders in assisting users with DataShare and Data Library as well as the newer DataVault and Data Safe Haven functions, and additionally providing general support and training along with the rest of the team.

Catherine Clarissa is doing her PhD in Nursing Studies at the University of Edinburgh. Her study looks at patients’ and staff’s experiences of early mobilisation during mechanical ventilation in an Intensive Care Unit. She has a solid grounding in Research Data Management good practice, which she has expanded by taking training from the University and by developing a Data Management Plan for her own research.

As Project Officer she is working closely with project manager Pauline Ward on the Video Case Studies project, funded by the IS Innovation Fund over the next few months. We have invited her to post to the blog about the project soon!

Last but not least, Martin Donnelly will be joining us from the Digital Curation Centre, where he has spent the last decade helping research institutions raise their data management capabilities via a mixture of paid consultancy and pro bono assistance. He has a longstanding involvement in data management planning and policy, and interests in training, advocacy, holistic approaches to managing research outputs, and arts and humanities data.

Before joining Edinburgh in 2008, Martin worked at the University of Glasgow, where he was involved in European cultural heritage and digital preservation projects, and the pre-merger Edinburgh College of Art where he coordinated quality and accreditation processes. He has acted as an expert reviewer for European Commission data management plans on multiple occasions, and is a Fellow of the Software Sustainability Institute.

We look forward to Martin joining the team next month, when he will take up the role of Research Data Support Manager, providing expertise and line management support to the team as well as senior-level support to the service owner, Robin Rice, and to the Data Safe Haven Manager, Cuna Ekmekcioglu – who recently shifted to that role from leading on training and outreach. Kerry Miller, Research Data Support Officer, is actively picking up those duties and making new contacts throughout the University to find new avenues for the team’s outreach and training delivery.

*The past and present rise of data librarianship within academic libraries is traced in the first chapter of The Data Librarian’s Handbook, by Robin Rice and John Southall.

Robin Rice
Data Librarian and Head, Research Data Support
Library & University Collections