End of an era – 2017-2020 RDM Roadmap Review (part 1)

Looking back on the three years that went into completing our RDM Roadmap, in this period of global pandemic and working from home, feels a bit anti-climactic. Nevertheless, the past three years have been an outstanding period of development for the University’s Research Data Service, and research culture has changed considerably toward openness, with a clearer focus on research integrity. Synergies between ourselves as service providers and researchers seeking RDM support have never been stronger, laying a foundation for potential partnerships in the future.

FAIR Roadmap Review Poster

A complete review was written for the service steering group in October last year (available on the RDM wiki to University members). This was followed by a poster and lightning talk prepared for the FAIR Symposium in December, which highlighted the aspects of the Roadmap that contributed to the FAIR principles for research data (findable, accessible, interoperable, reusable).

The Roadmap addressed not only the FAIR principles but also other high-level goals such as interoperability, data protection and information security (both related to GDPR), long-term digital preservation, and research integrity and responsibility. The review examined where we had achieved our SMART-style objectives and where we fell short, pointing to gaps either in provision or in take-up.

Highlights from the Roadmap Review

The 32 high-level objectives, each of which could have more than one deliverable, were grouped into five categories. In terms of Unification of the Service there were a number of early wins, including a professionally produced short video introducing the service to new users; a well-designed brochure serving the same purpose; case study interviews with our researchers, also in video format (a product of a local Innovation Grant project); and having our service components well represented in the holistic presentation of the Digital Research Services website.

Gaps include the continuing confusion between service components whose names all begin with ‘Data’ (DataStore, DataSync, DataShare, DataVault); the delay of an overarching service level definition covering all components; and the ten-year-old Research Data Policy. (The policy is currently being refreshed for consultation – watch this space.)

A number of Data Management Planning goals were in the Roadmap, from increasing uptake and building capacity for rapid support, to increasing the number of fully costed plans and ensuring the templates in DMPOnline were well tended. This category was a mixed success. Certainly the number of people seeking feedback on plans increased over time, and we were able to satisfy all requests and update the University template in DMPOnline. The message on cost recovery in data management plans was amplified by others such as the Research Office and school-based IT support teams; however, many research projects are still not passing RDM costs on to funders as they should.

Not many schools or centres have yet created DMP templates tailored to their own communities, with the Roslin Institute being an impressive exception; the large majority of schools still do not mandate a DMP with PhD research proposals, though GeoSciences and the Business School have taken this very seriously. The DMP training our team developed and delivered as part of scheduled sessions (now run virtually) was well taken up, more by research students than by staff. We managed to get software code management into the overall message, as well as the need for data protection impact assessments (DPIAs) for research involving human subjects, though a hurdle is the perceived burden of having to complete both a DPIA and a DMP for a single research project. A university-wide ethics working group has helped to make linkages to both through approval mechanisms, whilst streamlining approvals with a new tool.

In the category of Working with Active Data, both routine and extraordinary achievements were made, with fewer gaps against stated goals. Infrastructure refreshment has taken place on DataStore, for which cost recovery models have worked well. In some cases institutes have organised hardware purchases through the central service, providing economies of scale. DataSync (ownCloud) was upgraded. GitLab was introduced, eventually to replace Subversion for code versioning and other aspects of code management. This fitted well with the Data and Software Carpentry training offered by colleagues within the University to modernise the way researchers write code and clean data.
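To ground this in a concrete example, below is a minimal sketch of the kind of routine, scripted data-cleaning step that Carpentry-style training encourages, written in Python with pandas; the file names and column names are purely illustrative and do not refer to any real University dataset.

  # Illustrative data-cleaning sketch (hypothetical file and column names).
  import pandas as pd

  # Read a raw CSV export of survey responses (hypothetical file).
  raw = pd.read_csv("survey_responses_raw.csv")

  # Standardise column names so later code and documentation stay consistent.
  raw.columns = [c.strip().lower().replace(" ", "_") for c in raw.columns]

  # Remove rows lacking a participant identifier and drop exact duplicates.
  clean = raw.dropna(subset=["participant_id"]).drop_duplicates()

  # Write the cleaned copy out, keeping the raw file untouched for provenance.
  clean.to_csv("survey_responses_clean.csv", index=False)

Recording steps like these in a script, rather than editing files by hand, is exactly the sort of practice the Carpentry sessions promote, and it makes the eventual archiving or sharing of the data much easier to document.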

A number of incremental steps toward uptake of electronic notebooks were taken, with RSpace completing its two-year trial and an enterprise subscription, useful for research groups (not just labs), now being managed by Software Services. Another enterprise tool, protocols.io, was introduced as a trial, and the trial later extended. EDINA’s Noteable service for Jupyter Notebooks has also been showcased.

By far the most momentous achievement in this category was bringing into service the University Data Safe Haven, fulfilling the innocuous-sounding goal to “Provide secure setting for sensitive data and set up controls that meet ISO 27001 compliance and user needs.” An enormous effort from a very small team brought the trusted secure environment for research data to a soft launch at our annual Dealing with Data event in November 2018, with full ISO 27001 certification achieved by December 2019. The facility has been approved by a number of external data providers, including NHS bodies. Flexibility has been seen as a primary advantage, with individual builds for each research project and the ability for projects to define their own ‘gatekeeping’ procedures, depending on their requirements. Achieving complete sustainability on income from research grants, however, has not proven possible, given the expense and levels of expertise required to run this type of facility. Whether the University is prepared to continue to invest in this facility will likely depend on other options opening up to local researchers, such as the new DataLoch, which got its start from government funding through the Edinburgh and South East Scotland city region deal.

As for gaps in the Working with Active Data category, there were some expressions of dissatisfaction with pricing models for services offered under cost recovery, although our own investigation found them to be competitively priced. We found that researchers working with external partners, especially in countries with different data protection legislation, still struggle to find easy ways to collaborate with data. Centralised support for databases was never agreed on by the colleges because some already have good local support. Encryption is something that could benefit from a University key management system, but researchers are only offered advice and left to their own devices not to lose the keys to their research treasures; the pilot project that colleagues ran in this area was unfortunately not taken forward.

In part 2 of this blog post we will look at the remaining Roadmap categories of Data Stewardship and Research Data Support.

Robin Rice
Data Librarian and Head of Research Data Support
Library and University Collections

Research Data Workshops: DataVault Summary

Having soft-launched the DataVault facility in early 2019, the Research Data Support team, with the support of the project board, held five workshops in different colleges and locations to find out what the user community thought about it. This post summarises what we learned from participants, who were split roughly equally between researchers (mainly staff) and support professionals (mainly computing officers based in the Schools and Colleges).

Each workshop began with presentations and a demonstration by Research Data Service staff, explaining the rationale of the DataVault, what it should and should not be used for, how it works, how the University will handle long-term management of data assets deposited in the DataVault, and practicalities such as how to recover costs through grant proposals or get assistance to deposit.

After a networking lunch we held discussion groups, covering topics such as prioritisation of features and functionality; roles, such as the University as data asset owner; and the nature of the costs (pricing).

The team was relieved to learn that the majority (albeit from a somewhat self-selecting sample) agreed that the service fulfilled a real need: some data does need to be kept securely for a defined period to comply with research funders’ rules, and participants welcomed a centralised platform for doing this. The levels of usability and functionality we have managed to reach so far were met with somewhat less approval: clearly the development team has more work to do, and we are glad to have won further funding from the Digital Research Services programme in 2019-2020 in order to do it.

Attitudes toward university ownership of data assets were also a mixed bag; some were sceptical and wondered whether researchers would participate in such a scheme, but others found it a realistic option for dealing with staff turnover and the inevitability of data outlasting data owners. Attitudes toward cost were largely accepting (the DataVault provides a cheaper alternative to our baseline DataStore disk storage), but concerns about the safekeeping of legacy and unfunded research data were raised at each workshop.

A sample of points raised follows:

  • Utility? “Everyone I know has everything on OneDrive.”
  • Regarding prioritisation of features – security first; file integrity first; depositing data from sources other than DataStore; facilitating larger deposit sizes; ease of use.
  • Quickness of deposit and retrieval? Speed was deemed more important for deposit than for retrieval.
  • University as data asset owner?
    • Under GDPR the data are already university assets (because the Uni is the data controller).
    • People who manage the data should be close to the research; IT people can manage users but shouldn’t be making decisions about data. There is a danger that, because it’s related to IT, it gets dumped on IT officers. The formal review process helps to ensure decisions will be made properly. Build flexibility into the review hierarchy to allow for variation in school infrastructure.
    • When I heard that I was – not shocked – but concerned. If I move to another university how do I get access? This might be a problem. Researchers might prefer to retain three copies themselves.
  • Is the cost recovery mechanism valid?
    • Vault costs are legitimate costs.
    • Ideally should come from grant overheads, until then need to charge.
    • Possible to charge for small/medium/large project at start rather than per TB?
  • Is the 100 GB threshold sufficient for unfunded research? How else could unfunded or legacy data be covered (who pays)?
    • An ‘alumni sponsor a dataset’ scheme?
    • There will be people with a ‘whole bunch of data somewhere’ that would be more appropriately stored in DataVault.

The team is grateful to all of the workshop participants for their time and thoughts; the report will be considered further by the project board and the members of the Research Data Service Steering Group. The full set of workshop notes is colour-coded to show comments from different venues and is available to read on the RDM wiki for anyone with a University log-in (EASE).


Robin Rice
Data Librarian and Head, Research Data Support
Library & University Collections

University of Edinburgh Data Safe Haven: a new facility for sensitive data

Information Services has implemented a remote-access “Safe Haven” environment to protect data confidentiality, satisfy concerns about data loss, and reassure data controllers about the University’s secure management and processing of their data in compliance with data protection legislation.

The Data Safe Haven (DSH) provides a secure storage space and a secure analytic environment that is appropriate for research projects working with all kinds of sensitive data. It has its own firewall and is isolated from the University network. It is located in a secure facility with controlled access. All traffic between the DSH and the user’s computer is encrypted, and no internet access is available. Access to the DSH is restricted to authorised users via an assigned YubiKey and the secure VMware Horizon Client, and is only available from managed desktops that are whitelisted for access to the DSH.

A range of analytic and supporting applications (e.g. SPSS, Stata, SAS, MATLAB and R) is available. These are delivered dynamically and assigned to the project. The applications available to users will depend on the arrangement made with the DSH technical team prior to project registration and on the licensing arrangements with the software provider.

The initial DSH security review (penetration test) was carried out by a CREST-accredited organisation in August 2018. The DSH exhibited a good overall security stance and demonstrated resilience against the various types of tests performed by the consultants. This initial review formed part of our ongoing drive towards ISO 27001 certification. We expect to complete this phase of the project and obtain the certificate by November 2019.

We successfully closed the pilot phase of the DSH, with five projects, in October 2018, and soft-launched the service at our “Dealing with Data” conference in November 2018. At present, the DSH technical team is migrating the Centre for Clinical Brain Sciences’ National CJD Research and Surveillance project data from the walled garden into the DSH.

The DSH operates on a cost recovery basis, and this cost should be included in grant applications. We welcome enquiries from researchers as early as possible in their project planning. Costing is based on bespoke project requirements (see the DSH Overview for users at https://www.ed.ac.uk/is/data-safe-haven).

The DSH Operations team also provides:

  • advice and input for funding and permissions applications;
  • guidance on meeting Approved Researcher requirements;
  • advice about meeting data sharing requirements and archiving of data.

We can set up a demo environment for researchers on request to explore the use of the DSH for their projects. If you need further information, please contact the RDS Team via data-support@ed.ac.uk.

Cuna Ekmekcioglu, Data Safe Haven Manager, Research Data Service

Updates from the fourth meeting of the RDM Forum

Guest blog post by Ewa Lipinska

On 28th August members of the RDM Forum gathered in the stunning Old Library at the Department of Geography in the Old Infirmary building to hear the latest updates from the Research Data Service team and discuss all things data. It’d been a good few months since the last time we met, so the event presented us with the perfect opportunity to catch up on new developments, network with colleagues working on RDM in different parts of the University, and prepare ourselves for the new academic year, which will see the University take up a pivotal role in making Edinburgh the Data Capital of Europe.

We started off with an RDM update from Cuna Ekmekcioglu, who gave us an overview of developments to University research data services: the launch of the interim DataVault long-term retention service, continuing development of the Data Safe Haven aimed at research projects dealing with sensitive data, and a new release of DataShare which will allow larger datasets. We also learned about RDM training courses planned for the new academic year, most of which can be booked via MyEd.

Next, Pauline Ward gave a presentation which went into a bit more detail about the DataVault service, which allows researchers to comply with their funders’ requirements to preserve data for the long term in cases where the datasets cannot be made public. The current interim service requires a mediated deposit, which can be arranged by contacting data-support[at]ed.ac.uk. Comprehensive guidance on how to prepare your data before storing it in the DataVault can be found on the service website.

This was followed by a demonstration of the new Research Data Service promotional video, which outlines the range of tools and support offered by the team and which can be a very good resource for new members of staff who would like to find out about the types of services available. Diarmuid McDonnell, who presented the video, also gave us a quick overview of a recent project called Scoping Statistical Analysis Support, which looked at the demand for statistical analysis training among current postgraduate students. The final project report is full of current information about statistical training around the University.

We then went on to discuss the potential impact of data sharing, which tied in nicely with a recent panel discussion at Repository Fringe 2017 that focused on how repositories and associated services can support researchers in achieving and evidencing impact in preparation for the next Research Excellence Framework exercise (live notes from the day are available). Pauline Ward presented examples of popular public datasets by Edinburgh University researchers, described ways to access information about their usage, and talked about how datasets can be shared more widely to engage external audiences, which may lead to potential impact. Even though, on their own, research data usage statistics are not enough to demonstrate significant impact beyond academia, they are a good (though perhaps still slightly overlooked) starting point for tracking how and by whom datasets are used, and how that benefits individuals and communities.

The meeting concluded with a presentation by Robin Rice, who shared with us the draft Research Data Service Roadmap. As the goals set out in the previous roadmap have now largely been achieved, the time has come to look to the future and identify new objectives for the next few years. It was interesting to hear about the team’s long-term plans which include unification of the service (aiming to ensure the best user experience and interoperability between systems), advocacy of data management planning, support around active data, enhanced data stewardship, improved communications and more training opportunities.

Overall, it was a very useful and informative meeting, and I’d very much encourage anyone interested in research data management and sharing to join us next time. In the meantime Cuna’s slides, together with lots of other useful resources and points for discussion, are available on the RDM SharePoint (access on request).

Ewa Lipinska
Research Outcomes Co-Ordinator
College of Arts, Humanities and Social Sciences