Large scale data transport service launched

Research Services, IT Infrastructure Division, is pleased to report the completion of a project that allows researchers to transfer terabytes of data between the University of Edinburgh and external collaborators. The service uses a transport mechanism known as Globus, which sets up multiple connections between host and client rather than relying on a single point-to-point connection. As a result, very large datasets are transferred between sites in parallel, making transfers faster.
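The parallel-stream idea can be sketched in a few lines of Python. This is only an illustration of the technique (splitting a file into byte ranges and moving the ranges concurrently), not how Globus itself is implemented; the function names are our own.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def copy_chunk(src, dst, offset, length):
    """Copy one byte range; each worker handles an independent range,
    loosely analogous to one of several parallel transfer streams."""
    with open(src, "rb") as fin, open(dst, "r+b") as fout:
        fin.seek(offset)
        fout.seek(offset)
        fout.write(fin.read(length))

def parallel_copy(src, dst, streams=4):
    """Copy src to dst using several concurrent workers."""
    size = os.path.getsize(src)
    # Pre-allocate the destination so workers can write at any offset.
    with open(dst, "wb") as f:
        f.truncate(size)
    chunk = -(-size // streams)  # ceiling division
    with ThreadPoolExecutor(max_workers=streams) as pool:
        futures = [
            pool.submit(copy_chunk, src, dst, off, min(chunk, size - off))
            for off in range(0, size, chunk)
        ]
        for fut in futures:
            fut.result()  # surface any worker errors
```

On a single local disk this buys little; the benefit of the real service comes when each stream is a separate network connection, so no one slow path limits the whole transfer.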

The service is integrated with the University’s research data platform, DataStore, allowing researchers to designate folders as “endpoints” for transfers. Many users have already taken advantage of the service, but note that it will not improve data transfer speeds within the University itself; rather, it mitigates bottlenecks in the wider Internet.

For more information, University of Edinburgh users may view the RSS Wiki.

Mike Wallis
IS ITI

New feature: Sharing DataVault data with an external user

By popular demand, the Research Data Service is pleased to announce the arrival of a brand new feature: the DataVault Outward Staging Area (DOSA), a free-of-charge benefit to DataVault depositors.

Engraving depicting a stagecoach with people in front of a building

What is a staging area? Somewhere your data can be held temporarily, on the way to somewhere else. Just like a traditional staging post for stagecoaches, as shown in this engraving.

Imagine: your multi-terabyte dataset is safe-and-sound in your vault, you’ve cited it in a paper you’ve just published, and an external researcher has asked you for a copy. What will you do?

Simple: send a request to IS Helpline (or data-support@ed.ac.uk) asking us to create a DOSA folder for your data.

We’ll then use DOSA to give temporary (two months) external access to a copy of your deposit, using a Globus endpoint. We’ll retrieve a copy of your data to the folder and provide you with the Globus endpoint details, which you send to the researcher. They may need to install some software to get the data. Alternatively, for datasets under 500 GB, a DataSync link may be more suitable; we set that up and provide it to you in the same way as the Globus endpoint. The difference for the end user is that they can use the DataSync link (plus password) from their browser. Let us know whether you prefer a Globus endpoint or a DataSync link (otherwise we’ll decide based on the size).

 

Workflow diagram showing data moving from DataVault into DOSA, and from DOSA out to DataSync or globus

Workflow: We retrieve your deposit to your DOSA folder. We provide you with either a Globus endpoint or a DataSync link, to pass on to the external researcher who made the request.

The DOSA is part of our networked active data storage, DataStore, but it is separate from the staging area we provide for users making a new deposit (‘the DataVault staging area’), which serves the inward route.

Since 2016 researchers have been archiving data in Edinburgh DataVault. The DOSA is available for any DataVault deposit, old or new.

DataVault Outward Staging Area (DOSA): Sharing data with an external user

Not sure you’ll remember the name of the service? Worry not! I have a mnemonic device for you: just remember that a ‘dosa’ is an Indian savoury pancake. What’s not to like?

Photo of a folded dosa pancake on a tray with dishes of savoury sauces.

Pauline Ward
Data Repository Operations Officer
University of Edinburgh

DataShare awarded CoreTrustSeal trustworthy repository status

CoreTrustSeal has recognised Edinburgh DataShare as a trustworthy repository.

What does this mean for our depositors? It means you can rest assured that we look after your data very carefully, in line with stringent internationally-recognised standards. We have significant resources in place to ensure your dataset remains available to the academic community and the general public at all times. We also have digital preservation expertise and well-planned processes in place, to protect your data from long-term threats. The integrity and reusability of your data are a priority for the Research Data Service.

Book to attend our practical “Archiving your Research Data” course

The certification involves an in-depth evaluation of the resilience of the repository, looking at procedures, infrastructure, staffing, discoverability, digital preservation, metadata standards and disaster recovery. This rigorous process took the team over a year to complete, and prompted a good deal of reflection on the robustness of our repository. We compiled responses to sixteen requirements, a task which I co-ordinated. The finished application contained over ten thousand words, and included important contributions from colleagues in the Digital Library team and from the University’s Digital Archivist, Sara Thomson.

Our CoreTrustSeal application in full   

The CTS is a prestigious accreditation, held by many national organisations such as the National Library of Scotland, the UK’s Centre for Environmental Data Analysis and UniProt. Ours is the first institutional research data repository in the UK to receive the CoreTrustSeal (the Cambridge Crystallographic Data Centre has the CTS but, in contrast to DataShare, is a disciplinary repository which archives data from the international research community).

DataShare is a trustworthy repository, where you as a researcher (staff or student) at the University of Edinburgh can archive your research data free of charge. Bring us your dataset – up to 100 GB(!) – and we will look after it well, to maximise its discoverability and its potential for reuse, both in the immediate term and long beyond the lifetime of your research project.

Edinburgh DataShare

All CTS certified repositories

circular logo bearing a tick mark and the words 'Core Trust Seal'

The Research Data Support team has earned the right to display this CTS logo on the DataShare homepage

Pauline Ward
Research Data Support Assistant
Library & University Collections

University of Edinburgh’s new Research Data Management Policy

Following a year-long consultation with research committees and other stakeholders, a new RDM Policy (www.ed.ac.uk/is/research-data-policy) has replaced the landmark 2011 policy, authored by former Digital Curation Centre Director Chris Rusbridge, which was believed to be a first for UK universities at the time. The original policy (doi: 10.7488/era/1524) was so novel that it was labelled ‘aspirational’ by those who passed it.

"Policy"

CC-BY-SA-2.0, Sustainable Economies Law Centre, flickr

RDM has come a long way since then, as has the University Research Data Service which supports the policy and the research community. The expectation that a data management plan will accompany a research proposal has become commonplace, and the importance of data sharing has become more widely accepted in that time, with funders’ policies becoming more harmonised (witness UKRI’s 2016 Concordat on Open Research Data).

What has changed?

Although a bit longer (the first policy was ten bullet points and fitted on a single page!), the new policy adds clarity about the University’s expectations of researchers (both staff and students), introduces important concepts such as making data FAIR (explained below), and grounds the policy in other key University commitments and policies such as research integrity, data protection, and information security (with references included at the end). Software code, so important for research reproducibility, is included explicitly.

CC BY 2.0, Big Data Prob, KamiPhuc on flickr

Definitions of research data and research data management are included, as well as specific references to some of the service components that can help – DMPOnline, DataShare, etc. A commitment to review the policy every five years, or sooner if needed, is stated, so another ten years doesn’t fly by unnoticed. Important policy references are provided with links. The policy has graduated from aspirational – the word “must” occurs twelve times, and “should” fifteen times. Yet academic freedom and researcher choice remain a basic principle.

Key messages

In terms of responsibilities, there are three named entities:

  • The Principal Investigator retains accountability, and is responsible as data owner (and data controller when personal data are collected) on behalf of the University. Responsibility may be delegated to a member of a project team.
  • Students should adhere to the policy/good practice in collecting their own data. When not working with data on behalf of a PI, individual students are the data owner and data controller of their work.
  • The University is responsible for raising awareness of good practice, provision of useful platforms, guidance, and services in support of current and future access.

Data management plans are required:

  • Researchers must create a data management plan (DMP) if any research data are to be collected or used.
  • Plans should cover data types and volume, capture, storage, integrity, confidentiality, retention and destruction, sharing and deposit.
  • Research data management plans must specify how and when research data will be made available for access and reuse.
  • Additionally, a Data Protection Impact Assessment is required whenever data pertaining to individuals are used.
  • Costs such as extra storage, long-term retention, or data management effort must be addressed in research proposals (so as to be recovered from funders where eligible).
  • A University subscription to the DMPOnline tool guides researchers in creating plans, with funder and University templates and guidance; users may request assistance in writing or reviewing a plan from the Research Data Service.

FAIR data sharing is more nuanced than ‘open data’:

  • Publicly funded research data should be made openly available as soon as possible with as few restrictions as necessary.
  • Principal Investigators and research students should consider how they can best make their data FAIR (findable, accessible, interoperable, reusable) in their Data Management Plans.
  • Links to relevant publications, people, projects, and other research products such as software or source code should be provided in metadata records, with persistent identifiers when available.
  • Discoverability and access by machines are considered as important as access by humans. Standard open licences should be applied to data and code deposits.
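As a sketch of what machine-readable, FAIR-oriented metadata looks like in practice, here is a minimal dataset record. The field names loosely follow DataCite conventions and the identifiers are hypothetical placeholders; this is an illustration, not a required University schema.

```python
# A minimal, illustrative dataset metadata record. Field names loosely
# follow DataCite conventions; identifiers below are hypothetical.
record = {
    "identifier": "https://doi.org/10.XXXX/example-dataset",  # persistent identifier
    "title": "Example survey dataset",
    "creators": [{"name": "Researcher, A.", "orcid": "0000-0000-0000-0000"}],
    "license": "CC-BY-4.0",  # a standard open licence
    "relatedIdentifiers": [
        # Links to the publication and the analysis code, as recommended.
        {"relationType": "IsSupplementTo",
         "identifier": "https://doi.org/10.XXXX/example-paper"},
        {"relationType": "IsDerivedFrom",
         "identifier": "https://github.com/example/analysis-code"},
    ],
}
```

Because every link is a resolvable persistent identifier and the licence is a standard machine-readable token, both a human reader and a harvesting service can follow the record from dataset to paper to code.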

Use data repositories to achieve FAIR data:

  • Research data must be offered for deposit and retention in a national or international data service or domain repository, or a University repository (see next bullet).
  • PIs may deposit their data for open access for all (with or without a time-limited embargo) in Edinburgh DataShare, a University data repository; or DataVault, a restricted access long-term retention solution.
  • Research students may deposit a copy of their (anonymised) data in Edinburgh DataShare while retaining ownership.
  • Researchers should add a dataset metadata record in Pure to data archived elsewhere, and link it to other research outputs.
  • Software code relevant to research findings may be deposited in code repositories such as GitLab or GitHub (cloud).

Consider rights in research data:

  • Researchers should consider the rights of human subjects, as well as the rights of citizen scientists, external collaborators, and the public to have access to their data.
  • When open access to datasets is not legal or ethical (e.g. sensitive data), information governance and restrictions on access and use must be applied as necessary.
  • The University’s Research Office can assist with providing templates for both incoming and outgoing research data and the drafting and negotiation of data sharing agreements.
  • Exclusive rights to reuse or publish research data must not be passed to commercial publishers.

Robin Rice
Data Librarian and Head, Research Data Support
Library & University Collections