DeepMind's access to Royal Free data deemed inappropriate

Posted May 17, 2017

Stephen Powis, the Royal Free trust's medical director, was told in December that Dame Fiona Caldicott, the National Data Guardian (NDG), "did not believe that when the patient data was shared with Google DeepMind, implied consent for direct care was an appropriate legal basis".

Her opinion suggests that the Royal Free's basis for sharing the patient data with the AI company may not have been lawful.

Under common law, patients are taken to have given implied consent to their information being shared if it is shared for the purpose of "direct care". However, the NDG points out that DeepMind was not using the medical records to provide direct care, but to train and test the medical app it has been developing for the NHS.

The trust's agreement with DeepMind hit the headlines in April 2016, when New Scientist reported that the AI firm had been given access to five years' worth of data covering 1.6 million patients, most of whom had not had acute kidney injury.

In a letter to Powis, leaked to Sky News, Caldicott says that the data received from the hospital by the London-based AI research lab, without the patients' knowledge, did not follow guidelines on implied consent. The app, Streams, is nonetheless now in use at the Royal Free, which says it is helping clinicians provide better, faster care to its patients.

The Information Commissioner's Office (ICO) is investigating the transfer of data.

"My considered opinion therefore remains that it would not have been within the reasonable expectation of patients that their records would have been shared for this objective", she says.

Dr Julia Powles, a researcher at Cornell Tech in New York and an expert in technology law, said there were "fundamental errors" at the start of the data-sharing project and warned that these could put other data-sharing deals in "real peril".

As with all information-sharing agreements with non-NHS organisations, patients can opt out of any data-sharing system by contacting the trust's data protection officer.

Google has previously said that the data is encrypted, will not be used commercially, that its staff are unable to personally identify patients, and that the data is kept separate from other Google data.

"Taking into account what you have now clarified, it is my view and that of my panel that the objective for the transfer of 1.6 million identifiable patient records to Google DeepMind was for the testing of the Streams application, and not for the provision of direct care to patients".

Nicola Perrin, who leads a patient data task force at the Wellcome Trust, says: "New digital technologies, such as the DeepMind Streams app, offer real potential to provide better clinical care, but there must be appropriate governance so that everyone can have confidence that patient data is being used responsibly".

The Royal Free and DeepMind did not obtain explicit consent from patients, but argued that they had "implied consent", which is considered to apply when data is shared for the purpose of direct care.

Caldicott goes on to state that she is writing to the ICO to communicate her advice, feeding into its ongoing investigation of the data-sharing arrangement.

DeepMind said: "No responsible hospital would ever deploy a system that hadn't been thoroughly tested. Such testing is essential, but there must be clarity about the regulatory framework and transparency for patients. We're glad the NDG has said that further guidance would be useful to organisations which are undertaking work to test new technologies".

However, the letter also reveals that while only "synthetic data (non-identifiable dummy data)" was used in the design and development of the Streams product, confidential patient information was used during testing.

"We also recognise that there needs to be much more public engagement and discussion about new technology in the NHS".