On the second and final day here at the Trust in the Digital World conference in Madrid, I attended a panel debate on the subject of E-Health.
With speakers from SAP, ATOS and hospitals in Spain, the discussion focused mostly on sharing anonymised data for the benefit of research and action, while keeping patient data private.
Chair Volkmar Lutz, head of applied research at SAP, said the sector is seen as a difficult one with problems of trust and security; it is a “rare sector” in that risk is high and the data is sensitive, yet that data “affects all of us”.
Alberto Sanna, director of e-services at the Life and Health Scientific Institute in San Raffaele, stressed that the profile should belong to the individual, who approves the products and services their data is used with. The platform operates more in a “pull mode”: the patient chooses, rather than being pushed onto it. “You own the data and just use the platform to select what is needed for you,” he said.
Asked if users trust such services, Sanna said the institute is implementing a co-creation strategy, and that there is a lot of trust: users accept that mistakes are made and that the “offer is scientific”.
It all seems fairly sensible, but could someone in an incoherent state really approve which products and services their data is used in, or does this fall to the next of kin? Another speaker, Roberto Sanz from Quiron Hospital, the largest private hospital in Spain, said that patients are starting to care how their data is handled in order to obtain the best diagnosis.
He said: “We have a high amount of information on one patient. We generate a huge amount of information, but maybe want to store it for later for grandchildren who want information on our health and genetics. All information has to be important as we need an ability to put information together in a common source.”
Sanna made the point that in a hospital with 1,000 employees, there is a challenge around the “social circle of access to data”. Another point well made: if all the data is in one place, how can it be secure, and how can you know that access to it is approved and granted to the correct people? What if those people leave; is privileged access properly managed? This left me with too many questions!
ATOS secure ID analyst Ross Little said that data is collected through explicit consent, but this varies from nation to nation: in Italy you need written consent, while in the UK it is done by informed consent and can be a tickbox on a webpage.
“If you want to use Big Data it has to be in an anonymised format,” he said. “Information is intelligently analysed, and made available to carers etc. Through anonymisation we can offer Big Data and data-as-a-service to non-profit organisations.”
Little said that data protection regulations should be “looked beyond” and more focus put on ethics: this is not just about giving Big Data to any organisation willing to pay for it. There have to be reasons for wanting it, and it should not encourage profiling, which is against basic human rights.
“You do not want to store personally identifiable information in the cloud and it is hard to identify the individual, so we federate to the cloud using one-way cryptographic pseudonyms,” he said.
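Little did not go into the mechanics, but the usual construction for a one-way cryptographic pseudonym is a keyed hash: the organisation holds the key, so records in the cloud stay linkable to each other without being reversible to a name. A minimal sketch in Python, with a placeholder key and identifier format of my own rather than anything ATOS described:

```python
import hmac
import hashlib

# Hypothetical sketch: the secret key stays with the hospital (the identity
# provider) and is never shipped to the cloud, so cloud-side records cannot
# be reversed to a patient's identity without it.
SECRET_KEY = b"replace-with-a-randomly-generated-key"

def pseudonymise(patient_id: str) -> str:
    """Return a stable, non-reversible pseudonym for a patient identifier."""
    digest = hmac.new(SECRET_KEY, patient_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()

# The same input always maps to the same pseudonym, so records can still be
# linked across data sets, but the mapping cannot be inverted.
print(pseudonymise("NHS-123-456-7890"))  # made-up identifier format
```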
I feel we are getting somewhere: points were made on securing that Big Data, storing it in the right way with the right controls, and making sure it is not identifiable to the patient.
Last up was Dominik Bertram, development manager at the SAP Innovation Centre, who said that increasing amounts of patient data are needed for the benefit of medicine and for research projects. “What you would like is a unified repository and give access to those treating the patient and researchers, but you need access to make research more efficient,” he said.
Of course, you want that data to show common symptoms, perhaps by geographical location, but also common cures that have worked in the past. All anonymised, of course.
Bertram said that you want to anonymise as much data as possible, and acknowledged that there is a conflict between the utility of data and risk: you remove the attributes that are not useful or needed, and at some point the data set becomes safer.
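He did not describe an implementation, but the attribute-suppression trade-off he outlined is easy to sketch. Assuming made-up field names, paring a record down to only what a study needs might look like this:

```python
# Hypothetical sketch: keep only the attributes a study actually needs,
# trading some utility for lower re-identification risk. Field names are
# invented for illustration.
NEEDED_FOR_STUDY = {"age_band", "region", "diagnosis_code"}

def suppress_attributes(record: dict) -> dict:
    """Drop every attribute that is not explicitly allowed for the study."""
    return {k: v for k, v in record.items() if k in NEEDED_FOR_STUDY}

raw = {
    "name": "Jane Doe",       # direct identifier: removed
    "postcode": "28001",      # quasi-identifier: removed
    "age_band": "40-49",      # generalised attribute: kept
    "region": "Madrid",
    "diagnosis_code": "E11",
}
print(suppress_attributes(raw))
# {'age_band': '40-49', 'region': 'Madrid', 'diagnosis_code': 'E11'}
```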
He said that it is not possible for data to be completely anonymous, but that access should only be given to those with the appropriate privileges, and you can make a trade-off based on role.
He highlighted three levels of access and consent. The first allows access only to aggregated data sets, with queries run on at least 20 patients; he acknowledged this is hard to enforce, as you still want to keep the flexibility for researchers to define their own queries, and that can make it possible to identify someone. The second level is individual access to de-identified records, while the third is about identifying a person in order to give them treatment; to be useful the system has to adapt to this situation and allow movement between the levels, and that poses the challenge to privacy.
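No implementation was shown for the first level either, but the 20-patient floor can be illustrated with a simple guard on aggregate queries. The field names and query below are my own invention, not SAP's design:

```python
import statistics

MIN_COHORT = 20  # the minimum query size Bertram described

def aggregate_mean_age(records: list, predicate) -> tuple:
    """Return cohort size and mean age, refusing small cohorts.

    Hypothetical sketch: a query matching fewer than MIN_COHORT patients
    is rejected outright, so a narrowly-targeted query cannot single out
    an individual.
    """
    cohort = [r for r in records if predicate(r)]
    if len(cohort) < MIN_COHORT:
        raise PermissionError(
            f"query matched {len(cohort)} patients; minimum is {MIN_COHORT}"
        )
    return len(cohort), statistics.mean(r["age"] for r in cohort)

# Example usage (made-up fields): mean age of diabetes patients in one region.
# n, mean_age = aggregate_mean_age(
#     records, lambda r: r["diagnosis"] == "E11" and r["region"] == "Madrid"
# )
```

As Bertram noted, a floor like this is only a partial defence: researchers who can define arbitrary queries may still combine results to narrow in on an individual.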
Bertram concluded by saying that for every change, privacy has to be considered to “a sufficient degree”, and that any given access may well be legitimate in general, but is it legitimate for this situation, with this specific consent? “Every step is not facilitated or automated by IT,” he said.
He made a good point at the end: if access is made, can it be guaranteed that it is the right person accessing the data? The point I didn’t feel had been addressed was the security of the data itself, and what about accidental loss?
I asked the speakers about these points, and Bertram said: “There is a huge problem with security and I think that from our point of view, one of the first steps to improve that would be to have infrastructure to make it work, as if you don’t have that and they start putting things on USB drives, it is about making it as useful as possible so you cover as many mistakes as possible.
“That is the first step as then you can put it in a secure facility and manage it with access controls.”
I said this comes down to basic security management, and asked how hospitals are managing it. Sanna said that in some cases, by putting in too much security, you are under-serving an environment which is dynamic by definition. “It is more important to create an environment which is consistently secure enough for the purpose, as then you can create the strength of your defences,” he said. “It must be both free and strong, as you can have fantastic security and the weakness is the human factor.”
There were some interesting and valid points made here on the collection and use of healthcare data, but I cannot help feeling that the security of that data was an afterthought, and if that is the case, how can this be taken seriously?