January 20, 2022

DataEthics4All’s Ethics 1st Live: The Ethics of For-Profit Healthcare Apps

“There are people in our community who do not understand how the technology works, but trust the technology so much – and I think that is the biggest vulnerability.”

– Susanna Raj

“If we are to rely on tele-health apps, it might be that we’re not only using it for diagnosis but we’re actually using it for informing the dosage as well, and that could put us at risk.”

– Sam Wigglesworth

Talk Summary

DataEthics4All brings this Series of Ethics 1stᵀᴹ Live Talks for Leaders with an Ethics 1stᵀᴹ Mindset, those who put People above Profits.

In this week’s discussion, we explore ‘The Ethics of Healthcare Apps’: the regulations that apply to for-profit healthcare apps; vulnerability, trust, and manipulation as key concepts for the ethical evaluation of for-profit health apps; the important distinction between supportive and manipulative digital health environments; and health apps and unfair commercial practices.

We started out by discussing some of the key regulations when it comes to for-profit healthcare apps.

Transcript

00:32 Susanna Raj:

Hello everyone, welcome to our Ethics 1st Live talk, where we come together every week to discuss lively topics surrounding ethics, and how Ethics 1st organizations, applications, companies, people and products can work together to create an environment where ethics is at the forefront of everything we do and think about. Today I am going to be the one hosting – Shilpi, who usually does this, had to leave for another event. Here with me is Sam, whom you all know; she is a member of our leadership team and a teacher of languages. Welcome, Sam.

1:22 Sam Wigglesworth:

Hi, good evening. It’s great to be here for another Ethics 1st Live event, and the topic’s going to be really fun to discuss.

Sam: We’re focusing on the ethics of applications in healthcare, and we’re going to be looking at regulation, what applications are out there, the vulnerabilities around them and how they could be used in a supportive way – or not, as the case may be!

2:01 Susanna:

That is the topic. We all use healthcare apps on our phones, and even on our watches now – some of you may own a wearable like an Apple Watch or a Fitbit. Health tracking apps are on our wearable devices, on our phones, and on our laptops; there are so many things that can track your steps, your activity level, your heart rate. My mom uses a blood pressure machine, and that brand has its own app. After we upgraded to the new iOS, it seemed that data was being pulled from that app into the Apple Health app, which was automatically gathering all the data from there. It was a little creepy! We had to go back and remove that information, because it was tracking her blood pressure levels. You are given options for where you can and cannot give other medical apps access, but to me this is an area of concern, because my parents’ medical institution also uses an app to schedule all their appointments, which have been video appointments for the last two years. So that app is there too, and Apple is giving me a list of all those apps in the Apple health record and asking: do you want to connect all the data from all of these? I’m just wondering what the regulations around healthcare apps are – Sam, what do you think?

3:52 Sam:

It’s a really good point about monitoring patient data. From my research, I’ve not found that much in the way of specific regulations governing the data that’s captured by applications, particularly health and patient monitoring data.

We know that there’s a significant amount of legislation in place in the UK and Europe when it comes to data protection – the Data Protection Act in the UK and GDPR in the EU – but I feel from what I’ve seen that there isn’t significant rigorous legislation that covers this, and it’s a grey area.

Sam: While we focus on security and data security, we’ve actually focused very little on bioethics and data. There’s a gap there, I think.

4:56 Susanna:

That is what I also found out; there’s the GDPR as you said, and all of those regulations cover personal data and privacy, but in terms of healthcare, these healthcare apps aggregate the information they have on you to create a health profile of you.

They are accumulating and aggregating information from different sources to say: you haven’t walked this many steps; you haven’t slept this many hours; you haven’t drunk enough water; your blood pressure is low – whatever it is. It’s giving you information and trying to tell you whether you are a healthy person, an unhealthy person, or on the borderline – so it’s actually giving you medical information; medical feedback.
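To make this concrete, here is a minimal, illustrative sketch (in Python) of the kind of rule-based aggregation Susanna describes. The thresholds, field names and messages are all hypothetical – real apps use their own proprietary logic – but the pattern of turning crude cutoffs into quasi-medical feedback is the same.

```python
from dataclasses import dataclass

@dataclass
class DailyMetrics:
    steps: int           # from a step counter
    sleep_hours: float   # from a sleep tracker
    systolic_bp: int     # from a connected blood pressure cuff

def health_feedback(m: DailyMetrics) -> str:
    """Toy 'health profile': count how many hypothetical thresholds were missed."""
    flags = 0
    if m.steps < 8000:                   # arbitrary step goal
        flags += 1
    if m.sleep_hours < 7:                # arbitrary sleep goal
        flags += 1
    if not 90 <= m.systolic_bp <= 120:   # crude blood-pressure band
        flags += 1
    verdicts = ["You look healthy!", "You're on the borderline.",
                "Unhealthy - check the tips tab!"]
    return verdicts[min(flags, 2)]

print(health_feedback(DailyMetrics(steps=5400, sleep_hours=6.5, systolic_bp=118)))
# -> "Unhealthy - check the tips tab!" (two arbitrary thresholds missed)
```

Nothing in logic like this is clinically validated, yet the output reads like a medical judgment – which is exactly the concern raised here.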

Susanna: That is very concerning to me, because if I am getting this feedback, I may decide to give it the same validity as a doctor or a health institution – many people do that, and to me that is manipulative in a way; it is not actually very supportive of me.

I think we have a few comments here – a LinkedIn user is saying there are examples of sensors in your shoes also sending data. Oh my goodness, shoes! I haven’t seen that, but yes, that could track how much pressure you are putting on the ground and whether you are landing on your heel or the front of your foot – I can see use cases for that. People who put more pressure on their heels may not be walking with the right posture, and so on. But what I am concerned about is how much this feedback influences us as individuals: we internalize all of this information and think that it is validated information coming from a knowledgeable entity like a physician or a medical institution. That, to me, is the point that requires more regulation.

We have a couple more questions that Shilpi usually writes out for us – what are the key concepts for the ethical evaluation of for-profit health apps? In terms of for-profit health apps, I think all of them are for-profit. I don’t want to name names, but all of those health tracking apps and devices being introduced into the market are very much for-profit; our medical institution is also for-profit – that is my opinion, it’s not a non-profit thing! We have so many stringent regulations on how medical services should be delivered, but that kind of regulation is not there for the for-profit health apps. What do you think – what are the vulnerabilities?

8:54 Sam:

That chimes completely with what you’re saying. From a medical perspective, if you’re working in the public sector, patient-sensitive data is very much secured internally, but when it comes to commercial apps the vulnerabilities exist for users across different age groups.

Sam: I think one of the key vulnerabilities is that, as users, we all have apps on our phones and Internet of Things devices that we use on a daily basis, and the data they capture is stored on cloud servers through our mobile devices – and it’s at risk. That’s the vulnerability.

That data could be exposed to malware (we’ve talked about this in previous discussions), or somebody may gain access to it if they wish to. We do often find that we lose control of this information. IoT devices can create risks and harms, and we’ve got to be aware of that, and also of the fact that data could then be utilized by third parties without permission. Those are the main vulnerabilities. Particularly when you talk about your parents, I know that the data of vulnerable people and patients is being tracked, and I think we just need to be very aware of how much is being utilized and stored.

10:38 Susanna:

One critical vulnerability is data being leaked or misused, but the other vulnerability I see, as a user who lives with elderly parents, is that they trust the information so much.

They trust it so much more than I do! When your Apple Watch tells you that your heart rate is going up or down, they trust that more than what they are actually feeling, and they start thinking ‘oh, something is now wrong with me’. It’s like biofeedback to them, without knowing that it could be a glitch – these devices are not always accurate in how they track movement; when you move your arms up, it can show a spike in your heart rate. There are people in our community who do not understand how the technology works, but trust the technology so much – and I think that is the biggest vulnerability. We have another LinkedIn user here saying that ‘the consent comes at device setup, but it seems genuine consent is unrealistic – what are you going to do, say no and be unable to use the device?’ Yes, of course, that’s the thing: it asks you to opt in or opt out, and when you opt out you are essentially locked out of the device; you are not allowed to use it.

12:37 Sam:

It’s a very good point that you made. I read a really interesting paper earlier from NIH (National Institutes of Health), and they were discussing the fact that in the past what we often did was search online about our health issues, but now we have an application that replaces that. But obviously with that comes the capture of data, and we have a significant number of data points now, because there are many apps. There’s a market, as you said earlier; there’s a commercialized arm to it, and I think we have to be very aware of that.

Sam: We want to diagnose things, we want to know a little bit more about our health, and I think applications and online searches sometimes become almost like advice. The real danger is if we take that as a diagnosis when it’s not accurate. We have highly trained, qualified professionals, and that’s not something we can replace.

14:09 Susanna:

One of our commenters is saying that ‘the terms and conditions generally say “we are not medical providers and this should not be treated as medical information”, but the thing is, when you see this information you go and bother your medical professionals; you go in with a false positive and you overwhelm the system’. Then there is a cascading effect that comes from not having that trust – my device says this, but you say it’s okay to ignore it, so what is happening to me? All of that now happens a lot. There is also manipulation that goes back and forth, because this is predictive analytics telling you that based on this data, we predict this is going to happen; or that if you don’t accumulate this many hours of sleep, it is going to affect your mental or physical health – that is another level of manipulation, I think. This is part of surveillance capitalism – the term now being used for all these for-profit healthcare apps that are out there with capitalism in mind. That is the manipulation to me: I am being manipulated by all of these data points. Instead of me using the technology, the technology is telling me how I should shape my day – and that is manipulation, right? And it is psychologically changing the whole community, I think. What do you think? Do you feel you are being manipulated by all the healthcare apps – or is it just me?

16:01 Sam:

No, I think you make a really good point. You mentioned surveillance capitalism, and we all know Zuboff’s work on that, and I think it’s important to understand that, as with social media, our data is a commoditized good. I do often feel that when I use a wellness app or other applications they can sometimes be a little intrusive, and you almost have to start managing it. It can become a little addictive, where you start to track your activities more based on the feedback, and I think it can affect your behavior. It’s a really good point that we shouldn’t be using these as diagnostic tools – we know that sleep is good for us, we know that eating well and drinking lots of water is good; we need to be aware of our own health, and using these apps is not a replacement for that. But yes, you do find that they can change the way you think about your own health, and that’s not a positive thing.

17:39 Susanna:

There is a LinkedIn user asking a question – will the apps sell your data to insurance companies?

17:46 Sam:

For all third parties, if your data is available to be sold, then there might be a market there. What do you think, Susanna – is it something that you feel is a risk?

18:13 Susanna:

That is potentially a risk, and there are no specific laws in place.

Susanna: The HIPAA (Health Insurance Portability and Accountability Act) protection laws say that you cannot share this information with insurers – but what about apps that are developed and put on the app platforms by insurance companies themselves?

Susanna: I don’t know whether that is another area we need to look at, because I do know that here in America we have Kaiser, we have Stanford, and all of them have developed apps and put them on the Android and Apple iOS platforms. But will personal data actually be connected to your medical record? I think the answer is no – at least that is still good, but it’s a murky area that could change. There is another comment that I think relates to HIPAA: ‘HIPAA requires some notice, but there is not a lot of genuine thought around whether the info in the notice is actually helpful or represents a choice – have you ever been to a dentist and said “no thanks, I don’t want to sign to say I saw the policy that I didn’t really see”?’

Susanna: Yeah, with all those consent forms, I really doubt that people are paying any attention. What I am worried about is the behavioural impact of all of these healthcare apps.

20:18 Sam:

I would add to that: when it comes to consent – and we have talked about this before – whether it’s browsing a website or downloading an application, I think a step forward would be to be quite explicit about what data would be shared with which third party. Having a really clear disclaimer available would give us the transparency we want. You’re right, terms and conditions are not clear, and we do often tick without reading.

21:05 Susanna:

And we don’t even check it when we’re at the doctor’s office – if you are in a doctor’s office you are already under stress; I don’t think you are reading that document very carefully! A LinkedIn user is asking: ‘do the kind of people who use such apps have a higher likelihood of being active or otherwise, therefore not representing the general population and setting incorrect metrics for future use?’ Yes, that is convenience sampling; a self-selected population.

Susanna: Let’s say I install a healthcare app that tracks my steps and is available to everyone, but I’m somebody who is more worried about my health – I am self-selecting into the study. I might actually be giving you the wrong metrics, because not everyone who leads a sedentary lifestyle installs that step checker.
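As an aside, the convenience-sampling effect Susanna describes is easy to demonstrate. Below is a small simulation (all numbers invented for illustration): if more active people are more likely to install a step tracker, the app’s average overstates how active the general population really is.

```python
import random

random.seed(0)

# Hypothetical population: 30% active walkers, 70% more sedentary.
population = [random.gauss(9000, 1500) if random.random() < 0.3
              else random.gauss(4000, 1500) for _ in range(100_000)]

def installs_tracker(daily_steps: float) -> bool:
    """Self-selection: active people are far more likely to install the app."""
    return random.random() < (0.8 if daily_steps > 7000 else 0.1)

app_users = [s for s in population if installs_tracker(s)]

print(f"True population mean: {sum(population) / len(population):,.0f} steps/day")
print(f"Mean among app users: {sum(app_users) / len(app_users):,.0f} steps/day")
# The app's sample skews substantially higher than the population,
# so metrics derived only from users set misleading baselines.
```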


Also, the app itself manipulates you into becoming more active. Here’s my personal story: my step tracker tells me that in 2020 I didn’t move at all! That was when the pandemic hit. I wonder whether that is reflective of the general population; I don’t have that aggregate data. When it tells me I didn’t move at all, I tend to feel: ‘Jeez, I didn’t move the whole year – what was I doing, sitting all the time?’ But I don’t know if it was the same for the entire world; probably everyone was more sedentary. So it is actually making me feel a little guilty, but if I had that aggregate information, maybe I would not feel so guilty.

23:40 Sam:

I do think, though, that with particular applications – whether it’s your Fitbit or your Apple Watch or whatever – you do find that the data is presented to you in a way that tries to encourage you to be more active. I do often feel that you get put under a little bit of pressure, and you don’t need that when you just want to go out for a walk and enjoy being outdoors.

24:26 Susanna:

Yeah – if your goal is 2,000 steps and you see you’ve only done 1,500, you run around for another minute just to get to that arbitrary number, without actually making sure you enjoyed the walk. The walk should be enough – why am I trying to hit an arbitrary number? Another LinkedIn user is noting here that a friend identified a trend where his blood pressure rose every day around the time he checked his blood pressure!

25:17 Sam:

Yeah, that makes sense. You do find that during holidays, when you might be more sedentary and not as active, you’re not checking as much; you don’t find yourself looking at the latest feed – so it’s good to sometimes switch off and take a break from it.

25:47 Susanna:

There’s actually a well-established finding that in doctors’ offices your blood pressure runs high – they will test you two or three times if it’s too high; for my mom it’s very high. That’s ‘white coat syndrome’ – even after arriving at the doctor’s office and relaxing there, she would still show high blood pressure.

Susanna: That is a common trend for most of us. Now, during Covid, we all have not only a blood pressure machine but an oximeter too – it looks like a doctor’s office at home now, and we get into this craziness of thinking ‘oh, I can find out what my oxygen saturation is!’ But in reality it’s better to check whether you are a healthy person overall; if you are able to breathe and function well, then you don’t need to check your oxygen.

26:50 Sam:

Yeah – I think having something that can give you a little bit of information on the number of steps you’ve taken is okay, as long as you keep it in the context of your general health. You can go outdoors, for example, and have a break without needing an application to do it.

27:45 Susanna:

So what do you think is an unfair commercial practice when it comes to healthcare apps?

27:54 Sam:

I think an unfair commercial practice might be your data being sold to a third party because it has market value, and then being used to diagnose things. That is certainly unethical in my eyes – our health data should be protected above all. We talked about biofeedback earlier, but the data that’s tracking us could be utilized for other things: not just for marketing purposes, but also to change the way that we behave.

28:54 Susanna:

I think one area where I hope this is not coming into practice is the pharmaceutical industry being tied to the healthcare app industry.

Susanna: Right now we have that on television – there are a lot of ads here; I don’t know how it is in the UK – but here, all the new medications are shown on TV first. If there’s a new medicine for diabetes, or a new beta blocker, there will be a very expensive ad campaign showing how it is going to improve your life. It has a story-like presentation and appeal; it almost looks like a PSA, and then at the very end, in a very fast, 2x-speed paragraph, they talk about all the side effects of the new drug and the unproven data points – but that comes at the very end, and so fast that you can’t really grasp it. The whole ad is so melodramatic that you feel you will have this glorious life and be relieved of your symptoms if you take this new drug. That exists in the television industry now, but what I’m wondering is: what if all these healthcare apps have a tie-in with the pharmaceutical industry? They could market directly to people without the middleman of the physician. I don’t know if it exists already – I haven’t seen any example of it in the healthcare industry – but it could be an unfair healthcare practice, in my opinion.

30:50 Sam:

I recognize that, definitely. Dosage is one thing that was picked up in one of the papers I was reading earlier – if we are to rely on tele-health apps, it might be that we’re not only using it for diagnosis but we’re actually using it for informing the dosage as well, and that could put us at risk.

31:22 Susanna:

Mahesh R. raises an important point here, in response to our earlier discussion about apps selling information to insurers – he thinks they might do that: ‘taking away the private information and selling just the metadata’. What do you think about selling just the metadata – is that okay? If it’s not individually connected to you, would it be okay for this information to be sold to an insurer, with the insurance company making an aggregate decision based on it – for example, on your geographic area? I don’t think so.

32:23 Sam:

This is a really interesting discussion. First of all, we have to define metadata, as opposed to PII (Personally Identifiable Information), which is absolutely out of bounds. What is metadata? As we’ve discussed before, it could include not just names and ages but other demographic information too, and that could lead to targeted advertising – products being pushed at you that you may not need or want. It’s an additional stress, particularly on vulnerable people, and if you haven’t got the right information to make the decision, it’s a lot of pressure. We’d have to define what metadata is, but we certainly have to be very careful.
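As a sketch of why ‘just the metadata’ is not automatically safe, here is a hypothetical record with the direct identifiers stripped out. All field names are invented for illustration; the point is that the quasi-identifiers left behind (age band, partial postcode, condition flags) can still support targeting, and re-identification research has repeatedly shown that a few such attributes combined can single people out.

```python
# Hypothetical health-app record; field names invented for illustration.
record = {
    "name": "Jane Doe",           # direct identifier (PII)
    "email": "jane@example.com",  # direct identifier (PII)
    "age": 67,
    "postcode": "94301",
    "avg_daily_steps": 2100,
    "hypertension_flag": True,
}

DIRECT_IDENTIFIERS = {"name", "email"}

def strip_pii(rec: dict) -> dict:
    """Naive 'metadata' export: drop direct identifiers, coarsen the rest."""
    out = {k: v for k, v in rec.items() if k not in DIRECT_IDENTIFIERS}
    out["age_band"] = f"{(out.pop('age') // 10) * 10}s"  # 67 -> '60s'
    out["postcode_prefix"] = out.pop("postcode")[:3]     # '94301' -> '943'
    return out

print(strip_pii(record))
# {'avg_daily_steps': 2100, 'hypertension_flag': True,
#  'age_band': '60s', 'postcode_prefix': '943'}
# No name or email remains, yet 'sedentary, hypertensive, 60s, area 943'
# is still a profile an insurer or advertiser could act on.
```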


33:14 Susanna:

And it could come from a very low sample size, so you could be making generalized predictions that don’t hold for the entire population – those kinds of problems are there in the use of metadata. Unless your metadata comes from a very large sample, at a national or global level, I don’t think it should be used that way. I don’t know if they do that – that is something we’ll have to look into.

Susanna: But I think, as of now, from what I read in preparation for this talk, GDPR and the regulations here in America do not allow healthcare apps to sell this data to insurance providers – that’s the conclusion I came to, but I could be wrong.

Susanna: If somebody has more information on any of these topics, please feel free to let us know! You can reach us at https://dataethics4all.org, and on all of these channels – LinkedIn, Twitter and YouTube. Come join our community and let us know how we can use your knowledge to make sure everyone is well informed on these topics. Thank you for joining us, and thank you to a very engaged audience!

35:20 Sam:

Thank you all for joining us for this talk; it’s been a really fruitful and interesting discussion. We’ve got lots more to talk about, so please come to our community, join us as a member, and go to dataethics4all.org if you want to find out a little more about what we’re doing. Thank you, everyone!

Shilpi Agarwal, Susanna Raj and Sam Wigglesworth

Leadership Team, DataEthics4All

Join Us in this weekly discussion of Ethics 1stᵀᴹ Live and let’s build a better AI World Together! If you’d like to be a Guest on the Show, Sponsor the Show, or for Media Inquiries, please email us at connect@dataethics4all.org

Come, Let’s Build a Better AI World Together!

Written by

I am a Philosophy and AI graduate interning with DataEthics4All Foundation as part of their Content Team to produce updates on current affairs in tech ethics.