DataEthics4Allᵀᴹ Ethics 1stᵀᴹ Live: Ethics Bounty System
“The merit of this is that we can get to these violations or ethical problems sooner”
~ Shilpi Agarwal
“I think the bounty system is less damaging and it’s more beneficial. It’s one of the most practical approaches”
~ Shilpi Agarwal
“Everyone can find a vulnerability that is against their own community, and bring that to the tech companies’ attention”
~ Susanna Raj
Talk Summary
DataEthics4Allᵀᴹ brings this Series of Weekly Ethics 1stᵀᴹ Live Talks for Leaders with an Ethics 1stᵀᴹ Mindset. Food for thought for Leaders who put People above Profits!! Come, join us for this lively and informative weekly discussion on how to create an Ethics 1stᵀᴹ World with Ethics 1stᵀᴹ minded: People, Cultures, and Solutions. In this Live Talk, we discussed Ethics Bounty Systems.
1:12 Shilpi: Thank you for joining us for this episode of Ethics 1st Live, where we bring food for thought for ethics first leaders, or leaders who put people above profits.
1:24 Shilpi: So today’s topic is a very interesting one: the ethics bounty system. Let’s start with what an ethics bounty system is.
2:42 Susanna: The way I understood it, it’s basically hunting for bugs: bugs in the code, in a program, on a public website, or in an AI model. It started with websites in the past, and now we’re seeing it with AI models too. So you find a bug, or you find a vulnerability, and they give you a reward for it.
3:05 Shilpi: Yes, that’s my understanding too. Bug bounties for code have been around for some time, but now they’re being brought into AI systems.
3:21 Shilpi: With AI systems, until you deploy them, you don’t really know, not just what bugs they have, but how they affect the audience or the people they were intended for. And they can cause a lot of harm, as we both know, to that intended audience.
3:42 Shilpi: You cannot know that until you have different perspectives from different people, and that’s why there is something called the ethics bounty system, where tech companies are coming up with ways to encourage people from different socio-economic backgrounds, genders, and races, everyone the software, app, platform, or toolkit is intended for, to test it out and report bugs.
4:18 Shilpi: Bugs in a broader sense: ‘the algorithm is not right, the data is incorrect’. It doesn’t necessarily mean the code in this case; it’s about the AI system, right? So why do you think it’s important to have such a system in today’s AI world?
4:37 Susanna: I mean, there are lots of reasons given for that. When you try to find a bug or you try to find an ethics violation from within the team, you have a certain bias.
4:48 Susanna: Just as we are not able to catch our own typos, we cannot catch our own mistakes: we are so embedded in the process itself that we can’t see what our vulnerabilities are, or which other groups we are discriminating against.
5:07 Susanna: So, giving this to an outside group, or to individuals who don’t have that same loyalty to the team, means they’ll be able to find it quite easily. They can also be very open and transparent in how they bring it to our attention. Another great impact I see is that you are opening this up to a wider diversity of people. It’s not ‘I’m opening this ethics bounty only to the USA’; it’s all over the world. So everyone can find a vulnerability that is against their own community, and bring that to the tech companies’ attention. So I think this is a great idea.
5:47 Shilpi: Yeah, I completely agree with you. Just like you said, when these AI systems are being developed, nobody develops them with bad intentions, right? It’s sometimes not even possible to think of all the scenarios in which they could create harm.
5:49 Shilpi: So having these different, diverse voices and backgrounds contribute, with a bounty at the end, encourages people to spend time on it and find ethical violations.
6:20 Shilpi: So, the merit of this is that we can get to these violations or ethical problems sooner. And then, instead of announcing it on social media or taking the legal route, people can reach out to the tech companies themselves and report the problems, and the tech companies can take action.
6:45 Shilpi: So it’s a win-win, right? Instead of having a PR nightmare, people are helping them find the bugs and solve these problems, reporting from the diverse backgrounds they come from. So yeah, I think it’s a great idea and more of it should be implemented. What are some of the tech companies or examples you have seen adopting these ethics bounty systems?
7:22 Susanna: I think pretty much all the big companies now have a program like this, including Facebook, Apple, Microsoft, and Google. They all have a bounty system, and Twitter’s is one of the major recent ones, but I think they all have this program in place.
7:40 Shilpi: Yeah. And so Twitter’s META team, their Machine learning, Ethics, Transparency and Accountability team, created this recent algorithmic bias bounty. I was reading through a report about one of their algorithms, the one for image cropping, where somebody brought it to their attention that it was favoring white faces over darker ones.
8:08 Shilpi: Even with Barack Obama’s photo, they proved that he was being cropped out of the frame. And as they ran this bounty, and I read through the research where the prizes were given, so many other issues came up and were awarded prizes, issues that were not even intended to surface, nor would they have surfaced unless this bounty system was created.
8:36 Shilpi: So I think it’s a wonderful idea, and now Google has also started implementing it for the Google Play Store, right? So it’s not only their own code: they are also providing a bounty for anybody who is able to find a bug in any of the Google Play Store apps, which is the next step.
9:06 Shilpi: Apple has been doing it too. They have these rules for their regular code: you must be the first to report the issue, you must clearly explain it and show evidence for it, you can’t disclose it publicly until Apple gets a chance to fix it, and you can get a bonus if a new patch reintroduces a known problem.
9:30 Shilpi: All of these things are already there. They can easily take this and implement it as an ethics bounty program for their AI systems, which makes sense. You can’t publicly bash them on social media, tarnish their reputation, and still get the bounty; you can’t have your cake and eat it too. You first have to report it to them and give them a chance to fix it. Do you want to add anything to this?
10:01 Susanna: So yes, that is definitely one aspect of it. They get to see the vulnerability or the bias in the system, and they get to fix it.
10:12 Susanna: But I’m also wondering: was there a major ethical violation because of this vulnerability? And did they quickly fix it or patch it up? If that’s the case, does the public get to know the impact it had while it was live? Is there transparency around it?
10:33 Shilpi: That is a good point. So what you’re saying is: if somebody finds a vulnerability and reports it directly to the tech company, and that company fixes it, will there be a public report afterwards? Or will it just be hushed up and swept under the rug?
10:48 Susanna: Yes, because it could have already impacted some communities. It could be that my data was breached and sold to somebody, but they fixed the vulnerability, so I never knew about it. So where is the transparency there? Does the bounty system cover that?
11:09 Shilpi: That’s a good question, and I think it could be a great question for our audience, as well as for some of the tech companies who are coming up with such bounty systems. I guess what we are trying to say is: yes, this system works. It’s great that you are asking for the community’s help by crowdsourcing to find the problem, right?
11:33 Shilpi: So that is a great first step, but then let’s go one step further and not just sweep it under the rug. If you say, ‘Oh, we’ll give you the bounty and you can’t say anything about it until we get this fixed’, how will the person who raised the flag know, first of all, that you have fixed it?
11:52 Shilpi: Will you make a public statement about it? Will you let that person know privately that you’ve fixed it and what you have done? And then will you release a public statement announcing to the world that this was a vulnerability that someone found, these are the steps we have taken, these are the communities affected, and going forward this will not happen again?
11:53 Shilpi: So there needs to be a clear, closed-loop process for this to be a foolproof method.
12:22 Susanna: Yes, I do agree with that. I think that is the next step. This is a step in the right direction, but we need to move a little further to bring transparency to the process. Mistakes can happen; I don’t believe we can be a society without mistakes in building anything. We build bridges, and even they have faults and crumble and fall down. Mistakes happen anywhere.
12:49 Susanna: The process and the transparency around it is what an ethical community, like our community, would demand.
12:58 Shilpi: Yes, and speaking of which, what if we started a database and asked people to report any of these ethics bounty systems they find, especially for AI, and also any vulnerabilities they find?
13:18 Shilpi: I know there are some who do this already, but what if we could do something where people feel comfortable reporting this, and then we are able to work with the tech companies to figure out a way to make sure it has been resolved?
13:33 Shilpi: So, an independent body that addresses exactly what you touched upon: it cannot just be reported to the company and be done with, because in that case we don’t know whether any steps were taken afterwards.
13:46 Shilpi: An independent body like DataEthics4All, per se, or any other independent body, would be the mediator that understands what steps were taken and whether they were enough, and would act as a kind of regulatory body verifying that it was done exactly the way it was supposed to be.
14:10 Susanna: I think that would be really great to have, because right now we have AI incident reporters and databases that track vulnerabilities and incidents or adverse impacts of AI. There are systems that track that, but there are no systems like this.
14:26 Susanna: A bounty hunter finds a vulnerability, but then what about the impact before somebody found it, and what happens after that?
14:43 Susanna: If an independent organization can go and build the database or bring some transparency around it, that would be a great service to the public.
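To make the registry idea concrete, here is a minimal Python sketch of what one record in such an independent database might look like. Every field name and status value here is an illustrative assumption for discussion, not an existing schema:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class ReportStatus(Enum):
    """Lifecycle stages for a report in a hypothetical independent registry."""
    SUBMITTED = "submitted"        # reporter has filed the finding
    ACKNOWLEDGED = "acknowledged"  # the company has confirmed receipt
    FIX_VERIFIED = "fix_verified"  # the independent body has verified the fix
    DISCLOSED = "disclosed"        # a public statement has been released


@dataclass
class EthicsBountyReport:
    """One record in the proposed public registry (illustrative only)."""
    reporter: str                  # who found the vulnerability
    company: str                   # who runs the bounty program
    system: str                    # the AI system or app affected
    description: str               # what the ethical violation is
    communities_affected: list[str] = field(default_factory=list)
    date_reported: date = field(default_factory=date.today)
    status: ReportStatus = ReportStatus.SUBMITTED
```

The status field sketches the closed-loop process discussed above: a report would not be considered complete until the fix has been independently verified and publicly disclosed.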
14:52 Shilpi: Alright, so you second that we should move in that direction. The last question for today: you know how in software development, and I’m a software engineer married to another one, bugs get filed under critical issues? There are different priority levels, red and orange zones, all color-coded. If there is a bug in the code, it’s critical.
15:33 Shilpi: So, do you think it should be grouped as a critical issue if there is an ethical violation in an AI system?
15:58 Susanna: Yes, I think it should be. I think we should have a classification like that, one that tells us which issues have the most adverse impact, which are the most critical, and goes down the list to the least impactful. Yes, that would be a great idea.
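As a rough sketch of what such a classification might look like in code, assuming hypothetical severity names and a toy triage rule (this does not reflect any company’s actual triage process):

```python
from enum import IntEnum


class EthicsSeverity(IntEnum):
    """Hypothetical severity scale mirroring color-coded software bug triage:
    a lower number means a higher priority."""
    CRITICAL = 1  # demonstrated harm to protected or marginalized communities
    HIGH = 2      # bias shown, real-world harm likely if left unaddressed
    MEDIUM = 3    # measurable disparity with limited real-world impact so far
    LOW = 4       # cosmetic or speculative issue


def triage(harm_demonstrated: bool, communities_affected: int) -> EthicsSeverity:
    """Toy triage rule: any demonstrated harm is treated as critical,
    echoing the point that ethical violations deserve the same
    'critical bug' treatment as crashes in software development."""
    if harm_demonstrated:
        return EthicsSeverity.CRITICAL
    return EthicsSeverity.HIGH if communities_affected > 0 else EthicsSeverity.MEDIUM


# Example: a demonstrated bias affecting one community is triaged as critical.
print(triage(harm_demonstrated=True, communities_affected=1).name)  # CRITICAL
```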
16:14 Shilpi: Yes, I think so too, because we may think that this code is not creating harm, but bias in AI can cause harm to so many different protected and marginalized communities and people, so we need to make sure that doesn’t happen. We are all looking for tech regulation, but that’s going to take time, and as we know, it’s not a foolproof method to catch everything.
16:42 Shilpi: So, the ethics bounty system does not rely on bashing the companies on social media, filing for a legal battle, or waiting for new regulation to take place.
16:58 Shilpi: I think the bounty system is less damaging and more beneficial. It’s one of the most practical approaches to regulating this that we have ever seen and used.
17:11 Shilpi: Like you touched upon, it’s a global system that can be used in non-native-English-speaking countries, where things happen that get swept under the rug, or where not much action is taken because the United States is not predominantly affected. If we have an ethics bounty system, and if we are able to classify issues as critical, issues that people across the globe have crowdsourced as a community, I think that is a great step toward building the next generation of AI systems that are ethical and transparent, and toward making sure that tech companies are accountable. Is there anything you would like to add, Susanna, before we wrap?
18:03 Susanna: Nothing, except that I think this is also a call to action: anyone in the audience who wants to join us in helping build this, you’re welcome to come and talk to us.
18:14 Shilpi: Yes, absolutely, and a great call to action, Susanna, thank you for adding that. We are always looking for dedicated volunteers who are passionate about this mission and this problem, and who want to bring a solution. We are not the type who will just talk about the problem; we are solution focused. We just touched upon one of the solutions we would like to implement, and we would love your help in coming together as a community to take this first step. So we invite you to join us in creating a better ethics first world. Thank you, everyone.
19:05 Shilpi: To join our community, it’s DataEthics4All.org. Please join our community, and join us for these weekly live talks where we bring food for thought for ethics first leaders every week, Thursdays at 3pm Pacific.
Join Us in this weekly discussion of Ethics 1stᵀᴹ Live and let’s build a better AI World Together! If you’d like to be a Guest on the Show, Sponsor the Show or for Media Inquiries, please email us at connect@dataethics4all.org