
Building Ethical Technology: A Panel Discussion.

❝Ethics is not an engineering problem. It’s an enterprise-wide problem❞ ~ Swathi Young

❝When they smell ethical smoke, what’s the lever they pull?❞ ~ Reid Blackman


In this exciting panel discussion on Building Ethical Technology, the panelists discuss the implications for society, the public, and private companies if we do not build ethics and trust into AI technology. They discuss the need to consider the ethics of technology at every layer of the organization and argue that it has to be part of the corporate culture at every level, from the engineer all the way up to the CEO, with the tone set from the top. They explore the importance of individuals becoming data privacy advocates and the necessity of building security, trust, and privacy into the design of AI tools. The panel also advocates training engineers, UI/UX designers, and technology experts in how to build ethical tech for the future, eliminating bias in data, and appealing to the company’s bottom line to get buy-in from senior leaders to build the ethical tech we want to see.


TRANSCRIPT

00:02 Welcome, all. Um, this is the final panel of the DataEthics4All Summit. My name is Ben Hertzberg and I’ll be your moderator today. I’m a director on the chief data and analytics officer research team at Gartner, the tech research and advisory company, but I also used to be a political philosophy professor; I taught politics at Emory and some other institutions.

00:36 Data ethics is a topic that’s near and dear to my heart. I’ll take a few moments to introduce your three illustrious panelists for today, um, and then I have a series of questions prepared and we’ll have a rich discussion — you’re in for a treat. So, to begin with, um, at least on my screen it’s to my left, I don’t know how it is for you, we’ll start with Reid Blackman. Reid is the founder and CEO of Virtue, where he works with companies to integrate ethics and ethical risk mitigation into company culture and the development, deployment, and procurement of emerging technology products. He’s also a senior advisor to Ernst & Young and sits on their AI advisory board. So welcome, Reid!

01:23 And then we have Swathi Young. Swathi is the chief technology officer at Integrity Management Services, where she works with government and commercial enterprises to minimize the risk of improper payments, fraud, and abuse. She also works, in particular, to design AI solutions for business needs in healthcare. Welcome, Swathi!

01:46 And finally, Brian Green. Brian is the director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University in California. His research there focuses on AI ethics as well as the ethics of a whole bunch of other cool tech topics: space exploration, the ethics of risky technologies, and a wide variety of other topics. So welcome, Brian. Thank you. All right, um, the topic of the panel, as you all know, is building ethical technology. So I wanted to start with a big question, but hopefully a provocative one: where do ethical considerations best fit in the technology and software development process? And this has a corollary to it, um, which is: where do you, the three of you, see companies currently making mistakes as they attempt to incorporate ethical considerations into their product development? Um, and we’ll go from there.

02:52 So I’m happy to start, um, and, uh, I would just say, first of all, thank you so much for having me here. I’m really excited to be here, and I’m really excited to hear what everyone else on the panel has to say. Also, um, I would say the place to start with ethics is to have it everywhere. It needs to be at every layer of the organization, and trying to isolate it to one particular part of the organization is one of those problems that companies have. It needs to really be a part of the corporate culture. We’ve been working at the Markkula Center with various tech companies for several years now, and at a lot of tech companies, if they’re doing it right, it’s part of the corporate culture, and everyone understands that it helps them have a better product and ultimately, uh, helps the bottom line. I mean, that’s the good, uh, consequence that comes from ethics: it prevents problems from happening and ultimately helps your customers become happier customers.

03:56 On the other hand, if you’re not paying attention to ethics and you’re not paying attention to your customers, you can very often, uh, walk into traps and have all sorts of problems. Your product is not that popular, um, it’s not fitting the market, right? Ultimately, it’s, uh, something that harms the company. So it’s really a matter of whether you can build it into your corporate culture at every level, from the engineer all the way up to the CEO, and the tone is really set from the top. If the tone is not being set at the top, then everyone knows what their incentives are, right? Their incentives are not necessarily to do the right thing. So it really needs to be at all levels, and if it’s not, then that’s where the problems happen. Thank you.

04:40 I can add something to this conversation, and I’ll be wary that I’m in the midst of philosophers here — I’m coming from an engineering background — but I agree with what Brian mentioned, and I want to add a little bit of nuance and color to this conversation.

04:56 So, as Brian mentioned — and as I say — ethics is not an engineering problem; it’s an enterprise-wide problem. But having said that, I think it also starts with the business processes, and as a practicing, uh, data scientist I observe a lot of things, both in federal agencies as well as the commercial sector. So one of the things I want to emphasize is the business process. Let me give an example: if a business that gives loans to small businesses decides that they want to consider the 10-year history of applicants’ criminal charges or traffic tickets or all their judgments and liens, that’s a business decision.

05:42 I might have been pulled over by a cop 10 years ago, might not have paid the fines, might have gone to court, and that is being considered, 12 years later, in whether to give me a business loan or not. That’s a business decision, and the business process and the business decisions drive the technology. So when it comes to an engineer’s desk, they’re told: these are the heuristics of the business rules, now go and write a machine learning algorithm, right?

06:10 So the conversation has to start with the stakeholders. The other mistake I see happening — to your corollary question — is that sometimes machine learning algorithms, or even statistical modeling, can start in a particular department. It’s not necessarily a top-down approach where the C-suite is involved in making a decision, “Oh, let’s adopt enterprise AI or enterprise machine learning”; sometimes it starts in pockets where they’re prototyping some stuff really quickly, and so the conversation is not happening outside those pockets of folks who are meddling with data.

06:50 And the third point I want to address is that it’s not one-size-fits-all. Let me explain that. For example, I was working on lung cancer research and the correlation to lung cancer readmission rates using logistic regression, but the question was the data I started with. It came from a publicly available data set of all the lung cancer patients in the United States, and I went off that data and obviously, uh, sorted and analyzed it for bias, etc., but what was missing is that I did not know who labeled the data set, right? So for some use cases it can start long before it comes to an engineer’s desk or even a stakeholder’s attention.

07:40 It can start at the process where the data is collected and labeled, and you don’t know — and we have heard horror stories of people labeling data being paid pennies on the dollar. So it’s not one size fits all; the use cases are varied. If I’m predicting a machine’s breakdown, I don’t need to look at the ethics of it, but if I’m predicting a much wider variety of things — whether I’m making a judgment about whether a person has to be given bail or not — yes, that has to be considered. So, to bring it home: one, it’s not just an engineering problem; two, it depends on the use case, and the beginning of the use case can be at a variety of points across the spectrum of the life cycle; and three, it can have to do with your business process or the heuristics of your business rules.

08:36 Did you want to jump in? I mean, I don’t think I have anything revolutionary to say. I think that Swathi and Brian did a really good job of covering the main point I’d make, which is something like, um, it can’t be a place. So the question, when it’s framed as something like “where in the organization should ethics be,” is in some sense the wrong kind of question, because it assumes that there is a place that it should be, when in fact it should be roughly everywhere. But one thing I want to also highlight — and I think, you know, both Brian and Swathi mentioned this as well — is that it’s not enough to just say ethics is important. Um, you know, having an ethical culture doesn’t mean anything if it’s not backed up by process, uh, by infrastructure, um, by financial incentives.

09:20 You know, a lot of people want to be ethical. They want to take the ethical stuff really seriously in product development, but if they’re, you know, financially incentivized to go a different direction, then most people will go in that other direction, blameworthy or not. That’s just what they’re going to do, um, so you need financial incentives to be properly created around the ethical infrastructure.

09:42 The other thing to say is, um, there are the responsibilities of each of the roles. You know, so, for instance, data collectors versus developers — they’re going to have different kinds of responsibilities in order to make sure that things are, you know, more or less ethically safe, or safer than they would have been without those processes in place. Um, it’s also important to foster the right kinds of relationships among roles. So, for instance, um, the developers should be in conversation with the data collectors to say, hey, listen, this is the kind of thing that we’re trying to develop; here’s the kind of data that we need; here’s the kind of data that we think would be messed up; um, here’s where bias might creep in.

10:16 So when you are putting together these data sets for us, here’s roughly what we need them to look like, um, otherwise we’re worried about bias popping up in various places. And then you need this not just, as it were, um, from the developers down, but from the developers up, right? So if developers, product developers, PMs, etc. are properly trained, they have the right kind of awareness of what the issues are. Um, aside from the standardized approaches to bias mitigation, explainability, uh, looking out for privacy violations and so on and so forth, they also need to know what to do once they smell ethical smoke.

10:53 So when they smell ethical smoke, what’s the lever they pull? Who does it go to? Is it a deliberative body, um? Who is that deliberative body comprised of? What kind of framework do they have for making decisions? What kind of documentation processes are there around that? So culture is fine — it’s great, it’s absolutely necessary — but you also need a process that is not just a function of what individual people are responsible for, given their roles, but of what kinds of relationships we have to create among roles and between roles, right? Thank you, um.

11:24 So let me try to get a little more specific, um — I appreciate the broad overview, and parts of your comments have spoken to this. But because this is a group of people who are interested in data science and data ethics in particular, if we think about the algorithm development process as data sourcing, model development, model deployment, and monitoring and supervision, what ethical issues arise in each one of those processes? And is there one or the other that you have found, in your experiences, particularly troublesome from an ethical point of view? Should folks in the audience be most concerned about data sourcing, model development, model supervision, all three? Why or why not?

12:16 So I can start with that, um. Again, I would go back to the use case, because AI is such an overloaded term. It has all these subfields under it — there’s predictive analytics, natural language processing, robotics, etc. So it’s very, very dependent on the use case. To Reid’s point earlier, the developers have to speak to the data sources — but not always, because sometimes, if I’m working in the healthcare sector and I want to work on, say, lung cancer research or anything else, I don’t know where the data is coming from. There are some public non-profits maintaining the data sets where you can buy the data from, so I absolutely don’t have a clue as to where they’re collecting it — obviously, they’re collecting from hospitals. So it also depends on the problem.

13:09 Sometimes, if I’m within the organization where the data is coming from — like I’m working for a federal agency and the data is held by the federal agency — yes, I have a purview into how the data is collected, but not necessarily all the time. But one thing I notice as a practicing data scientist is that data bias is real, and there’s nothing called unbiased data. All data is biased. This is because, say, for example, you’re tracking credit card fraud, and if you take one million credit card transactions, the vast majority would be regular transactions and some 10,000 would be fraudulent, and that is a bias by itself. It’s very skewed data. So bias is a very commonly occurring thing for any data set, big or small, that you take.

14:01 So that’s a common thing, but obviously there are techniques like SMOTE and others that any data scientist worth their salt will apply. Um, but having said that, again, the use cases are nuanced, and nowadays we see second- and third-degree impacts. For example, facial recognition technology was initially developed many decades back, and it did not use AI; but with AI coming into play, it exacerbated all the nuances of ethnic and racial discrimination, so it has to be reconsidered now — we heard the announcements, even by IBM, that it’s been put on hold.
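A minimal sketch of the skewed-data problem and a SMOTE-style rebalancing step, assuming scikit-learn and the imbalanced-learn package are available; the data is synthetic, not from any real fraud system:

```python
# Minimal sketch: highly skewed "fraud" data, rebalanced with SMOTE before training.
# Assumes scikit-learn and imbalanced-learn are installed; the data here is synthetic.
from collections import Counter

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# Roughly 1% "fraudulent" transactions, mirroring the skew described above.
X, y = make_classification(
    n_samples=100_000, n_features=20, weights=[0.99, 0.01], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
print("class counts before resampling:", Counter(y_train))

# SMOTE synthesizes new minority-class examples rather than simply duplicating rows.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)
print("class counts after resampling: ", Counter(y_res))

model = LogisticRegression(max_iter=1000).fit(X_res, y_res)
print("test accuracy:", round(model.score(X_test, y_test), 3))
```

Resampling only addresses the statistical skew; it says nothing about how the labels were produced or whom the model might wrong, which is the panel’s larger point.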

14:36 Similarly, I’ll take another use case, where, uh, one of my friends is working on predictive analytics of ship breakdowns for the US Navy. In that case, yes, there is biased data, and you will deal with some of that by normalizing the data, but you don’t necessarily study so much the impacts of the ethical implications themselves. So again, it depends on the use case. I see bias as the most common thing, but having said that, recently I had the honor of participating in the ethical AI framework for the U.S. federal government; it’s right now under review and we will be publishing it soon.

15:15 So there are five considerations we take into account. Bias I have already spoken about; the next are fairness, transparency, responsibility, and interpretability, and each one obviously is a very nuanced subject — you’re all philosophers, you know it’s not black and white. But monitoring for these five, what we call the indicators, and then seeing how we can measure them — is there a way to identify, uh, the presence or the impact of all five of these indicators — is one of the frameworks that I’m deeply and currently involved in. So again, in a nutshell, it depends on the use case and there are ways to deal with the data, but you have to look at the whole end-to-end process.

16:05 Thank you very much, Swathi. And we just got a question, um, for Reid: how do you define specific use cases for AI bias for a given client? How do you integrate these use cases into their governance framework? It looks like this is from Pamela, and Pamela says that she’s developing a framework flexible enough to accommodate use cases but also to identify specific use cases. Okay, so I think — if I understand the question right, and Pamela, please tell me if I’m not understanding it right — it’s something like: how do we take a general framework that articulates our general values, like being against discrimination or bias, and apply it to particular products that we’re developing? Um, I’m going to assume that’s the question.

16:51 You have five seconds to say otherwise... no? Okay, so I got it wrong? Oh no — thanks! Okay, I think I have it then. Um, so, good — I think this is a great question, um, and it actually dovetails really well with the general question that Ben had asked, which was: where are the problems? So one problem is an absence of a process by which, um, developers think about, okay, how should we think about bias in this kind of case? Um, so let me give you a very simple case, because I think fairness and bias are more complicated, in a way.

17:30 You might think that explainability is really important when you’re developing AI models. Now, I’m of the view that sometimes explainability is absolutely ethically imperative, sometimes it’s utterly ethically irrelevant, and sometimes it’s ethically relevant not by virtue of being, if you like, intrinsically important, but by virtue of its ability to give us insight into whether or not we’re being discriminatory in an ethically objectionable way. Um, and so sometimes explainability is really important, and sometimes it’s a little bit important, and as we know, there are often trade-offs people need to make between how accurate they want a model versus how explainable they want it.

18:08 And how do we make those trade-offs? Now, one thing that you can do in your general framework is have a particular articulation of the value of explainability — what about it is valuable, um, why it’s important for your business purposes. For instance, as everybody knows, if you’re in the mortgage lending business, explainability is going to be really important for certain sorts of decisions; for other decisions, not so important. So your general framework might articulate why explainability is important, but then you might have a process in place in the course of product development, presumably extremely early on in that process, where you ask: okay, look, is the value of explainability, as articulated by our general framework, relevant in this particular case? Um, and if it is — if explainability is important — how important is it? You know, maybe you have a way of articulating exactly how important that is, to give developers some guidance about how to make that kind of trade-off.
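To make the accuracy-versus-explainability trade-off concrete, here is a minimal sketch; the dataset and model choices are illustrative assumptions, not anything the panel prescribes. It puts a directly inspectable model next to a harder-to-explain one on the same task:

```python
# Minimal sketch: an interpretable model vs. a black-box model on the same task.
# The dataset and model choices are illustrative assumptions only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable: each coefficient can be read as a direction and strength of influence.
interpretable = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
interpretable.fit(X_train, y_train)

# Black box: often a bit more accurate, much harder to explain to an affected person.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", round(interpretable.score(X_test, y_test), 3))
print("gradient boosting accuracy:  ", round(black_box.score(X_test, y_test), 3))

# For the interpretable model, the "explanation" is right there in the coefficients.
coefs = interpretable.named_steps["logisticregression"].coef_[0]
for name, w in sorted(zip(X.columns, coefs), key=lambda t: abs(t[1]), reverse=True)[:5]:
    print(f"{name:>25s}: {w:+.2f}")
```

Whether that coefficient-level story is ethically sufficient depends, as Reid says, on the use case: a mortgage denial may demand it, while a machine-breakdown prediction may not.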

18:55 Now, you take that sort of general approach and you’re going to wind up with the same thing when we’re talking about bias and fairness. The problem with that, as a lot of people know, especially if you’re developing products, is that that whole thing is a mess. It’s a total mess. Fairness in machine learning is a mess for a variety of reasons. Um, one reason is that there are literally dozens of so-called definitions of fairness, not all of which are compatible with each other. So that’s one issue: you already need a process that says something like, if one of these kinds of definitions is appropriate, which one would it be, and why? And there needs to be a process for determining that, and if the developers can’t determine that, then there needs to be a body to which they can appeal, or at least a more senior leader to which they can appeal,

19:41 to answer such a question. Um, and then there’s another question: is there an issue of fairness — that is to say, might we do something unfair, unjust, in a way that’s not captured by any of these so-called mathematical definitions of fairness? Um, and there are lots of ways of going wrong there, because there are lots of ways of wronging people that are not about allocating goods to one sub-population, you know, more frequently than you do to another, which is essentially what most fairness in machine learning is about. So you’re going to need, um, a process by which a qualitative assessment is made, either by an individual or, ideally, a set of individuals, where it’s assessed: what situation are we in? What are the kinds of wrongs that we’re concerned about? Just one small thing — I want to highlight that last thing that I said, which is that people talk a lot about bias.

20:34 The heart of the issue, though, is whether you’re wronging anybody — um, not just harming; you can harm people without wronging them — but whether you’re wronging them. And so one way that people need to think about their particular use cases is: if we go to product development using this model, who are we going to wrong, if anyone, and are there ways of making it the case that we wouldn’t wrong those people? Um, and I just threw a lot out there, Pamela — I hope that was at least mildly helpful; if not, ignore me, or we’ll have a one-on-one. Great.
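As a small illustration of why the “dozens of definitions” problem bites, the sketch below computes two common group-fairness criteria on invented toy decisions; the numbers are purely illustrative. A classifier can satisfy one criterion while violating the other whenever the groups’ base rates differ:

```python
# Minimal sketch: two statistical fairness criteria that pull in different directions.
# All data is invented for illustration; group 0 and group 1 are two demographic groups.
import numpy as np

group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
y_true = np.array([1, 1, 0, 0, 0, 1, 1, 1, 1, 0])  # actual outcomes; base rates differ (0.4 vs 0.8)
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 1, 1, 0])  # a perfectly accurate classifier's decisions

for g in (0, 1):
    mask = group == g
    selection_rate = y_pred[mask].mean()           # demographic parity compares these
    tpr = y_pred[mask & (y_true == 1)].mean()      # equal opportunity compares these
    print(f"group {g}: selection rate = {selection_rate:.2f}, true positive rate = {tpr:.2f}")

# Output: equal true positive rates (equal opportunity holds), but selection rates of
# 0.40 vs 0.80 (demographic parity fails) -- simply because the base rates differ.
```

Which, if either, of those criteria captures a genuine wrong in a given product is exactly the qualitative judgment Reid is arguing cannot be read off the math alone.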

21:07 Thank you, Reid. Um, Brian, I wanted to give you a chance to chime in, either on the question that Swathi was answering about, um, ethical problems in model development and supervision, or on, um, Reid’s discussion of use cases and the institutions that provide for fairness. Right, they both — Reid and Swathi — had really good answers, so I just have a little bit to add to that. And, uh, of course, the bias issue is something that people have been talking about for years. So, uh, bias is absolutely a problem and it’s going to continue being a problem.

21:43 Um, and as Reid was saying, there are many different definitions of fairness, right? So if you’re getting unfair, uh, bias in a data set and it’s coming out in the model and it’s wronging people, then you have to figure out in what sense it is unfair. Is it too many false positives, too many false negatives? Are we having, you know, different types of negative effects coming out of it? So that’s certainly a major issue. Another problem that I’ve been hearing about from people that we’ve been working with is the problem of off-target use of models. So you get your machine learning model and it’s designed for one setting, and then you say, oh, let’s just start using it over here instead, and that ends up with a whole bunch of problems associated with it too, and, uh,

22:27 you know, negative outputs and all sorts of, uh, problems associated with it. So that’s on the model side of things. And also, um, very often a company will create these sorts of products and give them to the customer, and the customer doesn’t know exactly what the parameters are within which they should be using this model, and that can sometimes lead to this. So there’s definitely communication that needs to happen between developers in tech companies and customers, so they know what the limitations of the model are, where they should be using it, and those sorts of questions, right.
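One lightweight way to notice the off-target use and drift problems described here is to compare what a model is seeing in production against what it was trained on. The sketch below uses a Population Stability Index check on a single feature; the threshold, data, and feature are illustrative assumptions only:

```python
# Minimal sketch: flag when production inputs drift away from the training distribution,
# e.g. because a model is being used off-target. Threshold and data are illustrative.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a training sample and a production sample of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_counts, _ = np.histogram(expected, bins=edges)
    # Clip production values into the training range so every value lands in a bin.
    actual_counts, _ = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)
    exp_pct = np.clip(expected_counts / len(expected), 1e-6, None)
    act_pct = np.clip(actual_counts / len(actual), 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=10_000)   # what the model saw
production_feature = rng.normal(loc=0.8, scale=1.3, size=2_000)  # what it sees today

psi = population_stability_index(training_feature, production_feature)
print(f"PSI = {psi:.3f}")
if psi > 0.25:  # a commonly used rule of thumb, not a universal standard
    print("Significant shift: re-validate the model (and its ethical review) before trusting it.")
```

An alarm like this only tells you that the world has changed; deciding whether continued use is still appropriate is the conversation between developers and customers that Brian describes.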

23:04 Just to add to that, I think it’s the whole life cycle of the model. So initially you might start off from a particular data set, but, as you know, the machine learning algorithm is learning continuously as the data keeps increasing. So it might start as one thing but become something else entirely as additional data sets keep getting added, right? So, as the model drifts, new ethical issues may arise. Thank you, um. I’m aware that we have just eight minutes of our allotted time left, so I want to skip to one of our last questions, which is that, um, those of you in the audience and all of our panelists are, to some degree, already converted, right? We are here; we’re people who care about data ethics.

23:53 We care about bias mitigation; we care about non-discrimination. Um, as folks in the audience, and as you’re working with your clients or your companies to encourage data ethics and encourage proper procedures, what advice do you have for people, um, to aid them in getting buy-in, um, for data ethics initiatives, programs, governance, and related issues?

24:24 So I was just talking to someone at a company just a few minutes ago, um, where they said the way you get buy-in is to make it, you know, count in dollars. Say: look, we’re making a product here. We want the product to work, we want it to help people, we don’t want it to have negative effects, we don’t want to have blowback from it that’s going to make us look bad. Ultimately, if you talk about making better products that help people more, then that’s where you get buy-in. Um, it’s great if you can have an entire culture around that, which is much deeper, right? But ultimately, if you’re trying to just get a good effect, then that’s what you need to argue, and that’s a consequentialist form of reasoning, if you want to talk about ethics. But if you can get something much broader than that, where the corporation is working for social benefit and those sorts of questions, you can do that also, and it strengthens the way the culture in your company works. But if you’re working at the bare-bones level and you just want to get a good output, then that’s the argument that you can make.

25:24 Thank you, Brian. So one of the things that I think is really important — Brian’s absolutely right: if you can’t appeal to the bottom line, then it’s going to be phenomenally difficult to get a private company to, um, take this stuff seriously. So you have to connect it to the bottom line. As Brian has articulated, there are loads of ways of doing that, both in terms of providing a better product, reputational risk, uh, you know, the possibility of getting investigated by regulators, as a number of companies are now for at least allegedly biased algorithms. So you absolutely have to appeal to that.

25:57 The other thing that I think is crucial, that doesn’t get talked about as much, is that you need the buy-in of the developers, the data collectors, the engineers, et cetera. Now that’s really hard if you have a certain kind of mindset around ethics that the vast majority of people do, which is that ethics is just subjective — it’s squishy, it’s just a matter of opinion — um, which is a sort of loose way of saying it’s BS, you know, it’s just whatever you think it is. Um, and you know, as you know, Ben, I was a philosophy professor for, you know, a long time.

26:30 The better part of my career has been spent as an ethicist, and what I found in decades of conversations with people is that if you can’t get them to see ethics as something other than just subjective and wishy-washy and squishy and just a matter of opinion, then you can’t get them to really engage in ethical conversation seriously. So one of the things that I think is absolutely crucial is to drive home that — I mean, you could say that ethics is objective, or at the very least you can say that the arguments for why most people think it’s subjective are just horrendously bad arguments. And luckily engineers, for instance, like arguments — they like hard-nosed arguments, they like, you know, a kind of intellectual battle — and there’s an actual intellectual battle to be had there, which is, frankly, relatively easy to win. Um, and it’s crucial, because if engineers think that it’s just fuzzy and squishy and soft, then they’re not going to take it seriously, and so part of the buy-in is getting them to take ethics itself seriously.

27:27 Yeah — and I would say I concur with a lot of the stuff that Reid and Brian have mentioned, but as an engineer, there are a few of us who still believe in evaluating the ethics of what we develop, design, and deliver. But again, I would like to divide this conversation in two. One is product-based companies — so you’re talking about, um, products like, yeah, Facebook having secondary effects, or any other app for that matter, any product coming out of Silicon Valley. But second is also what solutions are being implemented, whether in federal agencies or commercial organizations. So I’ll take the example of federal agencies. If the Centers for Medicare and Medicaid Services wants to investigate, um, fraudulent doctors — okay, because that’s the business we are in, my company is in — we currently use statistical modeling and we are experimenting with using machine learning.

28:23 We definitely have to think about whether the machine learning algorithm is going to be biased and have bad ethical outcomes for healthcare providers who might be unnecessarily investigated because a recommendation was given by the algorithm. Um, so thinking through those use cases and outcomes is very, very important, and I think that’s where the conversation has to happen. One, it’s the responsibility of a good data scientist to bring those outcomes to the forefront; two, it’s also on the committee of stakeholders to have that broader conversation. As I like to say, I came from the world of software development where we didn’t care so much about these topics, but now, um, AI is more than engineering and data science.

29:12 You need to have the philosophers in the room and the neuroscientists in the room to have those conversations. And secondly, yes, talking about the bottom line, I think the very fact that a revenue line of business like facial recognition technology was shut down by IBM shows that “fast and furious” in the world of AI is not a long-term strategy. A whole line of business has been shut down, because you might not build a robust product in the long run if it’s coming under investigation by regulators.

29:43 So yes, there is an impact both ways, and, uh, being a socially responsible organization is a thing in the world of Black Lives Matter. So I think for more organizations it’s not just the bottom line; it’s how they can brand themselves. It’s a matter of actually becoming more socially responsible as well. Thank you, Swathi, and in the final minute I’ll just, um, riff a bit off of the question from Bruce, who linked to Google’s commitment to racial equality.

30:14 Um, we’ve seen lots of this from large corporations — putting out, you know, tweeting black boxes for Black Lives Matter. So, is it real or is it ethics washing, in 30 seconds or less? It’s too soon to tell; we have every reason for thinking it’s just PR unless we can see the concrete strategies and tactics that they’re going to implement in the very short, mid, and long term. Yeah, I agree, and I think until their feet are held to the fire, it’s always a conversation. What is accountability? What are the policies and regulations? Those are questions unanswered, and it’s only going to amplify in the future.

30:55 So right now it’s narratives, conversations, and lots of frameworks, but I think this conversation will help us get there. We would like to be able to help hold organizations accountable, uh, but we need to get there and increase these conversations in order to get there.

31:15 Thank you, Swathi. Brian, do you want the last word? I would just say that, uh, with Swathi and Reid, I completely agree, and it’s really a question of: can they? I think there really are good intentions, right, and the question is how those good intentions actually turn into something concrete and practicable, and something that’s actually going to have an effect on the company. And it’s that transition from intentions to actions that can be very difficult, and it’s one of those things where, yes, they need to continue to have their feet held to the fire to make sure that something good actually happens.

31:51 Great, thank you, and we are at time. So if we could have a virtual round of applause for our panelists — Reid, Swathi, and Brian — thank you, and then I’ll turn the time back to Shilpi.

32:03 Oh, thank you, everyone. Thanks, Ben, for moderating — excellent moderation — and Reid, Brian, and Swathi, loved your discussion. I think we need to have a special conference just on this. You know, this is so fascinating: how can we bring frameworks and governance into actual practice? In all this ethical discussion there’s always the “building ethical technology” conversation, but the data piece is missing, because it doesn’t affect the bottom line, right? So if it affects the bottom line, like you were saying, that’s the best way to convince people and organizations to adopt those kinds of practices. So, great conversation; thanks for coming and chatting with us, and we’ll see you soon. Thank you for having me!

Panelists


Swathi Young

CTO, Integrity Management Services

Brian Green

Director of Technology Ethics at the Markkula Center for Applied Ethics

Reid Blackman

CEO at Virtue

Moderator


Benjamin Hertzberg

Director, Chief Data & Analytics Officer Research at Gartner