Creating a Culture of Good Tech Ethics: AI and Beyond: AI DIET World 2021
“Include the voices of the many and not the few” – Betsy Geytok
“Almost anyone can be a stakeholder, but you have to give them a platform so their voices can be heard”
– Betsy Geytok
“I call it 360-degree reviews... Come up with a technology to try and solve the problem, pick it up and look at it from every angle... and invite others to come at it and really explore what you might be missing” – Betsy Geytok
“Passion is contagious and when combined with leadership, the equation is effective.”
– Betsy Geytok
The increased use of AI has highlighted the ethical questions associated with such technologies. IBM has adopted principles of trust & transparency, which have generated a movement within the company – including the establishment of an AI Ethics Board – to examine issues beyond the common concern of bias within the data. The ethical questions addressed frequently include an intersection of data use, privacy, and technology. Good tech ethics is beyond just a checklist; it’s a culture. Betsy Geytok is Vice President of Ethics & Policy in the Chief Privacy Office for IBM. She works closely with the IBM AI Ethics Board developing principles, practices, and policies driving the ethical development and deployment of technology. Prior to joining the CPO, Betsy was Senior Counsel for IBM and led a legal team providing business unit support for IBM global mergers, acquisitions, divestitures, IP partnerships, and other complex alliance transactions. She has provided legal support for various IBM practices & offerings including OEM Channel Sales, Cloud & Cognitive Systems, Power Systems, and the AIX operating system.
0:15
Hello, everyone. Good afternoon, good evening. Thank you for joining us in the afternoon session of day one of AI DIET World 2021. What a wonderful lineup of speakers we had in the morning, and excellent attendee questions. That was fun. We heard from some great speakers including Christina from IBM, and now I’m super excited to hear from Betsy, another great speaker from IBM. What do you know? IBM rocks, right? So let’s hear from Betsy. Let me introduce her. Betsy is Vice President of Ethics and Policy in the Chief Privacy Office for IBM. She works closely with the IBM AI Ethics Board developing principles, practices, and policies driving the ethical development and deployment of technology. Prior to joining the CPO, Betsy was Senior Counsel for IBM and led a legal team providing business unit support for IBM global mergers, acquisitions, divestitures, IP partnerships, and other complex alliance transactions. She has provided legal support for various IBM practices and offerings including OEM Channel Sales, Cloud and Cognitive Systems, Power Systems, and the AIX operating system. Wow, super excited to have you, Betsy.

Oh, thank you, Sofia. Thank you for inviting me. I’m very happy to be here. And it should be said, I am Vice President of Ethics and Policy at IBM, reporting to our Chief Privacy Officer. I’m going to start off by saying I’m going to share a lot of examples from IBM, but the thoughts and opinions here are mine. They don’t necessarily reflect the position of my employer, IBM, so if you’re upset with anything I say here, you can contact me. That said, hopefully you were able to listen to some of the speakers earlier today. Christina, my boss, discussed embedding trust in all that you do, and we heard about the importance of diversity and inclusion in data and analytics, which has some very nice parallels to tech ethics and creating a culture that embraces it.
And then we also heard from that panel regarding the importance of whistleblowers and encouraging people to speak up, so I want to thank those speakers and the panel for laying such a great foundation here. Now I want to pull on some of the threads that were laid out in those discussions and dive deeper into how you can operationalize the policies regarding the ethical development and deployment of AI and other technologies, and strive to create a culture that not only embraces the policies but advocates for the values behind those policies, and includes the voices of the many and not the few. So first, I want to point out that many people want to put tech ethics into the same box as other regulated areas related to business ethics.
3:44
I frequently hear it compared to privacy; I report into the CPO, and there are a lot of similarities, so people will compare it to privacy and the GDPR and all the regulations that followed GDPR. And then I’m hearing other people who assume tech ethics is just another flavor of behaviors to be regulated by anti-bribery and anti-corruption laws like the Foreign Corrupt Practices Act or Sarbanes-Oxley. Comparisons to those regulated activities aren’t wrong; there are a lot of parallels to be had, as we’re basically talking about a specialty or sub-specialty within business ethics. And we know that this is an area that will be regulated, and in fact, in some industries, it already is regulated to some extent. But I’ll also suggest that the regulations are not what’s going to drive the corporate culture; it’s values that drive the corporate culture. While regulations are more about what you have to do, values are more about what you do just because you know it’s the right thing. And while regulations frequently project or reflect our values, they also typically trail innovation and emerging technologies. So there’s a practical side to making sure that your values influence the design and development of these new technologies, and I’ll get to that in just a bit. While creating a values-based culture is complex, for today I want to focus on three basic concepts that can help drive and determine a corporate culture. The first is that this culture needs to have a strong foundation of values. Leaders can’t fake their way through tech ethics. Your employees and your customers are passionate about this topic, and as noted by many of the other speakers today, leadership needs to embrace that passion. That passion-driven leadership then needs to enable the stakeholders to speak up. And really, almost anyone can be a stakeholder, but you have to give them a platform so their voices can be heard.
And once they’re speaking up, the leaders need to engage in active listening, and further, actually listen with action. So let me try to explain why I really like these focus areas. As I said, for a company to build a values-based approach to managing its technology offerings, the leadership must embrace and advocate for those values. It’s imperative that voices from the top advocate for good tech and responsible innovation. In studying this topic of creating a culture, I looked back at some of the challenges faced by HR as they worked to create corporate cultures that embrace diversity and inclusion, and again, I found several parallels. Ted Childs, a former Vice President of Global Workforce Diversity here at IBM, once said, “Passion is contagious and when combined with leadership, the equation is effective.” And I find this to be really true when addressing tech ethics as well. People tend to be very passionate about doing the right thing, about improving life and striving to benefit the many. They don’t always agree on how to accomplish this, but they share a passion for it. Other speakers today have given you many examples of the amazing things we can accomplish with AI and other technologies. And you’ve also seen that people have a strong desire to avoid harm, or the pitfalls that can come if we don’t address innovation responsibly. Companies will lose the trust of their clients and increase the mistrust within society for these new technologies, and that can stifle their development and use. And I’m guessing you wouldn’t be here today if you hadn’t at least witnessed that passion for good tech, and now you’re either curious as to what it’s all about, or better, you have that passion yourself.
So move beyond the discussion of people in general who are passionate about tech ethics, and recognize that these people are your employees, they are your customers, and they are the people who will be impacted by whatever technology you’re bringing to the marketplace. With that in mind, if your leadership is not embracing the principles behind the responsible and ethical use of AI, you’ll have a very hard time creating a culture that also embraces those principles, and in my opinion, you’ll be creating a real disadvantage for your company in the marketplace. So those leaders who do embrace the values need to be able to draw others into the discussion and be the catalyst that sparks discussion and convinces others to always strive for innovation that benefits the many and mitigates the harm.
8:46
A topic as complex as tech ethics cannot be forged from the top alone. It involves determining what is beneficial, what is fair, and asking ourselves, are we doing the right thing? Yet ask 10 people these questions and you’re going to get 10 different answers. Answers will vary based on people’s economic, social, political, religious, and geographic backgrounds. Answers will also vary based on the person’s relation to the technology at issue. Are they a developer? Are they the user? Are they the seller? Are they the person who will be impacted by the result? With this many viewpoints and stakeholders, a corporation’s policies and processes need to be constructed such that they encourage multi-directional, multicultural conversations. In other words, you have to empower the voices so that the conversation flows both from the top down and from the bottom up. Just as the technology is constantly evolving, so too is the need to address the ethical use of that technology, and creating the ability to have this continuous feedback, and enabling the ability to adjust to changes in the environment, is key to sustaining the conversations and staying abreast of what the next issue will be. As you may have heard from Christina this morning, IBM has established an AI Ethics Board made up of leaders across the company. And underneath that board, we have a network of focal points that help the board to engage with all employees. These focal points are from different business units and different corporate functions, and they’re also sitting in different geographies, to try to create as much diversity and inclusion in our board and in our focal points as we can. Equally important, several community groups formed organically within IBM to address the questions of tech ethics. These are people who make connections across IBM. They were united in their passion for ethical issues, and they ended up creating real-world solutions with their crowdsourcing-like groups.
And this is no small movement; it includes hundreds of employees from over 45 countries.
11:11
At the same time that I’m seeing all of this passion and that spark of a grassroots movement within the company (and all of this networking was intentionally designed to create opportunities to be heard), I still find that people sometimes fail to connect the dots, and they fail to see how they can make a difference. And that’s where education and training play a critical role in opening up the doors for discussion. So, similar to what our diversity and inclusion team does, we spend time training IBMers on tech ethics and encouraging every employee to be an upstander and an ally, in terms you may hear in the diversity and inclusion arena. After one of these trainings, a developer who is a person of color reached out to me, and he said he’s taken all of the diversity and inclusion training that IBM offers, and he gets the need to be an upstander in the workplace, to speak out against any harassment, bullying, and discrimination that he might see. But he then confessed to me that he really hadn’t thought about it in terms of speaking out should you see an ethical issue around technology. He said, I put on my developer hat, and I’m focused on solving the technical problem before me, and I don’t always think to look at it in the context of fairness or bias. So this says to me that even if your company is wildly successful at enacting D&I policies, and even if you have other policies encouraging employees to speak up if they see something wrong, you have to make sure that you’re going that extra step to say that it’s not just about how people treat other people; it’s also about how technology will impact human life. And regardless of what your role is, we want to hear from you. So that’s one example. Another conversation I had following one of these sessions was with a data scientist who spoke up to me after the presentation, and he said, Betsy, we’re data scientists, we don’t know anything about ethics.
And so I asked him: Do you have a general sense of how to treat people fairly? Of what it means to include people of different backgrounds? And what it means to show respect for people and for their privacy? And he, of course, responded yes. And so I said, well, then you do know something about ethics. What it tells me is that people just need to know enough to enable them to ask good questions, and they need to feel confident in that knowledge. And that’s what this whole effort to promote tech ethics is all about. It’s about making space for people to be curious and to ask those probing questions that will really test if the technology is in fact creating a beneficial impact and if we’re effectively addressing and mitigating any risks. That said, the question I probably get asked the most is, can you provide us with a checklist? And I always caution against the use of a checklist. A checklist implies that once you’ve checked off everything on the list, you’re good to go; no further discussion is needed. At best, I can give you a list of considerations. I can give you a list of types of AI that frequently raise issues, and you can look at the draft EU regulation to construct a similar list. Or I can give you a list of design and implementation issues, such as: Has it been tested for bias? Can you explain how it derives recommendations? Are you being transparent about the accuracy or what data has been used? But when you get to the end of these lists, I’m still going to ask you: What have we not thought of? What is unique about this particular use case that might warrant further consideration?
15:19
This request for a checklist reminds me of something Ginni Rometty, the former CEO of IBM, has said many times, and that is: for growth to happen, you have to get comfortable being uncomfortable. A checklist brings you comfort. You know that feeling you get: I’ve completed all of the items that I need to do. But because these are emerging technologies, along with all the new possibilities come new problems that we can’t always or easily anticipate. And that requires you to leave your comfort zone and really tackle the tough questions. What I find resonates well with the technical communities that I work with is to tell them: I can give you a list, but it’s not a checklist. It’s a list that is really meant to drive your curiosity. I need you to stay curious about the full impact of the technology and the innovations you’re developing. And that’s something they can relate to and get excited about. So finally, once you have your stakeholders speaking up and actively engaging with you, you have to engage in active listening. People need to know that they’ve been heard, and when and where appropriate, they need to see that you will make changes or take other actions that demonstrate that your policies are not just a bunch of words with no meaning. A very visible example of that was when IBM announced it opposes the use of facial recognition for mass surveillance and racial profiling. But for every big headline you see like that, there are many other smaller actions that we take in response to someone raising an issue, someone asking those good questions. And we find that it’s actually pretty rare that we need to stop a project completely; it’s really more about fine-tuning it and putting appropriate guardrails in place. IBM has at least 10 formal avenues for somebody to speak up, and the Ethics Board is a forum for teams to bring use cases. But in my experience, some of the most impactful listening has come from one-on-one discussions.
So every time I lead an education session, I encourage people to reach out to me directly. I invite the conversation. And that brings me back to finding those passionate leaders in your company. Encourage the conversation and the feedback loops, and most importantly, stay curious. So with that, I will turn it back to you and open up the floor for questions. I seem to have a green screen; I don’t know why this slide suddenly decided to come up. Sam, can you take over?
18:14
Yes, no problem. Perfect, good, fantastic. So let’s take a look at some Q&A. Just looking at the audience: do we have any questions for Betsy? And on behalf of the organizers, thank you so much. I think it’s a really, really important point that you made earlier.
18:39
We need to involve a cross-section of groups and stakeholders to build ethical AI and technology, and leaders desperately need to be on board. Do we have any questions from the group? Let’s
18:55
See. Okay. I think we’re just waiting for a few to come through, Betsy. So
19:06
Just to give people time. Yeah. And I mean, building on something from Christina’s talk earlier, she mentioned it’s important to refer to the IBM Policy Lab and the work you’re doing there, and that trust is important. From your perspective, how do you build trust in that cross-section of individuals, and is there a formula, a winning formula, in your view? What is the best approach?

It’s more of a matrix than, I would say, a winning formula. And it really is about that two-way conversation. So you do have to have passion from the leadership at the top. I really agreed with Katherine in that talk earlier today about diversity and inclusion, meaning that you have to have buy-in from your leadership and they have to really be the advocates. But then you have to engage. We engage with our customers and solicit their feedback on what they’re looking for and what trust means to them. We engage with our partners, people that we have joint development agreements with; we engage with universities, policymakers. So we reach out to as many people as we can, within the community, within IBM, and outside IBM, and constantly keep that conversation going, because that’s the only way that you can even begin to predict what may be coming, both in terms of what is coming with the technology and what ethical issues arise from it, and get the true feedback and then decide how to act on it. You have to be able to pivot very quickly, and if you kind of let your foot off the gas, then you’re going to miss something. So yeah, it’s very much about holding as many conversations as you can. And I have found that one-on-one works: giving the leaders within the company who are passionate about it opportunities to engage in one-on-one conversations with employees. Some of our senior leaders, like Rob Thomas and Arvind Krishna, have Slack channels, and employees can reach out to them directly, and it’s actually valuable to open those doors. Right. Oh, great. Thanks very much.
One sec, let’s just check if we have any further questions. Just bear with us. Yeah, we actually have one from Susanna. The question is: how do we encourage engineers to think beyond the technical side of a problem when it comes to approaching it?

Yeah, so that’s pretty much the example I mentioned, where I was surprised when I got that question from a developer. You do have to do the education. IBM has had policies in our Business Conduct Guidelines for as long as I can remember, since I’ve worked at IBM, and they say: report any ethical issues you may see, and here’s how you can report them. And yet the engineers were not making the connection; they were only thinking about ethical issues of people interacting in the office. So you actually do have to go to the engineers. And of course, a lot of them get it; that’s where these grassroots communities came up, with IBM inventors who are really tackling these problems. But you have to go to the engineers and say: I don’t care what your budget is, I don’t care what your timeline or deadline is (and this has to be supported by management all the way up, which it is here), I want you to take off your developer hat and put on your personal hat. Think about how this technology is going to impact society. I know you had a good intention when you designed it, but you really have to step away. It’s kind of like how, if you’ve written a paper, it’s always best to turn it over to somebody else to proofread, and they catch things that you didn’t catch. I call it 360 reviews. So you come up with a technology to try and solve the problem, then pick it up and look at it from every angle you can think of, and invite others to come at it and really explore what you might be missing.
But I find you do have to engage very directly with your engineers and your developers, because they put on their scientist’s hat and they don’t always remember to look at it from a different angle.

A great point. Absolutely, I’d agree with that 100 percent. And often we find ourselves in roles, you know, so focused on what we’re doing as engineers and product managers
24:02
within our teams and as individuals that we need to get out and have those conversations, whether that’s on collaboration platforms or elsewhere. Brilliant, super. We’re at 25 past, Betsy; I’ve got time for just one more question, if I may ask you for the time. Okay, so we have another question. Let’s just see. I think this question comes from Alan. The question is: does IBM partner with other leaders in the tech industry space, and particularly in academia?

Yes, so that’s another good question. We do have partnerships with MIT, with Notre Dame, and with Stanford on the academic side, and likely other universities; we have many university partnerships, and those are the three that I engage with frequently, but there might be more that I’m missing. We have signed on to the Rome Call for AI Ethics, so we are engaged with the Vatican, and the Vatican is now expanding their program out to other universities, so there’s kind of an indirect reach that way. But then we also partner with many of our customers who are interested in similar ethical issues, and many of the partners that we sell with and create solutions with. So yeah, it’s multifaceted.

Yeah, wonderful, fantastic. Thank you so much. I’m just looking at the feed, because we did have a new question, but I think that might be the last one. Okay, thank you very much. Thank you, you’re very welcome. We’ll reach out. Fantastic, thank you so much. Thank you, brilliant. Speak to you soon. Take care.
DataEthics4All hosted AI DIET World, a Premiere B2B Event to Celebrate Ethics 1st minded People, Companies and Products on October 20-22, 2021 where DIET stands for Data and Diversity, Inclusion and Impact, Ethics and Equity, Teams and Technology.
AI DIET World was a 3 Day Celebration: Champions Day, Career Fair and Solutions Hack.
AI DIET World 2021 also featured Senior Leaders from Salesforce, Google, CannonDesign and Data Science Central among others.
For media inquiries, please email us at connect@dataethics4all.org