
Embedding Trust in All that You Do with Christina Montgomery of IBM

“AI can play a critical role in addressing important issues in today’s world”

Christina Montgomery

“Most businesses use more than 20 different data sources to inform their AI.”

~ Christina Montgomery



“Large businesses sometimes tap into 500 or more data sources”

~ Christina Montgomery


“I firmly believe that we can change the world with technology but we have an obligation to do so responsibly”

~ Christina Montgomery


The rapidly increasing use of AI has brought to light many ethical issues surrounding the use of this technology. Christina Montgomery, Chief Privacy Officer at IBM and co-chair of IBM’s AI Ethics Board, will discuss why trust is a key pillar in any AI strategy. Christina Montgomery is IBM’s Chief Privacy Officer and an IBM Vice President. As Chief Privacy Officer, Christina oversees IBM’s privacy program, compliance, and strategy on a global basis, and directs all aspects of IBM’s privacy policies. She also co-chairs IBM’s AI Ethics Board, a multi-disciplinary team responsible for the governance and decision-making process for AI ethics policies and practices. Prior to her current role, Christina provided legal support for many of IBM’s divisions, including spending more than 4 years as Secretary to IBM’s Board of Directors. Ms. Montgomery received a B.A. from Binghamton University and a J.D. from Harvard Law School.


00:00 Shilpi Agarwal:

AI DIET stands for Data and Diversity, Inclusion and Impact, Ethics and Equity, Teams and Technology. Let’s build a better AI world together, one that is cognitively diverse and inclusive in every way. With that, I would like to welcome our first guest, Christina Montgomery. Christina Montgomery is IBM’s Chief Privacy Officer and an IBM Vice President. As Chief Privacy Officer, Christina oversees IBM’s privacy program, compliance, and strategy on a global basis and directs all aspects of IBM’s privacy policies. She also co-chairs IBM’s AI Ethics Board, a multi-disciplinary team responsible for the governance and decision-making process for AI ethics policies and practices. Prior to her current role, Christina provided legal support for many of IBM’s divisions, including spending more than four years as Secretary to IBM’s Board of Directors. Ms. Montgomery received a B.A. from Binghamton University and a J.D. from Harvard Law School. Ladies and gentlemen, please welcome our keynote speaker for AI DIET World 2021, Christina Montgomery. And with that, let me welcome Christina to the stream. Hey Christina, welcome!

Christina Montgomery: Hi Shilpi, thanks so much for having me today.

Shilpi Agarwal: Our pleasure, and thank you for being here.

Christina Montgomery: I know my team said that they couldn’t hear me. Could everyone hear the intro? I heard you fine. All right.

Shilpi Agarwal: I’m going to go backstage and the stage is all yours, Christina welcome.

Christina Montgomery: Great, thank you. Yeah, thanks Shilpi, and thanks to everybody who took the time out of your day to join us. Today I’m going to talk to you about the importance of trust, and why trust is important with respect to AI.

If you’re attending this event, you are well aware of how pervasive AI is becoming in our daily lives. It has spread across all industries and become embedded in everyday business; in fact, three out of four businesses are exploring or implementing AI, and there’s a significant amount of investment that businesses are putting into AI. A key reason for the increasing use of AI is the growing amount of data, massive amounts of it, structured and unstructured, being generated by businesses and individuals every day from an expanding set of applications like websites and mobile apps, and devices like wearables, appliances, connected cars, manufacturing equipment, cameras; the list goes on.

“A recent survey found that most businesses use more than 20 different data sources to inform their AI, and large businesses sometimes tap into 500 or more data sources”

Christina Montgomery: Considering all of this information, and the means to connect, process, and uncover insights from it, it’s easy to see that AI is changing the way we do business. But the effect of AI goes beyond business applications.

“AI is opening the door to create innovative solutions with the potential to solve some of society’s biggest challenges”

Christina Montgomery: For example, using AI to analyze data, researchers at IBM and the Michael J. Fox Foundation developed an AI model that predicts the progression of symptoms in Parkinson’s patients, in terms of both the timing and the severity. Magnavox, a non-profit based in Paris, built a platform with IBM that uses AI to scour information from trusted sources to monitor the world’s forests and the rate of deforestation, and to act on those insights before further damage is done. And finally:

“Working with the United Nations Environment Programme and the Wilson Center, IBM has demonstrated how digital technologies can help to monitor marine pollution and fundamentally change how data analysis and insight are accessed, to enable more informed decision making”

Christina Montgomery: AI enables organizations to implement global strategies around big-picture issues. For example, IBM’s newly released Environmental Intelligence Suite allows organizations to plan for and respond to disruptive events like weather, to avoid outages and ensure business continuity in real time across their operations around the world. It’s powered by AI that integrates weather data with business operations and augments decision making with enterprise processes to address sustainability, resiliency, and climate change challenges. Innovative solutions like this demonstrate that new technologies like AI can play a critical role in helping address important issues in today’s world. But AI presents unique challenges.

Christina Montgomery: First, AI, as we mentioned, is made possible by vast amounts of data collected every day, but often there’s a lack of transparency around data collection. It can be unclear to consumers who has access to their data and how it’s being used and protected, even what data is being collected. Second, there’s a pacing problem around new technologies like AI:

“In some cases new technologies are being created and put into use so quickly that regulation can’t keep up. That leaves it to companies and governments to impose their own guardrails around technology development and deployment”

Christina Montgomery: The combination of these factors presents the very real risk of eroding public trust in technology, and if society doesn’t trust the technology, we won’t see it being used and deployed, stifling innovation. For these reasons, it’s essential that we lay a foundation today that ensures new technologies put people first and that their benefits are felt broadly across society.

Christina Montgomery: At IBM, we align ourselves to what we call our Principles for Trust and Transparency. At a high level, these are that AI should augment, not replace, human intelligence; that data, and the insights generated from that data, belong to their creator (in our case, that’s our clients); and that new technologies, including AI, should be transparent and explainable.

We impose firm guardrails on our own use of technology and data, and we advocate for others to do the same through concrete, actionable policy recommendations. Having a set of principles is one thing, though; the real trick is operationalizing them, and you have to work to infuse them into your practices across the company.

Christina Montgomery: Essentially, your values must be your practices. We align our practices around the principles I mentioned earlier in a set of pillars around AI ethics. These pillars, the fundamental properties we view as required for trustworthy AI, provide an intentional framework for establishing an ethical foundation for building and using AI systems, and they’ll be familiar to many of you.

“Essentially, they are explainability, fairness, robustness, transparency, and privacy, and the principles and the pillars together demonstrate IBM’s long-standing commitment to taking an ethically conscious approach when bringing new technologies into the world”

And we embed ethical thinking throughout IBM via the AI Ethics Board, which I co-chair along with IBM’s AI Ethics Global Leader, Francesca Rossi. IBM’s AI Ethics Board infuses the company’s principles and ethical thinking into business and product decision making. It provides centralized governance, support, and accountability, while still being flexible enough to support decentralized initiatives across our global operations.

“And that’s no small thing when you remember we have about 340,000 employees operating in more than 170 countries around the globe”

 

Christina Montgomery: The board provides two-way engagement, promotes best practices, conducts internal education, and leads our participation with stakeholder groups worldwide. In addition to the board, we have a network of AI focal points embedded throughout the business. The focal points help ensure that our approach to AI ethics is both top-down and bottom-up, instilling a culture of trustworthy AI throughout IBM. The board and its associated governance are just one of the ways that we embed trust in our operations.

“At IBM, they’re a mechanism by which we hold our company and all IBMers accountable to our values and our commitments to the ethical development and deployment of technology, but no company can do this alone”

Christina Montgomery: We advocate for policies that are consistent with those values. We’ve been very public in our positions, for example, to firmly oppose the use of technologies, including facial recognition technology, for mass surveillance, racial profiling, and violations of basic human rights and freedoms; in fact, really any purpose that’s inconsistent with the principles of trust and transparency that I mentioned earlier. When our CEO, Arvind Krishna, announced that we would no longer offer general-purpose facial recognition software products, we were the first major tech company to do so, and others soon followed. We also partner with leading minds around the world to advance important ethical dialogues. The efforts we’ve seen at a global level, from the EU High-Level Expert Group on AI to the Vatican’s Rome Call for AI Ethics, are laying a clear path for AI that’s deployed responsibly and ethically, and they’re all efforts where IBM has played a leading role.

“Last year we partnered with the University of Notre Dame to establish the Notre Dame-IBM Tech Ethics Lab. It’s a first-of-a-kind cross-disciplinary research initiative focused entirely on driving best practices in technology and AI ethics and on shaping relevant public policy”

Christina Montgomery: Our CEO is co-chairing the World Economic Forum’s Global AI Action Alliance, a platform that will focus on moving beyond principles, one that will convene multi-stakeholder dialogues, engage in real-time learning, and help scale best practices, all in support of trustworthy AI.

As part of that alliance, we’ve committed to sharing our governance approach to AI, those concrete steps that we take as a company, and we’re hoping that will help guide best practices in the space as well. Internally, we’ve launched toolkits like AI Fairness 360, to help customers and other organizations check for and mitigate bias in AI systems, and AI Explainability 360, to help explain a system’s decision-making process.
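As a rough illustration of the kind of group-fairness check that toolkits like AI Fairness 360 automate, the sketch below computes the disparate-impact ratio by hand. The data, group labels, and the 0.8 rule of thumb are illustrative assumptions, not IBM’s implementation:

```python
# Minimal sketch of the disparate-impact ratio, one of the group-fairness
# metrics that toolkits such as AI Fairness 360 automate.
# All data below is hypothetical.

def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    outcomes:   list of 0/1 model decisions (1 = favorable).
    groups:     parallel list of group labels.
    privileged: label of the privileged group.
    """
    counts = {True: [0, 0], False: [0, 0]}  # [favorable, total] per side
    for y, g in zip(outcomes, groups):
        side = (g == privileged)
        counts[side][0] += y
        counts[side][1] += 1
    rate_priv = counts[True][0] / counts[True][1]
    rate_unpriv = counts[False][0] / counts[False][1]
    return rate_unpriv / rate_priv

# Hypothetical loan-approval decisions for two groups, "A" and "B".
outcomes = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(outcomes, groups, privileged="A")
print(round(ratio, 2))  # a common rule of thumb flags ratios below 0.8
```

A ratio near 1.0 means both groups receive favorable outcomes at similar rates; this toy data yields 0.25, well below the conventional 0.8 threshold.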

Most recently, IBM Research unveiled a toolkit called Uncertainty Quantification 360 that helps developers and data scientists quantify the uncertainty of machine learning predictions, with the goal of improving transparency and trust in AI.
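To make the idea of quantified uncertainty concrete, here is a minimal sketch of one simple measure, the Shannon entropy of a classifier’s predicted class probabilities. Uncertainty Quantification 360 provides far richer estimators; the probability values below are hypothetical:

```python
import math

# One simple uncertainty measure: the Shannon entropy (in bits) of a
# classifier's predicted probability distribution over classes.
# 0 bits = fully confident; log2(n classes) = maximally unsure.

def predictive_entropy(probs):
    """Entropy of a probability distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical softmax outputs for a 3-class model.
confident = predictive_entropy([0.98, 0.01, 0.01])
uncertain = predictive_entropy([0.34, 0.33, 0.33])

print(round(confident, 3))  # low entropy: the model is nearly certain
print(round(uncertain, 3))  # near log2(3): the model is guessing
```

Surfacing a number like this alongside each prediction lets a consumer of the model decide when to trust it and when to defer to a human.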

“And recognizing that the challenges of AI bias and explainability are bigger than any one organization can tackle independently.”

 

Christina Montgomery: We contributed these tools to the open-source community so that anyone can use them and improve upon them. Along with working with outside organizations and providing open-source tools, as a company we advocate for responsible technology policies with policymakers around the world. In addition to our position on facial recognition, we’ve called publicly for governments to strengthen trust in technology in a number of areas. For example, we support a strong, bipartisan national consumer privacy law in the United States, not a patchwork of different rules at each state level.

 

“Privacy protections shouldn’t vary based on where someone lives or from where you access the internet in the United States. The framework should be flexible enough for organizations to implement while driving accountability.”

 

 

Christina Montgomery: I’m sorry, I’m seeing something in the chat. Can people hear? Shilpi, should I stop, or?

Shilpi Agarwal: You’re fine, Christina, people can hear you. Great.

Christina Montgomery: Sorry, I saw the text traffic. On the AI front, we’ve called for precision regulation of AI. This is essentially a call to place the tightest regulatory and policy controls on technology end uses where the risk of societal harm is the greatest.

“In other words, regulate the use of AI, not the technology itself. In addition, we’re calling on governments to help strengthen the adoption of testing, assessment, and mitigation strategies to minimize instances of bias in AI systems”

Christina Montgomery: New laws, regulatory frameworks, and guidance for mitigating bias in AI systems are on the horizon, and if well crafted, these measures can provide industry and organizations with clear testing, assessment, mitigation, and education requirements to enhance consumer confidence and trust in AI.

“I firmly believe that we can change the world with technology but we have an obligation to do so responsibly.”

Christina Montgomery: We take that obligation seriously at IBM, and we’ve been a responsible steward of new technologies for more than 100 years. With the rapid pace of technology development, it’s even more important that we hold firm to our values, embed them in our practices, and advocate for policies that align with them. We consider trust our license to operate, and it’s incumbent upon us to continue embedding trust in all we do as we create and use these technologies to build a better and more prosperous future.

Christina Montgomery: Thanks for your time today, and I believe we have a little bit of time for questions.

Shilpi Agarwal: Thank you, Christina, great talk. Yes, we do have a few questions coming in. Audience, we are ready for your questions.

 

“Christina, I love that you said that with great power comes greater responsibility, and that at IBM you take this really seriously. I loved all the examples you shared”

 

So if a company was, you know, putting these AI governance structures in place, what would you tell them? How do they start? Where do they start?

Christina Montgomery: Yeah, I mean, I think it’s a little bit different for every company, and that’s the challenge that people have right now: to figure out how best to leverage the current business structure that they have in place to start thinking about these considerations. In much the same way, I know, from our perspective, a few years ago we were looking at concepts like privacy and security by design.

Christina Montgomery: So we think about it as ethics by design, from that perspective, just building upon the existing practices we have in place and ensuring that we’re embedding that perspective in day-to-day decision making. I also think you need to start with your foundational values, and then use the unique organizational construct in your own business to build upon and embed those values in the most effective way.

Shilpi Agarwal: Yeah, that’s true. So what is the best way to identify bias? Is it possible to identify bias at the data stage and at the algorithm stage, or is it only after the model has been deployed that you can actually figure out that there is bias in the system?

Christina Montgomery: Oh no, I think it’s throughout the life cycle. That’s one of the things that’s sort of unique about AI: you start with something, and what you end with once you’ve deployed it is often very different, because the model learns over time and trains on new data, and the data evolves. So we advocate for a life cycle approach to things like bias, starting from the very concept, when you think about the end purpose of what you’re trying to accomplish and what you need to train on for that, and looking at data sets from the beginning and throughout the life cycle of the product.

Shilpi Agarwal: Yeah, we have another audience question: at the lowest level, how is this implemented? If someone is at the development or deployment stage of a product, how does it go from top down to bottom, or is there any other way around it?

Christina Montgomery: Well, I think one of the things that we do here at IBM is this sort of ethics-by-design concept. I think it starts from the very beginning, when you’re thinking about, as I mentioned, what you are trying to accomplish and what data you need to accomplish that. It’s just having people start thinking about and being aware of these issues: here are the questions that you should ask, here’s what you should check for in your data, here’s what data you need to accomplish the end goal.

It’s sort of starting with an end vision, but, you know, backing up and looking at what questions I need to be asking at the beginning in order to accomplish that.

Shilpi Agarwal: Yeah, very true. Wow, we have a lot of questions.

So Alberto asks, what about Clearview AI and Amazon, who still sell facial recognition AI?

Christina Montgomery: Yeah, I mean, you know, I mentioned in my remarks that there’s often this pacing problem where regulation can’t keep up, and it’s kind of up to companies to set their own guardrails. I can speak to what we advocate for: you know, we don’t sell general-purpose facial recognition anymore because we don’t think the technology is ready. We think there needs to be a closer look at the use of facial recognition, particularly in the context of law enforcement, but other uses as well.

 

“That’s our point of view, and we’ve called on regulators to start taking a look right now at what companies are doing in that space, and Clearview in particular; there’s essentially no law, you know, stopping it”

 

Shilpi Agarwal: So it’s the guardrails that companies need to put in place to build trust.

Christina Montgomery: Yeah, absolutely.

Shilpi Agarwal: Bruce says, you mentioned a call to regulators to focus on the specific uses of AI. Can you speak more to the ask for regulation and what it would mean to those of us developing AI products?

Christina Montgomery: Yeah. By the way, for those who are interested, the IBM Policy Lab offers, as I mentioned, concrete, actionable policy recommendations; we have a number of papers in that lab.

There you can read in more detail what we’ve suggested in things like precision regulation for AI, or regulation for facial recognition technologies. But for regulators, what we’ve called for is an initial risk assessment.

“If you look at something very simple like facial recognition, something everybody will understand, right? You use it on your smartphone; it’s a one-to-one match, and the picture on your phone doesn’t leave the phone.”

 

Christina Montgomery: So no one else has access to it, you can choose whether or not to use it, and if you do use it on your phone, it just makes it easy for you to unlock it. But facial recognition used by a police force, with cameras on the street, is a wholly different use of that technology. What we’re suggesting is that it’s not the technology, not the ability to recognize a face, but the actual application, how you’re going to use that particular technology, that regulators should look at.

That’s, at a very high level, the distinction between the technology itself and the use of that technology.

Shilpi Agarwal: Sure, thank you, Christina. Michael McNair, in the education sector, has a question. I know we are a little over time, but these are some great audience questions. Do you have a few more minutes, Christina? Thank you.

In the education sector, what types of people should be a part of the data governance committee and AI committee? Should we start with consulting first?

Christina Montgomery: I mean, I’m not working within the education sector myself, so it’s kind of hard for me to answer. But I would say, just as a general premise, when you’re thinking about this, we’ve been very active in advocating for multi-stakeholder dialogues, right?

 

“So if you’re talking about the appropriate use of technology and data, have a diverse, cross-disciplinary group of people from different backgrounds”

Have stakeholders as well, from those who will be subject to regulation to those who will be the subject of an AI application, and those who work on it, from vastly different backgrounds. Diversity is really critical because you get lots of different perspectives that you might not otherwise experience or take into account.

 

Christina Montgomery: I think of our own ethics board, for example: we have a very cross-disciplinary board, with people representing pretty much every business function within IBM, from lots of different backgrounds, some technical and some not. We feel that that really brings us lots of different questions, because, you know, one pillar alone isn’t going to have the answers.

Shilpi Agarwal: So that diversity of background is really important. We’ll just take one more question, the last question for Christina. Thank you, audience, for the questions, and thank you, Christina, for spending some extra minutes with us here today. Alberto Roldan asks, what’s the quantitative measure of trust and transparency?

Christina Montgomery: I don’t know if I can answer these questions quantitatively. Yeah, I mean, look, every day we ask questions about what we’re doing internally and the like, and I don’t think there’s anything mathematical. A lot of these are really hard questions, which is why I feel it’s really important to firmly adhere to some north star. For us, that north star is the principles, right?

So you take a hard line about some things, but often you’re in a grey space, right, where you have to make a call, and being able to point to those principles has been something very valuable for us. But a quantitative measure? I don’t think there is one. Trust is really what it comes down to, in many respects, right?

“You’ll be able to see if technology isn’t trusted: people won’t want to use it, and businesses can’t use it because their customers won’t trust it. That’s where the measure is. That’s one way to measure it.”

Shilpi Agarwal: All right, that’s all the time we have for Christina today. Thank you so much for being with us, fabulous talk. And thanks to Kevin and Aaron also for helping us coordinate all this and getting us Christina. Thanks again, Christina. Take care, bye-bye.

Christina Montgomery: Thank you, bye.

 

 

Christina Montgomery, Vice President & Chief Privacy Officer, AI Ethics Board Chair at IBM

Christina Montgomery, Vice President & Chief Privacy Officer, AI Ethics Board Chair at IBM was our first Day 1 Keynote Speaker for DataEthics4All’s AI DIET World 2021.


DataEthics4All hosted AI DIET World, a Premiere B2B Event to Celebrate Ethics 1st minded People, Companies and Products on October 20-22, 2021 where DIET stands for Data and Diversity, Inclusion and Impact, Ethics and Equity, Teams and Technology.


AI DIET World was a 3 Day Celebration: Champions Day, Career Fair and Solutions Hack.

AI DIET World 2021 also featured Senior Leaders from Salesforce, Google, CannonDesign and Data Science Central among others.

For media inquiries, please email us at connect@dataethics4all.org

 

Come, Let’s Build a Better AI World Together!