
The Impact of AI on Society: A Panel Discussion.

❝We have to uphold humanity's values❞ ~ Rachele Hendricks-Sturrup

❝We can't know where we're going unless we know where we've been❞ ~ Rachele Hendricks-Sturrup

❝20% of people predicted to commit violent crimes actually went on to do so❞ ~ ProPublica.org

❝Black defendants were 77% more likely to be pegged as at higher risk of committing a future violent crime❞ ~ ProPublica.org

❝We are driving the extinction of cultures❞ ~ Mayank Kejriwal

❝Tribal cultures, other cultures where they don't have the computational resources or the technology is not there yet, we might see this inequity go up.❞ ~ Mayank Kejriwal

The Impact of AI on Society: A Panel Discussion.


In this talk, our exceptional group of panelists explores the impact of AI on society and answers the question: "What are the human values that you think we should protect over the next five years?" The talk explores how to build consensus on aligning human values with the values of artificial intelligence for the good of society: to protect people's privacy, control bias, uphold fairness, and prevent harm and discrimination before it happens. It also explores the importance of protecting the diversity of rich languages and cultures globally from extinction as we develop the AI systems of the future.


TRANSCRIPT

00:00:00 I had a real crisis last night because I was looking up all their LinkedIn profiles and trying to write a good introduction that would do justice to everyone, and it was not possible, because their achievements are two miles long and it would take me 35 minutes just to finish the introductions, and the panel is only 30 minutes long. So I'm just going to give a short introduction to each of our panelists.

00:00:44 This is a fraction of their achievements and accolades, so bear with me. I'm going to start with Professor Mayank Kejriwal. Did I get that right? He is a professor and the lead researcher at the Center on Knowledge Graphs at the USC Information Sciences Institute, with a focus on developing high-impact artificial intelligence technology for addressing important social challenges such as human trafficking and disaster response.

00:01:19 He has delivered talks and workshops at over 20 international academic and industrial venues, published more than 40 peer-reviewed articles and papers, was nominated for the Forbes 30 Under 30 in the science category in 2019, and is the co-author of a textbook on knowledge graphs (MIT Press). And that's a very short introduction of Professor Mayank. Next is Toju Duke, who is a rock star in her own right and a rock star at Google.

But that's not her official title, and I think we should petition Google to change that.

00:02:02 Her official title at Google is Project Manager for Machine Learning Fairness and Product Lead for Europe, the Middle East, and Africa, with over 15 years of experience spanning advertising, retail, not-for-profits, and tech.

She is also the Project Manager for Women in AI Ireland and the founder of Refyne, a tailored business and career coaching service for small businesses.

Toju has a keen interest in AI ethics and transparency, focusing on examining bias in datasets, and is an advocate for transparent and bias-free AI aimed at reducing systemic injustice and furthering equality.

00:02:49 She's also committed to AI for social good that acknowledges all backgrounds and cultures of society. The list goes on, but I have to shorten it, or we're not going to have any time left for discussion.

00:03:09 Yeah, that's what I feel. Next is Lucia DiNapoli, who has been an Assistant Dean at the University of Pennsylvania for the last three years, with a portfolio of projects and initiatives that drive excellence at the number-one-ranked school of nursing in the world.

00:03:30 She has 12 years of experience innovating strategies to improve everything from sustainability to climate change, and in 2019 she started her own consulting firm, Trajectory Consulting. As if that weren't enough, she holds several master's degrees and is now pursuing another master's in philosophy, and she is fascinated by the intersection of health, environment, innovation, and social justice.

00:04:01 Dr. Michael Akinwumi holds a Ph.D. in applied mathematics and is the Director of Risk Sciences and Analytics at Trust Science in Edmonton, Canada. He has a deep technical background that covers end-to-end development of scalable, custom machine learning solutions, and, of course, he's a peer reviewer for many journals, with multiple avenues of research intersecting society, economy, politics, and security.

And finally, Dr. Rachele Hendricks-Sturrup is a scientist, solutions architect, and health policy counsel, and the health lead for the Future of Privacy Forum's health and genetics working group in Washington, D.C. She also has 12 years of experience at the intersection of health information, security, privacy, and consumer understanding.

00:04:54 She holds a Doctor of Health Science, a Master of Arts in Legal Studies, and a Master of Science in Pharmacology and Toxicology. And that was the briefest introduction I could give to each of my esteemed panelists.

I'm really sorry that I could not fit all of your achievements in one shot.

I just want to get to the discussion very soon, because we have a very short amount of time.

00:05:24 So the title of this panel is The Impact of AI on Society.

00:05:29 I picked this title because it is so ubiquitous, so common everywhere; everybody wants to use it, and it's been the most commonly used title for research papers over the last five years, as everybody has written about it. I can't say that I have read everything, but I have read most of them, and whenever I try to understand what they are saying, it's either the positive impact of AI or the negative impact of AI, but they don't give us a concrete insight into how we can align human values with the values of artificial intelligence, and their role in society.

So there's always this consensus that artificial intelligence is good but needs safeguards to protect people's privacy and their values, and yet there's no consensus on what those values are.

So my question to all the panelists is: what are the human values that you think we should protect over the next five years? And if you had a chance to stand at the precipice of AI's rise, before it got to the place it is now, what would you have done? Any one of you can start, and then you can all go.

Maybe I will pick Dr. Rachele. Sure, absolutely.

I was actually going to volunteer to go first, so you and I were on the same page. That's a really, really good question, and I personally feel that if we're going to implement AI in any kind of way, we have to uphold humanity's values around three things, really. The first is the value of being seen, being heard, and being represented in a way that actually controls for any bias and also upholds fairness when AI is implemented across a multitude of settings, whether AI is used to allow individuals to become insured, receive job opportunities, or gain access to housing, transportation, those kinds of things. The second is values around lessons learned.

00:07:51 For example, we can't know where we're going unless we know where we've been, and that's really important to make sure that we halt any cycles of harm or abuse that came before us.

We want to make sure that AI is not perpetuating harm or abuse or even discrimination just because those activities occurred in the past; we don't want AI models to perpetuate that in a way that does not allow us to move forward as a society and as a civilization.

And the third is upholding values around governance, standards, and procedures that are evidence-based. That's also very important, to make sure that we all have common ground on how AI should be used, whether it should be used, and how to make sure that AI is not abused by various parties, that it's implemented fairly, in ways where all sides of the story can be heard and action can be taken in the best way possible.

00:09:03 I think those are wonderful, Dr. Rachele.

I want to add that when I was thinking about the question, I found myself considering the long-standing tension that has existed in society between humans and technology, and thinking about that friction.

Oftentimes it's due to not understanding what the technology is, not seeing its use, or the messaging around it. But I also wanted to consider the different lenses through which to think about the question of how AI is built.

When you talk about the actual algorithms: I've had the opportunity to speak with some of our theorists and computer engineers at Penn who really believe that AI is still at a very nascent stage.

Although it is ubiquitous, as Susanna said, and has found its way into energy, smart homes, healthcare, all of these areas, at the same time there is the building of it,

00:10:13 the embedding of these values, and then the issue of defining those values, which is our exercise today: which value do we truly believe is the most important or the most valuable to society, or crosses different peoples and different cultures?

00:10:30 And that made me think about two last points: the fact that our understandings are so centered around our own culture, different lenses, the locale in which the technology is being introduced. That made me land in the area of respect, in finding the definitional worth of how we define fairness and privacy. I landed on respect because I feel that if we begin with respect, which goes to your thought about being seen and represented, having more people of color as coders and engineers, and talking across different disciplines, I think we could make some headway.

Yeah, I would just like to add to what was just mentioned; especially, I'd like to focus on the aspect of understanding. One thing I've seen is that there is almost no person who doesn't have the desire to grow or improve his or her circumstances, if you actually explain what needs to be done.

00:11:33 And that's coming from explainability or interpretability of AI or machine learning algorithms. So I think the value that I would like to see pursued is transparency: being able to actually explain what individuals need to do if they want to improve their chances.

And maybe one good example that I like to give: take credit scoring. If your application for credit is denied and you have no clue as to why it's been denied, I think you have the right to know what you need to do so that next time you come around, you would at least have improved your chances of getting a loan. And this applies to different areas; just as Susanna mentioned, there are disparities and discrimination in our society, but as long as we drive our efforts towards making sure that the algorithms being used are explainable, even to a layperson,

00:12:59 I think we will actually be making progress towards having an equitable society.

But one thing I'd like to say is: I always reflect on what our society looked like before AI. Somehow, some way, we humans have cognitive biases; we make decisions and all of that, and all this biased data propagates through our systems. But thank God we have AI: the analogy I use is that of a mirror reflecting our faces, actually showing us, exposing, all these biases that we have in our data. But to Susanna's question about standing at the precipice of AI's rise, what would I like to change right now? I agree that AI is probably not mature yet, but all this biased data is still being used in the predictive space, things that we know are wrong.

We're still using it to make predictions that keep feeding back into the system, like feedback loops. So one thing I would like to see changed is that we actually move from predictive machine learning to prescriptive: whatever decisions you're making, if the algorithm you're proposing is going to contribute to our society, you should be able to prescribe to people what they need to do to actually improve their situation and circumstances.
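A minimal sketch of this predictive-to-prescriptive idea, in the spirit of the credit scoring example above: alongside a denial, the system searches for a small change to actionable features that flips the decision and reports that change as a prescription. This is not any panelist's actual system; the model choice, the feature names (debt-to-income ratio, on-time payment rate), the synthetic data, and the step sizes are all illustrative assumptions.

```python
# Counterfactual-recourse sketch: turn a "denied" prediction into a
# prescription for what the applicant can change. Everything here is
# synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic applicants: columns are [debt_to_income, on_time_payment_rate]
X = rng.uniform(0.0, 1.0, size=(500, 2))
# Synthetic approvals: likelier with low DTI and a high on-time rate
y = ((1.2 * X[:, 1] - X[:, 0] + rng.normal(0, 0.1, 500)) > 0.2).astype(int)

model = LogisticRegression().fit(X, y)

def prescribe(applicant, step=0.02, max_steps=50):
    """Nudge actionable features until the model approves, or give up."""
    x = applicant.copy()
    for _ in range(max_steps):
        if model.predict(x.reshape(1, -1))[0] == 1:
            return x
        x[0] = max(0.0, x[0] - step)  # pay down debt -> lower DTI
        x[1] = min(1.0, x[1] + step)  # build on-time payment history
    return None

denied = np.array([0.9, 0.3])  # a hypothetical denied applicant
target = prescribe(denied)
if target is None:
    print("No simple prescription found within the search budget.")
else:
    print(f"Denied at DTI={denied[0]:.2f}, on-time rate={denied[1]:.2f}; "
          f"approvable at DTI={target[0]:.2f}, on-time rate={target[1]:.2f}. "
          "The gap between the two is the prescription.")
```

The point of the sketch is that the output is not just a score: it is a concrete, actionable gap the applicant can close, which is what distinguishes a prescriptive system from a purely predictive one.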

I love that. I love that new take on it.

What do you think, Professor Mayank? Yes, so I agree with both things that were just said, which is that AI is very nascent.

I mean, I would not use the word nascent, but I agree that it's not there yet, and maybe that was the point. And it's because of the other reason that was brought up: these systems are not explainable.

I think the neural networks, a lot of the technology we have now, are not really explainable. It seems like the new AI systems are very good at prediction, but they are very bad at explanation.

So that's one of the problems. My takeaway, the thing I would really like to push in response to your question, Susanna, is that I think no person should be left behind, and we all agree on that.

But the one thing I would like to add is that no culture should be left behind either, and there's a very real danger that some cultures are being left behind.

For example, and since Susanna also mentioned that concrete examples are often missing in the literature, I'd like to give one concrete example.

One of the projects that I worked on for several years, and it's there in my slides, is disaster relief using AI and social media in developing countries, specifically developing countries, or parts of them, where the language is not English. For example, I come from the city of Calcutta in India, and there are many millions of people who speak Bengali in India, and in other countries, in Africa, you have very rich cultures where people speak different languages.

But unfortunately, what's happening is that a lot of the new AI tools, like natural language processing and so on, that you might have heard about, are designed for major languages like English. To convince yourself of that, you just need to go to Google Translate: you can translate French to English, but if you try to translate a language which many millions of people speak, you will not see a good translation, and for some languages, you might not see any translation at all.

00:17:07 So in some sense, we are driving the extinction of these cultures: tribal cultures, other cultures where they don't have the computational resources or where the technology is not there yet; we might see this inequity go up.

So I'm very concerned about this: we might be seeing the extinction of certain cultures accelerated through the overwhelming focus on certain languages, on a narrow range of what human civilization is capable of.

So I really hope that we can come together.

I think this is starting to get recognized in the community.
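A minimal sketch of the kind of spot-check Kejriwal's Google Translate example suggests: round-trip a sentence through machine translation and measure how much of it survives. The `translate` function below is a hypothetical stand-in for whatever MT service you have access to (not a real library call), and word overlap is only a crude proxy for quality.

```python
# Round-trip (back-translation) spot check for low-resource languages.
# translate() is a hypothetical placeholder: wire in any MT client that
# maps (text, source_lang, target_lang) -> translated text.
def translate(text: str, src: str, dst: str) -> str:
    raise NotImplementedError("plug in a machine translation service here")

def round_trip_overlap(sentence: str, lang: str) -> float:
    """English -> lang -> English, scored by crude word overlap in [0, 1]."""
    back = translate(translate(sentence, "en", lang), lang, "en")
    orig_words = set(sentence.lower().split())
    back_words = set(back.lower().split())
    return len(orig_words & back_words) / max(1, len(orig_words))

# With a real client wired in, one would expect high overlap for a
# well-resourced pair like French, and much lower scores for
# under-served languages, e.g.:
# for lang in ["fr", "bn", "yo"]:
#     print(lang, round_trip_overlap("Where is the nearest shelter?", lang))
```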

00:17:45 I'm trying to bring it up even more whenever I go to technical conferences: that we can start looking at everyone, not just English, not just Western cultures, but other cultures as well, because otherwise we risk leaving them behind.

Yeah, I think I do agree with you. The indigenous languages and the rare languages, we are losing them.

00:18:14 We are losing them rapidly; I found that something like two or three die per year. I was blown away, because that means we have lost them forever.

So that's really one value I think we should focus on, because language has been our mode of communication forever; it precedes everything else, even the visual.

So what does Toju say on this? Oh, she suddenly went offline, so maybe she's rejoining.

Well, while we wait for her, I'd like to add a comment along that same vein about culture. I think culture is super important, but I also think it's probably even more important to make the distinction between culture and practice, because they're obviously not the same things, and there are some practices within certain cultures that are not always fair, or not always to the benefit of those who live within that culture. And for that reason, I think it causes us to take a look in the mirror and think about what practices we don't want to perpetuate, even within our own cultures, within AI.

00:19:38 So it really forces us to take a hard look at ourselves.

00:19:43 To say: okay, we saw that some previous generations before us did things that are obviously questionable, or that we obviously might not agree with doing today, and I think every generation has to ask itself that question when thinking about the sins of the past. So, going back to my point about not perpetuating practices that can be perceived as harmful or discriminatory: that's extremely important. I believe in drawing on those lessons learned to make sure that we're not perpetuating stereotypes, biases, and other things that cause individuals to miss out on important opportunities that would be available if the AI weren't there in the first place.

I'd like to comment on culture too, but now that we have Toju back, maybe afterwards I can share one or two comments.

00:20:42 Yes. Toju, I would like to get your thoughts. Just before you left, Mayank was sharing about the lost languages and how that is one of the values, and Dr. Rachele was sharing about how we should not perpetuate the same biases that we had before AI came into existence.

So could you add your viewpoints on that? Yeah. I think for me, I was just thinking of most of the human values that we have, so this will probably tie in with the conversation that I missed. A few of them are love, empathy, compassion, acceptance, listening skills, and all of that, and usually when we apply human values, it gives us the ability to have ethical values, and ethical values could be justice, integrity, the refusal to be violent, the refusal to kill and murder, and all of that.

And if we apply that, if we hone in and I say, okay, I want to choose one particular human value: I feel all human values are important and I don't believe we should drop any, but if I have to hone in on one, I'll choose acceptance, and I think that's what Professor Mayank was talking about as well. So, acceptance, and where does acceptance lie in overall human society and human ethics and values?

So I look at acceptance as a human value, and in terms of the ethical side I think of justice, and I combine them together.

Acceptance, ideally, should be integrated into the ethical AI principles of any organization, any government, any company, any business. And let's assume that's been implemented: it's part of the ethical AI principles, and every ML engineer, every data science engineer, whoever is building the AI datasets, has it at the top of their mind that we have to build models that are accepting of everyone.

That means that those models and those data sets are diverse enough to include every culture, every ethnicity, every language, every sexual orientation, every gender.

Now, when we have that, and assuming we're applying it to justice, for instance: criminal justice systems are currently using AI, and if those systems were built on these principles, we would not see the problems we're seeing with criminal justice today.

00:22:58 We would not see a young man falsely accused of a crime he did not commit, losing his job, his whole life, his car, because facial recognition software got it wrong, all because it was built on the wrong values.

So, one of the recent research results that came out from ProPublica actually showed that one of these software tools used in criminal justice systems was only 20 percent correct, in that only 20 percent of the people it predicted would commit violent crimes actually went on to do so, and that Black defendants were 77 percent more likely to be pegged as at higher risk of committing a future violent crime. And these tools are already being used by a lot of law enforcement agencies today.

00:23:32 So, honing back in: just having basic principles, going down to the human values and ethical values, making sure that we build models that accept everybody, that have acceptance and justice included in them, for example in criminal justice systems, then I believe the AI of tomorrow will be a better system than what we have today.

So, going on: Toju's main point is that acceptance is one of the values that she would pick. I do agree with that; it would cover everything, right, to model acceptance into AI. And I see a lot of you here are actual AI experts.

00:24:15 So how do we model acceptance into an AI?

I think, if I may, before even answering the question of modeling acceptance: the way I see acceptance and cultural preservation, it's probably going to be difficult if you don't have a diverse team of people coming from different backgrounds and different cultures, working on the same projects but bringing different perspectives.

So I think that core piece is very, very important. Otherwise, we'll keep turning out algorithms that will actually drive some of these cultures or values into extinction.

Like Mayank, I was actually just recently working on a computational linguistics project, and I was trying to translate something into my own mother tongue, and it was a really, really horrible translation.

00:25:15 But I could imagine it's difficult, probably difficult for the people who are working on that project, if they don't have an understanding of other languages. So acceptance would also aid cultural preservation. That's one point that I'd just like to add.

00:25:29 Yeah, I'd like to add to that, Michael.

I think that one of the things that we've seen in other disciplines is the idea of interdisciplinary teams: having not only multicultural voices but multidisciplinary ones, so that you have ethicists working with the engineers, and AI specialists working with sociologists and psychologists and criminologists, and really understanding that the openness that we expect from AI, that acceptance, that level of human value, has to begin at the very beginning, at the acknowledgement of what the problem is that we need to solve, with all the voices represented. I feel like we keep coming back to that representation, and to how these systems are actually built from the very beginning; it is very difficult to course-correct afterwards, certainly when we have other systemic issues. In the US we have systemic and structural racism that is so pervasive that it's very difficult to get at the root causes.

What about interdisciplinary teams? Yeah, so what I've also seen in practice is that oftentimes there is a lot of pushback, resistance even, coming from the core data scientists, because usually the line of defense is: the evidence is in the data.

00:27:07 That's what the data is telling me. So even if, and I am on board with you that we should have interdisciplinarity on our teams, we could probably use one or two thoughts on how to address the issue of resistance or pushback from people who just want to focus on the "evidence" in the biased data.

This has been a really great discussion, and you are all bringing in so many points.

You've brought in culture, acceptance, criminal justice, and not perpetuating the same biases, and I see the overlapping concepts here. I actually think that if we could continue this discussion for another one or two hours,

00:27:49 we might actually have a solution; we might even produce a much better document than the Asilomar principles.

00:27:55 I know many of you may have heard of those, and I was very disappointed in them. But we've just run out of time; I think there wasn't enough time for us.

It's been so amazing to meet all of you, and it has truly been my privilege to talk with such experts and to be granted this opportunity by DataEthics4All.

So I'm very thankful to everybody, and I hope we can all meet on another panel and continue this discussion.

00:28:27 Thank you, everyone.

Thank you. Thank you for having us.

Thank you, everyone, it's been a pleasure.

Thank you, everyone. This was awesome.

Panelists

Product Lead Google | Tech Startups Mentor | Founder Refyne | Ethical AI | Women in AI Project Lead

Toju Duke

DataEthics4All DIET Champion 2021, a member of the DataEthics4All Think Tank Community, Google Product Lead for Europe, the Middle East, and Africa, Product Manager for Women in AI Ireland, Founder Refyne 


Lucia DiNapoli

Member of the DataEthics4All Think Tank Community, CEO of Trajectory Consulting & Assistant Dean, Office of the Dean at the University of Pennsylvania School of Nursing


Michael Akinwumi

Member of the DataEthics4All Think Tank Community & Director, Trust Science 

Health Policy Counsel, Scientist, and Solutions Architect in a Data-Driven World

Dr. Rachele Hendricks-Sturrup

Member of the DataEthics4All Think Tank Community & Health Policy Counsel and Health Lead for the Future of Privacy Forum's health and genetics working group

Research Lead at Information Sciences Institute

Mayank Kejriwal

Member of the DataEthics4All Think Tank Community & Research Lead at the University of Southern California Information Sciences Institute

Moderator


Susanna Raj

DataEthics4All Leadership Council, AI Ethics Researcher, Founder of AI4Nomads, Artist & Writer