
Accelerating AI Adoption: AI DIET World 2021

“Do we understand what responsible AI is?”

“Are we informing and educating all the people that might have an impact in the decision-making process?” – Diya Wynn

“I think the work that DataEthics4All is doing is so important, in exposing young people to STEM and technology and opportunities in artificial intelligence.”

Diya Wynn


“Deloitte… said that while organizations understand that there’s a risk, or understand that there may be a potential for bias and a lack of transparency, they really don’t have the skills in order to be able to address those issues.” 

Diya Wynn


0:00 Shilpi Agarwal:

Next up, we have Diya Wynn, Senior Practice Manager at AWS. Diya has worked in technology for 25 years, building, delivering and leading in early and growth-stage companies. She has a passion for developing current and emerging leaders, promoting STEM to the underrepresented, and driving diversity, equity and inclusion. After spending years in technology-focused roles, and separately pursuing passion projects in DEI, she now has a role marrying the best of both worlds. She leads a team focused on engaging with customers to go from principles to practice, operationalizing standards for the responsible use of artificial intelligence, machine learning, and data. Please welcome Diya Wynn! Hi, Diya, how are you?

1:17 Diya Wynn:
I am doing great. I’m excited to be here.

1:20 Shilpi Agarwal:

I’m excited about your talk on ‘How to accelerate AI adoption’. I’m going to give you the space.

5:00 Diya Wynn:

Greetings to everyone. As was introduced, I’m Diya Wynn, and I’m excited about having the opportunity to share a little bit today. Now, I would say I have a distinct pleasure in being on the stage with so many other amazing folks that have been talking with you over the last three days. But I also have a bit of a challenge, because we’ve had some amazing individuals come and I think I’m at the end of the day!

I just want to take a moment to share a little something with you that will hopefully give a bit of context for how I got here, and why this particular area is important. So, I have two boys, and I grew up in a family of girls in the South Bronx, between Harlem and New York City. Earlier in my years, I had not been exposed to technology; I had not had the privilege of seeing other people in my family who were in technology. What was traditional at the time was to see women who were teachers, maybe nurses. I got a computer early, in the third grade, and that actually was the catalyst that made me want to go into technology. It was that exposure, and I bring that up because I think the work that DataEthics4All is doing is so important, in exposing young people to STEM and technology and opportunities in artificial intelligence, and even exposing and elevating the awareness of folks that may not be as aware of some of the challenges and the things that we should be mindful of, or avenues for us to consider as we look to advance technology. I just wanted to bring that up because for me, when I started to think about my earlier experiences, I was looking at: ‘well, what do I do with my sons? How do I prepare them for what’s going to come in the future?’ And that’s really led me to where I am today. My background is in technology, I studied computer science. I was that third grader who kept my mind focused on technology even from the start, but I wanted to make sure my sons too are able to navigate this world, because we know that there are changes on the way that we will have to live with and adjust for, when we think about technology, artificial intelligence and robots being at the forefront of how we engage and shaping how we interact with the world. This is as much an inclusion, diversity, and equity problem as it is a technology problem.
So, that’s just a little bit of my introduction and why this is so important that we bring awareness and education and opportunity exposing individuals to more of what the world will look like and the opportunities we have in it.

9:14 Diya Wynn:

Now, since I’m here on the last day of this AI DIET world, I wanted to just give you a brief overview of some of the solutions that my team and I have created to help our customers at AWS in this journey to responsible AI.

Most of you are already aware of how much machine learning and artificial intelligence have been incorporated into everything that we do. Most of our industries are exploring and using artificial intelligence in some way. Just recently I bought some new appliances for my home, and now my Samsung refrigerator and TV and microwave all have ways of talking to each other and giving me alerts in different places. Those are some simple examples, but then we also know that organizations are starting to use it in ways that aren’t just research and development or proof-of-concept projects anymore; they’re really leveraging artificial intelligence in major ways to advance products and see greater efficiency in operations. And we know that they’re handling real-world tasks, maybe not as simplistic as my Samsung kitchen appliances, but more complex problems that we thought perhaps unachievable or unattainable in the past. And this increased use of machine learning and artificial intelligence technologies means that there’s an increased need for us to not only be aware of the benefits and potential risks, but also to be doing something about them.

12:00 Diya Wynn:

Companies and CEOs recognize that there is a potential for bias and lack of transparency, and we’re seeing this in other organizations, big and small. At the same time, there is a concern – and this came through in a Gartner study – that 85% of AI projects were expected to deliver erroneous outcomes due to bias in data, algorithms, and the teams that produce them. So what do we do then, if we recognize that more companies are going to begin to use artificial intelligence, more applications are going to have AI and ML infused into them, we’re going to be interacting with it more, and there are these potential risks? There’s another study as well (from Deloitte) that said that while organizations understand that there’s a risk or a potential for bias and a lack of transparency, they don’t have the skills to be able to address those issues. So, there is some level of awareness, and some people will question whether everyone is fully aware of where there may be an impact – but then, what do we do? Not everyone is sitting in circles like what we just saw over the last three days, where they’re talking about potential opportunities, and the ways in which we might look at navigating bias, or how we understand our data and look at more complete and robust options with our data. Not everyone is sitting in those environments, so what do you do when there is such great promise and opportunity in the ways in which AI is being used today?

14:03 Diya Wynn:

We heard some examples – when Vivian was talking about giving those who are autistic the ability to recognize expressions that they wouldn’t normally be able to, an amazing use case, right? Or in the case of Thorn, which is helping children that may have been pulled into child sex trafficking, helping to reunite them with their parents. These are use cases that matter, and we want to be able to have the technology at hand for us – so then what do we do?


I think when we think about enterprises and us wanting to be able to leverage the great benefits from the technology, we also need to make sure we’re not infringing on civil liberties, and that there is inclusive technology, with models that are being used in ways that respect our rights and are fair and inclusive. In the work that I get to do, and my team as well, we understand that this is not a simple problem. Some of the things we can do to address it may in fact be simple, but this really is a multi-disciplinary problem; not just looking at the skills or the expertise from certain industries, but also bringing to bear philosophy, law and the other social sciences to begin to address some of the issues and challenges, and create opportunity. I heard someone say that the other day: consider every problem as an opportunity.

16:12 Diya Wynn:

So, what my team and I got to do is create this framework to begin to address the gaps in skills in the enterprises and organizations we’re talking about, so that they’re not just aware and wondering what to do, but they actually have the skill and capability to address problems, and begin to tap into how we operationalize responsible AI. What things do we put in place to ensure that we’re looking around corners and seeing where we didn’t notice that certain individuals may not have been at the table, and that applications aren’t necessarily meeting the needs of some of our consumers?

AWS Ratio is our responsible AI framework that helps us establish principles so we can take an intentional and strategic approach, deliver good outcomes, and minimize unintended impact and risk. This is our answer for assisting organizations with building on their AI investments in ways that allow us to experience the great benefits that we get; tackling these challenges but also addressing opportunities or areas where there may be risk. Each stage in Ratio helps us highlight questions, primary considerations, and areas that we need to pay attention to in order to identify gaps or potential risks, and begin to address those with best practices, actually doing the work to put them into place in the organization.

18:28 Diya Wynn:

So we help our customers look at artificial intelligence and machine learning and the way in which they’re going to employ it in a responsible way. This for us is identifying a new operating model, right? Looking at people, process and technology, so that we can really drive better outcomes and reduce risk. Defining and executing that operating model goes across every stage of what we would say is the machine learning lifecycle, but this is really a part of a holistic AI journey. Do we understand what responsible AI is? Are we informing and educating all the people that might have an impact in the decision-making process – those annotating data, as well as our data scientists who are training models, all of whom are educated in and love their area – in ways that they actually can be accountable and responsible, and take part in ensuring that we’re delivering, building, deploying and operating responsibly?

19:40 Diya Wynn:

We’ve defined seven pillars as a guide to help us navigate through the areas to put in place the right things in order to design, develop, deploy, and operate responsibly. This will hopefully address ethical concerns when we assess this more broadly, and help us define players that have a responsibility, so they have clarity about where they fit in and when, and then give us a relative mechanism for being able to monitor and iterate that over time. And continuing to improve the process, because the ultimate goal is that we minimize the risk, reduce unintended impact and build in an inclusive way.

20:37 Diya Wynn:

Now, I talked about this, and we started off with this first piece or definition that said: ‘Accelerating business outcomes with AI matters. Raising the bar on responsibility matters even more. And Trust is at the centre’. The reason why I say that is because this conversation ends up being one about trust, and we’ve heard a little bit about that over the course of the last couple of days, as well. As human beings, we don’t trust things that we don’t understand. Companies need to be seen as trustworthy and as building trustworthy products, and the absence of that is going to have an impact on the company, people’s perception of their products and the technology itself, which ultimately has an impact on the industry and will inhibit our ability to leverage it to get life-changing solutions or solve problems. We want organizations to be building and operating responsibly so we can continue to solve real-world problems.

A great example of that would be that of the NFL and the way in which they’re using AI to find new ways to address player safety. I told you the example of Thorn with child sex trafficking or even someone that’s using AI to start to predict how they might help heart-related issues so that they can provide more preventative care. These are the ways in which we want to be able to leverage it, so we’ve got to build a foundation of trust. That notion has to start not just with the data sciences, or those of us that are aware, but really giving opportunity and responsibility to everyone in the lifecycle. That starts in a large part with our companies and our organizations that are building and deploying the technology in this way.

22:55 Diya Wynn:

So with that, I will say thank you, because I only had a little bit of time. Our solutions really go from discovery into production, and these are ways in which our team (which is a global team) can begin to partner with our customers and take them along this journey, going through the seven pillars and highlighting areas of opportunity for us to address to really build more inclusively, and ultimately have the foundation of trust that allows us to solve more, and to address more and more.

23:39 Shilpi Agarwal:

Completely agree with you, Diya. What a fantastic talk! We have to move forward with social good at our core in order to leverage data, technology and artificial intelligence to make the world a better place. Thank you so much!


DataEthics4All hosted AI DIET World, a Premiere B2B Event to Celebrate Ethics 1st minded People, Companies and Products on October 20-22, 2021 where DIET stands for Data and Diversity, Inclusion and Impact, Ethics and Equity, Teams and Technology.

AI DIET World was a 3 Day Celebration: Champions Day, Career Fair and Solutions Hack.

AI DIET World 2021 also featured Senior Leaders from Salesforce, Google, CannonDesign and Data Science Central among others.

For media inquiries, please email us at connect@dataethics4all.org


Come, Let’s Build a Better AI World Together!