DataEthics4Allᵀᴹ Ethics 1stᵀᴹ Live: #10 MVP vs EVP – What should Startups focus on?

“There is a problem with that – that the minimally viable product is pushed so fast in the pipeline, that companies don’t even take the time to make sure that it is also ethically viable.”

~ Susanna Raj

“You’re now thinking about having the right team in place, having an ethics first mindset across that team and within the individuals themselves, and also looking for someone who can lead that team in ethics.”

~ Sam Wigglesworth

Talk Summary

DataEthics4All brings this Series of Ethics 1stᵀᴹ Live Talks for Leaders with an Ethics 1stᵀᴹ Mindset, those who put People above Profits. Come, Join us for this lively and informative discussion and food for thought on how to create an Ethics 1stᵀᴹ World: People, Cultures, and Solutions.

This week, in the 10th Episode of the DataEthics4All Ethics 1st Live, we discussed: MVP vs EVP – What should Startups focus on?

1. Minimal Viable Product vs Ethically Viable Product. What should be the primary focus of an Agile Startup?

2. How to build a Product with Ethics by Design?

3. How to build a Startup Team with Ethics INSIDE? Champions who put Ethics above Profits?

Come, Join us and share your perspective and best practices. If you’d like to be invited as a Panelist, please write to us before the show.

Transcription

0:39 Susanna Raj:  Hello everyone! We are back again for our weekly ethics first live talks. I’m here with Sam Wigglesworth, and Shilpi unfortunately could not join us today, but we are going to run the show, and Sam is going to introduce the topic. It’s a very, very relevant topic that is very close to my heart. So Sam, go ahead.

1:07 Samantha Wigglesworth:  Brilliant! Good evening, and lovely to see you again, Susanna. This is session 10, it’s January the 27th, and what we wanted to focus on this evening is looking at minimum viable products versus what is called EVP, and EVP is referring to ethically viable products. So we’re just going to discuss that this evening and what the importance of them is for startups in particular, and where we should focus our time. 

1:38 Susanna:  What is an ethically viable product?

1:46 Sam:  From my perspective, it’s moving away from the traditional MVP, where you want to get a team together as quickly as possible, produce an idea and pull an MVP together, a minimum product together, you’re now thinking about having the right team in place, having an ethics first mindset across that team and within the individuals themselves, and also looking for someone who can lead that team in ethics. 

2:15 Sam:  That’s what I would say. You want to focus on responsible AI as well, that’s the core focus when you’re building responsible applications. Yeah, making sure you consider everybody’s decisions as well, and everyone’s requirements, it’s really important for the user.

2:32 Susanna:  Yeah, definitely, and, you know, the minimally viable product has now been sold to everyone, and it’s a very lean, agile, framework in which to develop a product as quickly as possible – a testable product that you can push out to the public, and then the public can actually work on it, give you feedback on the failures and the flaws in your design, and you can quickly iterate, fix it and move on. 

3:01 Susanna:  There is a problem with that – that the minimally viable product is pushed so fast in the pipeline, that companies don’t even take the time to make sure that it is also ethically viable. 

3:19 Susanna:  To me, an ethically viable product is more important than a minimally viable product because in the long term, once the pilot is done, once the testing is done, it’s not possible for you to go back in time and fix what was originally a critical ethical flaw in your design. 

3:41 Susanna:  So, do you have any questions for us that we can answer from the audience?

3:49 Sam:  Let’s have a look. Let me just check if we have any questions.

3:57 Sam:  We don’t have any at the moment, but please go ahead and ask us. 

4:02 Sam:  Yeah, I think you’re right though, it’s a really good point that you were making that we do need to think about it from an ethical perspective and an EVP rather than a minimum viable product.

It’s becoming more common now to do that – to think about the company’s values, how the models that we’re training and building treat the end user and how we as a business are focusing on that as an element for development.

4:37 Susanna:  [unintelligible]

4:46 Sam:  Yeah, like you were saying you want to have an agile approach. It’s not that we’re dismissing that you should still go through the process of scoping your ideas, of strategizing, and planning and going through that development life cycle and testing, deploying, and then iterating and improving. 

5:09 Sam:  But fundamentally, from my perspective, and from what we’ve learned so far, it’s really about the data that you decide that you’re gonna train your algorithm, your machine learning model on, and understanding that and making sure that it aligns with your end user – that it avoids as much as possible those biases that we have talked about that creep into data and into training models. 

5:40 Sam:  That’s really important for me, and we can look at frameworks to help us with that, whether it’s our DataEthics4All pillars, or others that are more available internationally and in companies, but the fundamental thing for me is that we consider all users and all end users when it comes to the data itself and the model. Yeah, what about you? What do you think? What’s your view?

6:05 Susanna:  My view is the same. Right now there is a question, not within the field of ethics, but within the AI industry, that building ethics into the product by design is costly for businesses.

Susanna: Waiting for inclusiveness, making sure that you are hiring the right person, that the right people are in the room, and also that you are able to do market research for the target audience, making sure they are also included in the process – all of this costs money. So what do you say to that?

6:48 Sam:  Yeah, it’s absolutely true, I think you have to align your product, your EVP with a funding model, as well, that aligns with your values.

Sam: I think it’s looking for those individuals that are interested in helping invest and grow that, whether that’s within the company itself, or looking for funds. Yeah, so certainly doing your research in that area, whether you’re working on an application in ed-tech, or on an ethics framework, for example, you’ve got to look for those funding avenues, and they’ve got to align with your values and your mission. 

7:32 Sam:  We’ve seen in the articles that we’ve been reading, particularly the one from TechCrunch that Shilpi shared earlier, that a company can spend several years developing a product and invest millions of dollars into it – into both the development and the marketing and promotion of it when it’s finished – and if you don’t get it right that first time, like you said, and you don’t build it into the design from the base, then that’s it. It just takes one mistake, absolutely. We’ve seen it before, and we don’t want that. 

8:06 Sam:  So it’s about aligning the right funding avenues, and making sure that that is your priority, and that comes from the team and from the design.

8:17 Sam:  What do you think? What’s your view?

8:19 Susanna:  I don’t believe in that argument when people say it’s so expensive to build ethics into the whole design of your product. No, it is probably the highest-priority expense for your company at that stage; you should spend as much money on that as possible.

8:38 Susanna:  If it takes time, then you should take that time and build it ethically – hire the right person, wait for the right team to form, make sure that your team is inclusive, and make sure that, in all the discussions you have, the people for whom you are building the product are also in the room with you. 

8:56 Susanna:  So it is an expensive avenue, but I think that is the only way to avoid the pitfalls that we are seeing now. Also, going down the road after you make a mistake, after you see that it is no longer ethical, and then people are shouting at you and suing you and all of that – that is a costly endeavor, right? You might as well put the money in beforehand, into the product itself. So that’s what I think.

9:35 Sam:  Knowing when it’s ready and taking time is important, I agree. I think it helps if, from the very, very beginning, you have a team that understands data, that understands the importance of privacy, that understands how to protect security and users’ rights, and has that perspective.

Really important for me as well – and I’ve been looking at a couple of TED Talks about this recently, and we did have a talk on this last year during our Summit – we mentioned the importance of inclusiveness and understanding language, but it’s also about understanding culture. 

10:17 Sam:  If an organization can do that, and can make it a global application, thinking about the Global South – we’ve talked about the Global South before quite a bit.

10:28 Sam:  Sometimes we have a perspective and we need that to be global. We need to look at that from many different lenses, from a language perspective, and also from a cultural perspective. 

10:40 Sam:  That takes time, as well, and learning. Do we have any questions? 

11:12 Susanna:  Yeah, anything related to this topic of an ethically viable product, this building with ethics by design into the product life cycle. Do you have any questions for us? You know, ping us? Put it in chat, and we will answer it live for you here.

11:27 Sam:  I have a question then, and I think one of the questions we wanted to ask was, how do you build a team with ethics inside? So what is your view? How would you build a team up with ethics inside?

11:29 Susanna:  First of all, there are many different ethics frameworks out there, but the element most crucial to any product’s viability is to have inclusiveness built into it. 

12:04 Susanna:  You cannot have inclusiveness built into it when you don’t have representation of everyone at the table, everyone at every stage of the process, everyone who’s going to be impacted. The people who are going to use your product, as well as the people who are developing the product, should come from the same backgrounds, and different disciplines should come together – as should different races and ethnicities, and people who speak different languages. Even the same word can have a different meaning or a different connotation in another language. So you don’t know all of that unless you build a team that is more inclusive.

12:52 Susanna:  So to me, inclusiveness is the first pillar of building an ethically viable product. And if you cannot build an inclusive team as of now, wait.

Susanna: Wait until the right person is there and don’t rush into it. That waiting is not considered viable in the industry, but to build an ethically viable product, you must wait. That is what I think. I think the next step is that it should have inclusive use cases that benefit everyone, everywhere. It cannot be just your product. Let’s say you’re selling something like a device that is going to benefit only a certain group of people,

Susanna: you need to think about how that device will be used against another group of people. So it needs to be inclusive.

These use cases are another area that I think we should have in the design itself. Are there any other things that you think should be there at the design stage?

13:57 Sam:  Yeah certainly, I think as developers, and designers, we need to come from a perspective of knowing that we need to be agile in the way we approach products and developments. But before we get to that stage, like you said, we have to take our time, we have to understand the importance of knowing and being transparent with the technology that we’re building, and understanding how the processes of the algorithm operate, and the data that’s been captured. I keep saying that, I think that’s really important – from my perspective, from my background and from the work I’ve been doing, I think that’s really key. 

14:38 Sam:  Understanding the phases of development, but getting those people involved from an early stage, like you said, aligning them with the values of the company, and treating people that are part of that journey with fairness and respect and rights to privacy as you go along. 

14:58 Sam:  I think also you’ve got to think about, like we said, safety and security, and that it operates efficiently for everybody, and that we’re accountable for any outcomes. Like you said, we often find that, in the past, we’ve gotten ahead of ourselves and developed products that are very functional and well designed for users, but we haven’t often thought of the consequences. So we need to think about that first, before building – just know the pitfalls first and learn from that. 

15:29 Susanna:  Yeah, you have to think about all the use cases. I know they say that we didn’t have the imagination to think of how facial recognition would be misused. No, not really: you had the imagination to build the facial recognition tech, so you can’t say you did not have the imagination to think of its evil use cases. We must have the imagination to ‘think evil’ as well so that we can avoid being evil.

16:03 Susanna:  You have to think, how will this be used against an individual? Or against another country? What about countries that don’t believe in democracy? How will they use it?

16:15 Susanna:  We didn’t think of all of those things, we launched facial recognition technology out there, and it is now being used in countries that don’t believe in democracy against their own people. So those are the use cases that you have to think about and all of this thinking takes time, and that is what it means to build an ethically viable product rather than a minimally viable product. A minimally viable product is easy to build and very fast, but an ethically viable product takes a longer time.

16:45 Sam:  It is an investment, yeah I agree. I think what you mentioned earlier about having use cases for individuals or even countries that look at risks, and how it might be used against that particular group or individual is really important. Especially when it comes to the newer, more advanced technologies that we’ve seen in voice and facial recognition, I think both of those areas, those avenues are worth exploring.

Sam: I also found, from the research, that we often build products that are not accessible, so we need to think about accessibility.

We’ve talked about that a little bit before, but I think that’s really important for those that have multiple special educational needs, for example, that maybe need not only to read but also to have good audio, and good visuals of websites or applications. 

17:51 Sam:  I think we’ve got some questions coming up, let’s go into the feed. Okay, it’s not that busy on the comments feed, but I think we’ve covered most bases this evening. I know we had a couple of key questions about what we should be focusing on, what’s most important, and how do you build it?

Sam: I know that in the articles that we’ve been looking at together as a team, there’s an important requirement for an ethics first leader to lead that team, and I think sometimes you need both that and the right group as well.

18:45 Susanna:  I know that article mentioned that you should have a Chief Ethics Officer, but I tend to disagree with that – it’s not just one person being the ethical lead in the team, it doesn’t work like that. 

19:03 Susanna:  It has to be everybody, if we are going to build something which is ethical by design, everybody in the room has to be a voice for ethics and has to be the person to advocate for ethics from that room, it cannot be a single person – that is a model that has already failed. 

19:23 Susanna:  We saw that with Google and all of them, you know. It’s not enough to have a Chief AI Ethics Officer when that one person doesn’t have the ability to change things across the community on their own. Everybody in the company, at every stage and every process, should be talking about ethics. 

19:43 Susanna:  You know, in DataEthics4All, for example, all of us are advocates for ethics, so it’s not possible for the organization to do anything unethically because everyone in the room will speak up for it. 

19:58 Susanna:  Speaking up for ethics – that is how I think ethics by design should be built. What do you think?

20:06 Sam:  Yeah, I think it’s key, and we should be thinking about it from within as well and understanding that it’s actually part of your fabric as an organization, and it’s interwoven into design, it’s not separate. You shouldn’t just have that one individual, or that one small group; you need it across different teams, multiple teams – it’s part of a culture. 

20:34 Sam:  We’ve got our main objectives as an organization, and one of our missions is that we build it into the fabric of who we are. I think that’s really important.

We learn and we improve, but we do want to start from that base, from the bottom up in that respect. I think we’ve covered all questions for this evening, and I’ve really enjoyed the discussion. 

21:19 Susanna:  Definitely. Samantha is also a founder of a startup, so she knows what it means to build ethics by design into the product, into her programs, and into her organization. I am also a founder, so this is not something we just talk about, this is something we are trying to implement in our own products and in our own companies. So it’s a long journey, it’s an expensive journey, it takes time to do this the right way, and we have to break this mold of the ‘minimally viable product‘: keep the agile framework that is included in that philosophy, but take it and use it for an ethically viable product, an ethically viable lifecycle of a product. 

22:14 Susanna:  You have to really dive deep into that, build explainability into it, build inclusiveness into it, make sure everybody is included, and every use case is also included.

Susanna: So, those are all the things that we are all trying to practice, and you can contact any of us for more information on how to implement this in your companies. Sam is an expert in NLP, so she would be able to help with how to build an ethically viable product in that space. 

22:50 Susanna:  So, DataEthics4All is an organization that is founded by Shilpi Agarwal, who believes in the Ethics First mindset, and everybody here is an ethics first mindset leader. So we are all here to support you in your journey to build ethically viable products. So come and ask us any questions, come and join our community.

23:30 Sam:  Thank you, Susanna, our Chief Ethics First Officer and our ethicist, absolutely. So thank you.

23:29 Susanna:  Thank you, everyone, for joining us. It was a pleasure talking to all of you this evening. And we hope to come to you again next week for another lively ethics first discussion. Thank you.

[Speaker photos: Shilpi Agarwal, Susanna Raj and Sam Wigglesworth, Speakers at AI DIET World 2021]

Leadership Team, DataEthics4All

Join Us in this weekly discussion of Ethics 1stᵀᴹ Live and let’s build a better AI World Together! If you’d like to be a Guest on the Show, Sponsor the Show, or for Media Inquiries, please email us at connect@dataethics4all.org

Come, Let’s Build a Better AI World Together!