Episode 20: Ethically Aligned Design & Applied AI Ethics with John C. Havens



What is IEEE and what is their “ethically aligned design” initiative? How can positive visions for the future help us create better technology? What do kindness and wellbeing have to do with AI Ethics?

To answer these questions and more we welcome John C. Havens to the show. John is the current Executive Director of the Global Initiative on Ethics of Autonomous and Intelligent Systems at The Institute of Electrical and Electronics Engineers (IEEE). He is a contributing writer for Mashable, The Guardian, and The Huffington Post. John is the author of Heartificial Intelligence: Embracing Our Humanity to Maximize Machines, among others.

Follow John C. Havens on Twitter @johnchavens

If you enjoy this episode, please make sure to subscribe, submit a rating and review, and connect with us on Twitter at @radicalaipod.

Relevant Links from the Episode:

Heartificial Intelligence: Embracing Our Humanity to Maximize Machines by John C. Havens

IEEE Ethics In Action Website

Ethically Aligned Design E-Book

Ethically Aligned Design for Business

IEEE 7000™ Standards & Projects


Transcript


John C Havens_mixdown.mp3 was automatically transcribed by Sonix. This transcript may contain errors.

Welcome to Radical AI, a podcast about radical ideas, radical people and radical stories at the intersection of ethics and artificial intelligence. We are your hosts, Dylan and Jess. In this episode, we interview John C. Havens. John is the current executive director of the Global Initiative on Ethics of Autonomous and Intelligent Systems at the Institute of Electrical and Electronics Engineers, or IEEE for short. John is a contributing writer for Mashable, The Guardian and The Huffington Post, and the author of Heartificial Intelligence: Embracing Our Humanity to Maximize Machines, among others.

In our interview with John, some of the topics that we cover include: what is IEEE and what is their ethically aligned design initiative? How can we effectively crowdsource ethics? How can positive visions for the future help us create better technology? And finally, what does kindness have to do with A.I. ethics? For many reasons, we are incredibly excited to share this interview with John with all of you, but we are especially excited to have John featured on this show because John has been such an amazing supporter of this project ever since he initially reached out with nothing but kind words to both Dylan and me a little while back. We have been wanting to get him on this show, and he just continues to show us the ropes of A.I. ethics, connect us with amazing people, and really help us feel so welcomed and part of the A.I. ethics community and the conversation. So thank you so much, John, for helping us feel at home in this community.

We are on the line with John C. Havens. John, welcome to the show and thank you for coming on today.

My pleasure. Such an honor to be here. Thanks to you and Dylan, of course.

And we just want to start off by asking you really what motivates you to do your work? And maybe you can start by talking about where your story begins.

Sure. And what a great question. I feel like I should be playing country music in the background. I was born in 1960.

I guess a lot of my story begins with the gift of what I'll call forced introspection. My dad was a psychiatrist; he passed away in 2011. My mom is still a minister.

I went to college thinking I was going to be a minister in the Protestant, or Christian tradition, rather, and had a wonderful acting teacher who knew that I was a massive ham, and he was basically like, look, don't go to the pulpit thinking you want to be an actor. So then I went to New York City; I was a professional actor for about 15 years and fell into writing, because a lot of times I would do really bad industrial films where it was like, here's how H.R. works, and I was the actor you were forced to watch. So I just wrote the scripts, and I was good at being funny in them. Then I got paid to write, and I fell into writing, and I've written for Mashable and The Guardian and Quartz, and now I've written about ten books.

The last three are the ones published professionally under my own name; I've also ghostwritten some books, and one was with Penguin for a while. So all of this to say, a lot of my life is about examining who we are as humans in a deep and methodical way.

You are someone after my own heart, John, as someone who is in a religious studies program right now. And I'm wondering if you could just take, you know, this is kind of like a lob or a softball for you: what does A.I. ethics have to do with what it means to be human?

Right. Yeah, softball questions are nice. An easy one-word answer that no academics are going to enjoy.

I think for me, my journey with A.I. started off when I was writing a series of articles for Mashable and just kept asking people, hey, what's the code of ethics for A.I.? Just sort of ignorantly thinking it's on page seven of the interweb, like, here it is. And more and more people would just keep quoting Asimov's laws of robotics to me, which I love.

I'm a huge nerd and watch every single Black Mirror or whatever, insert name here, I've hopefully read it or seen it. And Philip K. Dick is like a god.

And more and more people kept quoting those things, and then I realized, I don't think people are equipped, or don't necessarily have a background like I've been blessed to have: a background of just naturally asking a lot of questions about my own values and then trying to understand others'. And certainly not just in the Judeo-Christian tradition, but Muslim, Indian, Confucian, you know, it's this deep sense of: who am I, how do I fit, how do I find worth and purpose?

And I'm really lucky, because I think when you get to ask that question for yourself and then inspire others to do it, and by inspire I mean try to invite them to do that in a way that makes sense for them, that gift, especially when it's coupled with curiosity, is where I think the easiest road to wisdom actually lies. It's not necessarily through, I mean, academic stuff is phenomenal and fantastic, and that's often what research is. But I think it's that truthful sense of: who am I, how do I fit in the world? And it's just been a blessing for me to merge my back story with my present work.

It seems like you talked a little bit about this subject in the book that you wrote, Heartificial Intelligence. Could you tell us a bit about what that book was and what it meant to you, why you were motivated to write it? And with the book we just mentioned sitting right there behind you.

Some coffee is owed, from the standpoint of just saying we'll split it, or you can pay for me, because, how do I do that? Well, you know. And I know no one can see it but your viewers.

I have the book here, and I wrote the book; it came out of those Mashable interviews, where I realized, and this is back in 2013 or 2014, which was, quote, early in the space, but then of course, you know, with people like Wendell Wallach or the people that you all have interviewed, it's like, obviously people have been doing this for 20, 30, 40 years.

So I am very deeply indebted to them, Stuart Russell, et cetera.

But I wrote the book really starting from a place of fear, and it wasn't fear of, oh, robots are going to kill us, because we've all watched all those movies and, like, that's easy. You wake up one day and you're like, oh, I guess there's a gun or whatever. And that's just the sort of silly, easy stuff. I say easy because the deeper fears for me are where people don't understand aspects of data, right, which is the core of what A.I. relies on, and how fundamentally our data represents our identity. And then other things, like where we're potentially delegating aspects of introspection to these amazing, beautiful technologies, when the answer is that we are also built as humans to do wonderful relational things; that's a gift, and these amazing, beautiful technologies can

complement that. But the book was really an exploration for me personally, to say, as a father; there's a lot of stuff about me as a dad. And the other thing I should say is I opened up each chapter of the book with fictional vignettes, because of the stories. Most people have told me, which is lovely when they tell me this, that the book resonates with them largely because of those stories.

There's not necessarily a launch into, let's go into what fairness and explainability mean, really tough stuff to wrap your brain around first, which is needed, but more of an entryway into that. And that was a big goal of the book: to write a book that my mom could actually read, at least the opening sections, and that we could have conversations about.

We talk a lot about storytelling on this show. And in your career, you've done a whole lot of storytelling, from the religious element to the journalism element and the writing element. And one thing that we've heard is this almost double-edged sword of storytelling, where you can have storytelling that really opens up people's hearts and changes minds, and then you can also have this very dogmatic storytelling of either, you know, this dystopian future or this utopian future. And I'm wondering if you can talk about how we do storytelling responsibly in this space.

I think for me, it's people like Stuart Russell, and another wonderful guy named Huw Price, who's a wonderful friend and just a brilliant guy. We actually were running a meeting in Austin, Texas, about three years ago for the IEEE ethically aligned design work, and he got up and said, we need positive stories of the future so we know what to build. And I had tried to do that in my book; the whole second half is utopian recommendations based on things we'll probably talk about later: well-being indicators and data sovereignty. But to your point, I think what's responsible is, first of all, to recognize, and I'm doing this more and more, and your show, by the way... I hold up another book, which is the amazing Race After Technology by Dr. Ruha Benjamin, you guys.

Oh, there it is. Just to represent. I mean, she's brilliant, as you know.

I mean, hello. I'm giving you credit for the introduction to her. With stories, more and more I'm realizing for myself what even the framing of a story means. Right? I speak in English, and I'm a guy who's fifty-one in Maplewood, New Jersey, but I talked to a friend in New Zealand who's from an indigenous tradition there.

And stories and oral tradition are policy. I'm paraphrasing her probably poorly, but she invited me. I would have loved to go and I'm still dreaming of going to New Zealand.

I have a very respectful crush on the prime minister; I hope I can say that. But they invite people, apparently, and I'll get some of these terms wrong, into a very sacred space where no recording equipment is allowed, not even writing. And when you have meetings in her indigenous tradition, the Māori tradition, which I'm probably mispronouncing, it's an oral tradition, and their idea of storytelling, of course, is, let's call it majestic.

It's not just a sense of, I want to frame the narrative; the narrative is the people. And so all that is to say, responsible storytelling also needs to start with a recognition of what that storytelling means.

So I really love how you framed that utopian future, with this vision of, you know, needing a positive vision of the future so that we can help create the tools and the future that we want to see. And one of the things you mentioned was IEEE's ethically aligned design initiative. So it would be great if you could first explain what IEEE is and then dive a bit into what that initiative is.

So IEEE is the world's largest technical professional organization advancing technology for the benefit of humanity. It's a nonprofit organization, it's engineers and technologists from around the world, and it's got over 422,000 active members.

And it was also founded about a hundred years ago now by Thomas Edison. So a lot of these things I didn't know when I first had the blessing of being invited by IEEE to speak at South by Southwest.

I had known in my work in PR who they were, but kind of on the periphery, because it wasn't straight-up academia or corporate land. Once in a while I'd get a really geeky paper and get to indulge in it, and it said IEEE on it.

So when I found out who they were and they invited me to speak, besides just getting to go to South by Southwest, which I've done many times and love, good barbecue and blues music, I was like, this is the world's largest engineering organization. So I eventually went and pitched them and said, I think you should create this code of ethics, because everyone I talk to keeps mentioning Asimov's laws of robotics, which are awesome and geeky and lovely, but written in the nineteen fifties.

And it's a conundrum on purpose. And there's a guy named Konstantinos Karachalios, a great interview at some point if you're into it; he and other people at IEEE were already thinking about this question very deeply. And one of the reasons I wanted to work with IEEE is that their tagline is actually advancing technology for humanity.

And what I'd said to them in my pitch meeting, and Konstantinos also says this all the time, is that the word "for" is deeply important, because with advancing technology for humanity, you have to define what that means. And that's where the applied ethics and all the values questions come into play. So really, it was Konstantinos, he's very humble, hopefully it's OK I'm saying this, but it was him who said, OK, let's create a document, which is what Ethically Aligned Design started off as; 2016 is when the first version launched. And what we did is we found about one hundred people; largely, and not intentionally, it just happened, they were mainly from North America, the UK and the EU. And we broke the document up into different, I think it was eight different, areas: personal data, law, general principles. And we said, let's get about 10 to 15 experts in each of those areas and just have at it. We had this big meeting in The Hague; I have such fond memories of this meeting. Francesca Towsley was there; she actually said, let's talk about issues, not concerns. And the point being, take the top 10 issues that these experts in these areas talk about, and we said, let's frame it like you're getting coffee at a conference, right, where the real conversations happen, and write out the issue, and then write out a recommendation, even if the recommendation is, let's keep talking. Great. And then the resources. So the first version was this 100-page document in 2016 that came out with those sections. And then there are two things I'm so happy that we did, which were not my decisions, and IEEE's work is very consensus-driven in this regard. A brilliant woman named AJung Moon, who's at McGill, a leading mind in global robotics.

She said, let's make this Creative Commons. And I come from, I ran a couple of big PodCamps, lived on wikis for years, and I'm like, yeah, Creative Commons. So we did that. And then secondly, someone else said, make this a request for input, because the logic is the last thing you want to say with any sort of paper in general is, here's the final word, and especially not, hey, we've done it, here we go, here's the final word on ethics. And that was wonderful, because so much of the feedback, especially on the first version, came from our members in China, Japan and South Korea, who said, this feels very Western. And my answer back, as it often is, is: awesome. What I hear you saying is you want to join a committee and tell us your thoughts. It's a real cheesy but overt way to say, you were kind enough to give us that feedback, tell us what you mean. And so we created a new committee, actually, called Classical Ethics in Autonomous and Intelligent Systems, which features non-Western traditions like Ubuntu, Confucianism, Shintoism, to sort of say, if this is going to be global in nature, we have to understand these different traditions. And I can say more, but that's kind of a general introduction to ethically aligned design.

To take a step back for a second, for folks who may have never heard this term before, ethically aligned design, or who are still, and maybe we all are to a certain degree, gathering information about what ethical design looks like:

do you possibly have an example of ethically aligned design versus maybe unethically aligned design, and what the lines are, what the differences are between those two?

So many funny responses, which I should probably reserve for another time, for unethically aligned design. But yes, I think one thing is that, understandably, especially depending on the context of who you're speaking to in the audience, the word ethics in corporate settings, and having come from the corporate setting, I get this, oftentimes means compliance. And so the legal team is involved, but a lot of times other teams aren't, especially, say, engineering teams. Or, having come from PR, a lot of times the word ethics is immediately synonymous with morals, and so people understandably get nervous: is this people pointing fingers at us, saying what we're doing wrong? Which is a legitimate concern. For us, the core of this work boils down to the need to use applied ethics methodologies, things like values-based design or value sensitive design, created by the amazing Batya Friedman; and then Sarah Spiekermann, who's a personal friend, really grew that work, along with so many thought leaders who have said, look, the real thing is to ask. And this is also like values alignment; this is stuff that AJung Moon is gifted in, human-computer interaction. When you build something, you don't know how the end user will respond, and you don't know who all the end users are. It's a design question, right? Let's move the words ethics and morals and values off to the side for a second. The point is, you're putting something into someone's sphere, and it may be invisible, so it's even more important with an algorithm to understand what that is. You have to have methodologies about what you're building that simply recognize that. A lot of times people will say, hey, is this just the new manifestation of whatever? And my point is, A.I. is such a broad term. We say artificial intelligence systems, because you can't just talk about A.I.; it's machine learning, it's AGI, it's all these things. It's like saying "the Internet". So you have to be specific. Secondly, here's what's not the same:

my data is being shared to the cloud, period. That didn't happen with Eli Whitney, didn't happen with whatever. It's just different.

So if people are building without recognizing that, with the tools I use with this amazing technology, my response goes to the cloud, and multiple actors respond and come back, and there's influence and all this stuff, then my answer is: look, this is about helping. It's all about complementing and helping. And I'm not an engineer by trade, but I have so much respect for the men, women, data scientists who are like, look, I'm building an elevator to keep you safe. That is majestic, important work. However, just as important is to say, look, you don't also need to get a degree in psychiatry, for instance; but if you don't have someone on your team when you're putting out that device that literally has a human voice and is speaking to kids, you will get unintended consequences. And by the way, I've been using this phrase recently: ignorance is not innocence. You being ignorant about the fact that that device is going to affect the mental health of a child does not mean innocence. And I'm not pointing at anyone in particular; it's we, the society building this stuff. You simply have to bring in people who have that expertise, and it makes your lives easier. Anyway, all that is to say, ethically aligned design means using applied ethics methodologies, a lot of what I used in PR, to ask about end-user values before you get to the design stage, because then the move-fast-and-break-things ideology is really reversed.

And the other thing I say in corporate settings all the time is: remember, this is not just about holding back, the ethics team telling us to calm down while corporate social responsibility talks about trees. It's like, no, this is about design, kind of an agile mindset, but instead of speed and exponential growth being the priorities, it's asking the questions about really increasing the well-being of the end users, and the innovation that will come from that. It's R&D, it's real R&D. So it's rethinking, not recreating, right? Because I want to be very honoring to the engineers who might listen to the show: this is all about, we are building on the shoulders of giants. I am honored to be here, period. And that's ethicists, sociologists, anthropologists. However, where anyone might say we're good in isolation, I'm whatever, a sociologist, I'm John, right, my answer is: the whole is greater than the sum of the parts. We have to come together and communicate as the creators of this stuff, ask about the end users, and then everyone has to tell each other. And maybe you didn't think about that because you're not a psychiatrist, but I want to help you make sure that this thing you're building creates joy and happiness and well-being versus harm.

One of the worries that a lot of folks seem to have about standardization of any sort of ethics initiative is something that you mentioned earlier, which is the fact that we don't really know the end users: they exist all over the globe, they could be anyone, and they could have any sort of value systems, especially if they're from different cultural communities whose values might come into conflict with other communities'. And something that I'm wondering is, with this ethically aligned design initiative from IEEE, how do you incorporate the possibility of value conflicts and value tensions and avoid that standardization problem?

It's a great question. First of all, I should say it's the IEEE Standards Association that houses the program I'm executive director for. And I should mention we have a chair; his name is Raja Chatila, a gloriously amazing human being based in Paris, a world-renowned roboticist. So my role is really to strategically help and drive, and herd smart cats, as it were, if that's how you say it, through the standards process for IEEE. And I can speak here with expertise about that IEEE process. Really, one of the reasons I love it so much, and I'm a pure fanboy, so I'm not objective, that's why I'm here, is that the logic is they're open. It's a series of projects whose names start with P, which just means project; there are about 13 projects focused on or inspired by ethically aligned design, where the P just means they're still in development. One has launched: what's called 7010, on well-being, which we can talk about later. But with all those standards right now, you don't have to be an actual IEEE member to join, and they're open, and by open, it means you can be anybody in the world, and you email and say, can I join the working group, and you're in. So first of all, one thing to say is, understandably, sometimes that process can be confusing, but the reason I sort of joke about it is that it's actually also part of the role of IEEE; the new tagline is raising the world's standards. And part of that for me comes from simply saying: that's a great question, a fantastic question, come to the group and make sure to bring that up, because I will never be able to voice your concerns the way that you can. Right? Because these working groups are open, if you come and join and contribute, then it's only going to be better work. And it's hard. It is really hard.

I don't want to make it sound like it's hands across the room and we're all just singing. Standards work is really hard, because a lot of times there is this communications gap where, even if we're all, and I mean not just me, everyone in that group, speaking the same language, English or Spanish or whatever it is, there may be a sociologist and an engineer. And for the actual words, we have a glossary.

I'll send it to you. Sara Jordan created it, or started it. She's amazing; she's at the Future of Privacy Forum. She had this idea, which is brilliant, which is putting down a word and then giving what the key stakeholders mean by it. Take the word values: what do policymakers mean by the word values, what do engineers mean by the word values, what do data scientists mean by the word values, and what does the corporate world mean by the word values? If there are four people from the same city in the States, like Detroit, those four people will have four different definitions. So I bring all that up to say, first of all, we recognize that people being from different countries, yes, of course, is a huge concern, and the cultural aspect of things.

So that's one angle. The other angle is that Ethically Aligned Design and the general principles came from a consensus-based vote, including the order in which they appear as the first principles. So it's not like John made human rights the first one. We had this big meeting I mentioned in Austin, and it was a wonderfully heated conversation, very real, about human rights. And a lot of the human rights lawyers got up and said, ethics is great, we understand applied ethics, the methodologies, et cetera, et cetera, but we also cannot ignore where established human rights, international laws, exist, even if the interpretation of human rights, and who wrote them, might be conceived as maybe Western sometimes, whatever. But the point is that they're there. And so that's why that was adopted. So I bring that up to say, where there's a lot of

needed recognition of end-user values and what's different, to your point, Jess, there are also, for us, sort of lines in the sand, and one of the lines in the sand is: human rights comes first. Does that mean it's easy? Is it interpreted the same everywhere? Is there any country that doesn't have human rights violations? The answer is no. But does that mean we ignore it and don't try, when this is established? The answer is no; of course we look to it. The second thing is things like data. We do a lot of work, and this is not just in the initiative, on children's data and children's data rights, where there may be understandable issues about how different countries deal with data. And this is, by the way, not John; this is from the working groups and from different work that we're a part of. Multiple organizations across the world are focused on children's data rights, because it's a great way of saying, where can we kind of move beyond some of the either corporate or policy-driven or internal-to-the-family, you know, philosophical discussions, which are incredibly important, and say: OK, but is it OK for a nine-year-old child anywhere on the planet to have their data accessed in ways that could potentially leave them more harmed, in terms of human trafficking?

I don't want to be in the room where someone's like, hold on a second. I'm concerned about age nine. I think it should be age four.

There I'm speaking as John, not for IEEE, but I'm also speaking for the work that's being done by a lot of the groups. And that is to say: of course not, not at nine years old. And then from there, it's also to say, when we start to understand we're honoring children's data, it's a wonderful foray into recognizing that that's actually what we mean about human data. So a lot of the data sovereignty stuff we're working on, what is global in nature in one sense, is never telling people "here's what you should do" in the sense of mandating how they interpret privacy, but, for instance, keeping the idea of data sovereignty, where someone would have access to create a personal data store and share their data through blockchain or smart contract means; it's up to them how they would do it, and if they want to. And then, finally, I'll wrap up by just reminding people that the standards process is not John or IEEE saying anything; in one sense, it's the invitation to have the volunteers who create the standards come in and be a part of that work. But with Ethically Aligned Design, what I'm so thrilled about is you can see it's already there on paper. The third version of it, which came out in March of 2019, is the aggregate work of seven hundred people over three years, with about another eight hundred people editing it, saying: this is what we think is best in class right now. And those were multiple people from around the world.

One thing I appreciate about your answer so much is the naming that these conversations are not happening without context, right?

There's nothing acontextual about this. There's the immediate context of the past however many years, but then there's also hundreds of years of context as well that have created these global systems of technology, and also global systems of oppression. And one of the things that, you know, Jess and I talk a lot about with some of our guests, especially folks from, you know, Black in AI, or with Safiya Noble or Timnit Gebru, is how to bring these very real histories into this conversation.

I'm wondering how you, either as just John or as IEEE, depending on who you want to speak for, think about creating those spaces, especially to make room for folks and for communities that have been historically marginalized in these spaces.

I think I'm speaking as both John and IEEE. So I'll start, and if I say something where I know it's more John, I'll state that.

But really, again, I'm not just blowing smoke because I'm on your show, but hearing Dr. Ruha Benjamin, and I'm going to misquote some of the phrases, so obviously please listen to that past episode of Radical AI for the correct language: she said something, talking about her book and giving an example, as I paraphrase it. Say there are one hundred thousand people that live in a certain area somewhere in the States.

Let's say it's New Jersey, and there's research done, or data, about the medical aspects of the people in that region. And the research may say 40 percent of the people in this region, you know, X, Y and Z about their medical history, say for diabetes, whatever it is. She pointed out, which kind of blew my mind, and I was a little ashamed that I didn't know it,

but there it is: she said, keep in mind that one hundred thousand residents or citizens, meaning counted in the census, doesn't mean that they all had access to the medical insurance or medical doctors, or that they were part of the research that produced that 40 percent number. And why that resonated so deeply with me: a lot of the work that I can speak about at IEEE, there's been work, certainly, on the Internet, recognizing that only 50 percent of the world, still, in 2020, has access to the Internet. So it's very easy, I think, to hear statements and hear data about the Internet, for instance, and go, oh, here's what we do to fix X. And the word "fix" is also very complicated. But the point being, we've got to address the fact: with whatever statement a person just made, which could be very learned and informed, are you talking about the 50 percent that have access or the 50 percent that don't? It's a completely different conversation. And I think that's where, with the work that IEEE is doing, and I can speak mainly here about the initiative, first, a lot of the work for me is always asking who is not part of the conversation yet. And not just who is not part of the conversation yet, but are they being listened to and heard, and are they being given tools, like the standards process, that have been designed to help them speak?

Now, that said, it's also challenging. A standards group, or any group, can have, you know, 20 people in it, but it may be run in English versus a different language, and for people where English is not the first language, that's a challenge. It can be people who are shy, and so they don't speak as much. So there are always these multiple levels. But in terms of your question, maybe we're coming from a different angle, and it's a great question, but for us a lot of it is about the inclusion of all, and things also like ableism, which a friend of mine, Jean-Baptiste from Google, talks about beautifully, where, along with race and other critical issues of marginalization, oftentimes you build something, you're like, look at this awesome electronic sneaker I built, and you're building it for people where the assumption is they have the same size feet, or feet at all, or whatever. And why not? Why wouldn't you necessarily do that at first? But then hopefully pretty quickly someone on your team says, there are a lot of people who would want our shoes who are in this category. And it goes back to design, right? And I'm not trying to minimize anything of the import of what I know you're also talking about. But anyway, I'll stop there for that response.

I'm wondering for folks who are listening, who want to take what they're hearing from us and do something with it.

You have this document. You have this initiative. What next?

So, ethically aligned design. I'm speaking as John, right? One thing is, it oftentimes gets bucketed as a document in with the dozens, or now I think even hundreds, of really good A.I. principles around the world. So, first of all, honored to ever be on any list, period. And I'm not speaking as John the author of Ethically Aligned Design; I'm a guy who was fundamental in helping bring it together, but other people wrote it, right? So a lot of it, you'll hear me speaking for them, or trying to. But the first draft came out in 2016, from a timing standpoint. And second of all, it was never designed to be only a list of principles. General principles is one key chapter, but the final version is 10 chapters. For the other nine chapters, we went to all the chairs and asked, are there any other principles from your work that we should put in the general principles? And it's a 300-page document. So this is not slighting any of the other documents. But I bring all this up to say, it is still, and people say this to me all the time, which is lovely for me and I think for the volunteers, Ethically Aligned Design the document is a living, breathing document, in the sense that much of it is evergreen. I mean, certain things are not old but are referencing 2018 or whatever. But it is a fundamental tool. I think sometimes what I hear from people, not your question, yes, but what I get a lot, is: what's the pragmatic, implementable thing? And you guys know, it's like, what do you mean by implementable? Do you want me to come... Like, I'm from a corporate setting. I get it. I didn't walk into the CMO's office and go, Immanuel Kant! Here we go.

The nature of, you know, fundamental agreement.

It's like, no, I have a one-pager, and I'm there for 20 minutes and I walk out. But part of the answer here is, I don't know the question that someone may be asking at a deeper level. And I get that I can't say to them, here's 300 pages, figure it out. But I can say: we facilitated these amazing brains and we laid it out in 10 chapters, and maybe the titles will help you. Read the title; there's Personal Data and Agency, so read that chapter if that's your thing. And each of those issues, I guarantee, still stands as a document right now. Like, I go to who knows how many policy meetings, and people don't know, not to be negative, what data sovereignty even is. Not to say that they should, but they know the word privacy and GDPR. And I'm like, cool, do you know about the work being done in Estonia and what data sovereignty offers? Let's discuss. And they say, no, I don't. So the first pragmatic tool is to read that chapter, you know, amongst other things. I say that because, with Ethically Aligned Design, we'll never stop saying please read the document, because the framework question is critical, of course.

But I don't want to walk up to someone and just say, here's your framework. But here it is, on the table; sound effect on purpose.

Here's the framework. Because you may need to know about classical ethics, for instance; you may need to know that before you go in and bring in this beautiful piece of technology that may not be relevant to two-thirds of the planet. So read that chapter. All that said, we've also created new committees. One is called Ethically Aligned Design for Business, and this was specifically to say: knowing what John just said, in his little passionate soapbox, thanks, it's still a 300-page document, so what do we do? That committee is people from Intel, IBM; the two chairs, Adam and Melana, fantastic, brilliant.

And Jean-Baptiste from Google, who I just mentioned. Some real heavy hitters. And we focused especially on the design chapters; in EAD there are two of them. One is on values,

one is on design. And the people on that committee were people like Annie, who is a product design lead at Google, and she's an evangelist saying, here's what applied ethics means. I can't speak for her, but oftentimes I don't think she'll use the word ethics, because you don't want to freak people out and have them scatter like chickens from a fox. You know, you say, this is design, this is product design, and then you bring in these methodologies. So that for-business paper had core questions about how you hire people in A.I. ethics, and you guys know that term is, if not spurious, kind of vague, right? And Kathy Baxter from Salesforce, who's a friend and just a goddess, she's phenomenal, brought in so much great information about how to hire. It's very hard to find, as you guys would imagine and know, someone who can come with real philosophy chops, applied methodologies, methodological chops, like Olivia, who was on your last show. I just have so much respect for David Ryan Polgar, by the way, had to give him a shout-out. These are people who are trained in the actual methodologies of applied ethics and philosophy, or people like Shannon, whose name I cannot utter without just bowing in reverence, right? That's training, the same way engineers have training. So that also needs to be recognized. Anyway.

All that to say, back to EAD for Business: along with the hiring section, there's also a section on how you prepare the enterprise or an organization for this work. And the committee created this amazing, what did they call it, ethics readiness checklist. I would highly recommend it; the document is Creative Commons, it's free, and I can send you the link so you can post it in the show notes. It's a matrix where on one side are the key stakeholders, the C-suite, the value chain, whoever, and on the other side are four levels of readiness. Which is what we have to recognize: there may be some people who are like, I have no idea what that means, and I'm irritated that I don't really know what that means. And then you go from left to right and you say, here's how to go from there. So among the pragmatic things that people can do: seriously, just read Ethically Aligned Design for Business. We have a lot of other papers, and then there's the standards work; it's always an open invitation, and you'll probably post my email address, so people can join any of those working groups. And that's a need, right? It's a need because we need more brains, more voices. And then we have a certification program.

The acronym is AIS certification, which is probably a whole other subject if we have time to go into it; it is complex, but it really is just kind of the nutrition-label aspect of something: someone picks up a phone or a listening device in their house and they see a little IEEE mark, or a different organization's, and it means that they can trust, when they go to the website and see what that means, here's how the people making this stuff are communicating that this is safe. And then we do a lot of education work as well. So hopefully that was a lot of kind of pragmatic things. And if people are still sort of overwhelmed or confused, like, that's a lot of stuff, fair enough: email me directly. That's a lot of my role, and I love it, to ask people, you know, what are you interested in? Here are, like, five committees; do you want me to introduce you to the chair? A lot of my skill set is just facilitating good introductions and then trying to find where people will be most excited to do that work.

John, I have a question for you that I've just been wondering about for a long time, because there are several pieces of research coming out about checklists in general, and we don't have to get too far down the rabbit hole here. But while we have you on the show, I'm wondering, from your perspective, coming from the industry perspective, the IEEE perspective, what is the role of checklists?

How can checklists be helpful to folks?

Just because we've heard that critique a lot, of, oh, we're just creating, you know, these hundreds of checklists, and then how do we navigate that? And I'm wondering if you have thoughts about that, because I know there are some listeners who are actively discerning that.

I mean, first of all, I'm just speaking as John here. For checklists in general: I have my to-do list sitting next to me on my desk. So sometimes it's just really satisfying to say, do X, I did it, and you cross it out, right? You can sort of say, I took care of that. I think of the positive side of different work that's come out, and here I'll bring up my buddy Rumman Chowdhury from Accenture. A checklist is not enough to capture her expertise and brilliance, but she did one of the earlier kind of algorithmic impact checklist type things. And the thing about her is that she's so brilliant, if you've ever heard her speak, and she'd be great for the show, by the way; I have like seven thousand friends who would be awesome for your show, but it's your show, just saying. But when you hear her speak, she is such a gifted communicator when she starts listing things. Right? The thing about a good checklist is it doesn't just burst out of nowhere, right? I mean, it can, if it's, you know, I want to shop for something, but that's groceries. When it comes to algorithmic impact,

right, then you have to go to people who really can communicate what that even means. And then we're talking about someone of the prowess of, again, the amazing Dr. Benjamin. Right? It's like, what's the checklist we're creating? Is it up here, at the tip of the iceberg, talking about bias, where it's well-intentioned people, but they're like, hey, this word, whatever, was used in a nineteen-fifties data set; we've got medical records where all the women are called "she" and "nurse" and all the guys are called "doctor". That's an important insight. But that checklist would only be kind of level one. Then we hear someone like who you guys are interviewing, like Renee or Dr. Benjamin, and these people are like, there's a whole level under that, where if you really want to get to this, the checklist is a very different checklist. But that said, I think it's a communications tool when it's used well.

And the other thing about certification, and here's where I can speak about our IEEE work formally: what's called conformance assessment is a wonderful opportunity, where there's really complex stuff on one side, and all the experts, like both of you on the phone, and the data scientists, are saying, here's what we've done to make this. And in our case, it's product-driven, right? It's not certifying a company. And also, you can't say that a product is ethical; it doesn't make sense. You can't say, look, Bob the robot is ethical. No, no, no. You have to say the systems have been created with due diligence, in our case around accountability, transparency and bias, where these people did this stuff to try to make this device, product or service as transparent as possible, and here's what we mean by transparency. Conformance, however, means that the front-end people who are doing all the expertise stuff then send it to an outside, different group. And IEEE is so big, there's actually a different division that does conformance assessment. It's kind of akin to journalism, with its objectivity tools; almost an internal review board, ethics-wise, in that conformance group. It's not really a checklist per se, but in one sense it's structured to sort of say: did you do this, for compliance? And in the way that we're interpreting it, well, we're not the experts in there, but does it make sense to us that you've done it? And if it doesn't... Again, it's all supposed to be a gift. It's all a gift. It may seem like a pain, right? Like, no one loves Sarbanes-Oxley.

They weren't like, yeah, let's grab a beer and do Sarbanes-Oxley. Right? But the point is that in this case, it's like, what does innovation really mean? It's building trust. And part of that means all the people who can take us through all this stuff, and where checklists mean we're going to ask a lot of questions that may not be relevant: awesome. Because, and take this from a former SVP of PR, when you don't want to find those questions out is after it comes out and the crisis is exploding your brand, and you yourself are saying, how did I miss that? And you're not evil, but no one cares anymore, because it's out and, unfortunately, you've harmed people.

Yeah, we really appreciate you explaining a bit about the compliance versus values piece here, because that's a critique that a lot of checklists get: that they're taking something that should be value-driven, like ethics, and turning it into something that is more about compliance, making people feel like they just need to check a box. But it's nice to know that the two of those don't really need to be in conflict with each other, and they can actually help each other out.

But, John, as I'm sure you already know, you are on the Radical AI podcast.

And part of our mission in doing these interviews is to try to define what this term radical AI means in the field of AI and technology. And so, to start off, could you tell us how you would define the word radical, and also whether you situate your work in that definition or not?

First of all, one thing, I keep talking about her, but I have such admiration for her work now, and you introduced me to her: at the end of the show with Dr. Benjamin, and forgive me, I'm going to paraphrase this wrong, one thing that was so radical about what she did, and then your show, for this type of wisdom, is that she talked about love.

It was very powerful. I remember I was on the track running, and I paused when she talked about that, because I was like, here's a world-renowned scientist at Princeton, as far as I know, who taught me so much. And I so quickly become enamored with people where I learn a lot fast; part of my praise for you guys is not just because I'm being nice. I very quickly get kind of a childish fan thing, like for B.B. King or Eric Clapton, for people who expose me to wisdom and learning that I know I need. I'm like a plant soaking up water. And so the reason I bring up the idea of love is that, for me right now in my own life, not just with COVID, but some other things that are happening in my personal sphere, the word kindness is the one word I'm going to use for radical. And it kind of stems from what Dr. Benjamin was talking about with love, and I'll let listeners hear how she said her stuff so beautifully. But kindness, for me: I think I was just tweeting about this last night.

People tend to forget self-kindness and self-worth. And if we have time to talk about well-being and the work we're doing at IEEE, I'll talk about that.

But I think there's this thing with introspection where sometimes, myself included, from a philosophical standpoint, we get into a lot of really important questions about what's the meaning of life and all that. But at a very deep personal level, I think all of us, especially with COVID, are recognizing the ultimate human need for connection, kindness and caregiving. And really, it's like, what are we doing all this for, right? And self-kindness. There's a fantastic book I'm reading called, of course it's a fantastic book and I'm blanking on the title, but here it is: When Things Fall Apart. It's this book, When Things Fall Apart. Yeah. And just so you know, this is by Pema Chödrön. I heard about it on a different amazing podcast, On Being with Krista Tippett, and I almost shrieked like a teenager because she followed me on Twitter, and I DM'd her and, like, rewrote it like 15 times, because that book has helped me so much. And a lot of it is a Buddhist mentality about when something horrible approaches you: grief, sadness, anger. And that's where I think a lot of us are coming from, not just with COVID, but in a lot of the questions we face in the field of A.I. and politics and design, and certainly a lot of the heavy, beautiful stuff that you guys cover. It's very easy to go to grief and anger and just instantly kind of put it back out; for me, a lot of times my passion comes off as anger, which I regret. But that book gives you a sense that when stuff comes at you, the way that she puts it in When Things Fall Apart is, it's a messenger. And what I love is that she's very real; she says it's not a messenger you're necessarily looking forward to, which I loved. But instead of killing the messenger, as it were, the logic is to recognize: ah, this may really, really suck, because this messenger is something I don't want. I see grief, it's here. I see anger. I see pain. But the opportunity now is to say, kindness.

A deep breath and to say. Oh.

And kindness isn't forgiveness; it's not avoidance of all the stuff you guys are talking about. Like Dr. Ruha Benjamin, I'm obviously very passionate, very purpose-driven about this work. And I can speak here for IEEE's Ethically Aligned Design, the work that the volunteers have done; I think I can say this: it's the most important work I've ever done in my life. It's an honor to be a part of that work. But the kindness part is also something where I don't know how one could ever delegate kindness to a machine or an algorithm. Can they help? The machine can help, of course. But also, if I really want to be kind to others, outside of my personal views and my faith and life and stuff in general, that actually means, more often than not, really trying, which is hard for me, as you can probably tell, to shut up and listen; and, more important than just listening, to hear. It's very hard to hear. I've had a lot of situations in the past two or three or four months where I say stuff, and it may be smart or something, but I'm, not intentionally, hurting someone; I'm not hearing them well. And I think the kindness, too, is a recognition of where you're ignorant about things, whether it be race or design or engineering or philosophy or anything. I so adore, I find it so attractive now in my life: curiosity, humility and, above all, kindness. Because kindness means, for myself and for others, there's this open invitation to be together and learn, not to just hibernate and not evolve. But I think what's radical is that that type of kindness is really, really, really hard.

So you might believe, but you might also not believe, the amount of people that have come up to us saying: have you interviewed John yet? Have you talked to John? Have you talked to John? And what that shows us is that you are so connected, right? And not just in a shallow way. It's very obvious when people mention your name how deeply they care about you; even if they disagree with you on something, they care that you have done something to connect in that space. And we are living in such a time right now where it can be so difficult to find that connection and kindness, and with everything going on in the world right now, it can be so hard to connect back and sink back into that kindness and connection. And I'm wondering, from your perspective, if you have any advice for folks who right now are struggling with finding that connection, maybe even their connection to a greater purpose in their work, or with finding kindness in this field of AI ethics that we're in right now, what that advice might be.

Yeah, well, first of all, thanks a lot; that means a great deal. I wouldn't have minded knowing beforehand; I would have dressed up. But also, that really moved me. If people have said that to you, that's how I want my life to be lived. So thank you for sharing that. First of all, you know, I'll say this, and I'll do my best, because we're all busy. It's funny, people email me and they're like, I know how busy you are, and I'm like, aren't you? Everyone's busy, you know? Like, I appreciate that, I'm busy, sure, but I never want to be too busy if someone wants to reach out to me and say, I could use some kindness.

So in that sense, this is just on a personal level: I'm on Twitter at @johnchavens, and some of my best business and life connections have come from people who just reached out on Twitter, whatever we did, and we started talking. Now, that said, I want to be very clear, and this is also from my dad, who was a psychiatrist: where people are actually struggling with mental health issues, of course, first and foremost, go to a professional, go to a mental health professional, go to a therapist, go to your doctor. And then, whether it's the World Health Organization or the American Psychiatric Association, I can send some links to very trusted organizations. This is never to say that well-intentioned people, like with apps measuring depression, are not coming from a good place. But I also have the experience of a friend at Harvard who did research about how many well-intentioned app creators didn't have sociologists or medical people on their teams, which is just a recipe for disaster.

And the fact that you didn't mean to do that, this is not to be mean, but get those people on your teams. I think another step is with the meetings that we have,

and by we I mean, collectively, anyone in the A.I. space. More and more, I have this list, I can send it to you guys, like the top 10 things I'd love to see for every A.I. policy meeting. And this is very much John, right, versus IEEE, because the thing about IEEE, and I already said this, is that the consensus-based work that they do so fits who I am, because I'm not saying, hey, I hope you like what John is saying here. I mean, people do, and that's lovely. But the point is, my main message is, it's OK if you don't; come into this group and then you tell us what you think. But for things like policy meetings, one of the dreams for me is that every meeting, policy or academic, opens up with kids. I mean kids under, like, 13, 14. But, like, I still picture it, it haunts me because I'm a dad. It literally haunts me.

Greta Thunberg: her main words to the UN were, if you don't fix this, we will not forgive you. She's right. She should be dealing with whatever a frickin' 16-year-old girl deals with, in the Netherlands or wherever she's from, or doing her amazing stuff. Her brain is so amazing; think of how many papers she could be writing about saving the planet if she wasn't doing this. By the way, I'm happy about what she's doing. But if every meeting started off with kids like her saying, here's what you need to do... Like, it's not fun. This is where the humility and kindness come into play. But it's sort of like, we actually are literally, literally doing all this for them. Right? They are the next generation. Secondly is First Nations and indigenous groups, which is a very vast thing to say, right, because there are hundreds and hundreds of different First Nations traditions in Canada, the States, Australia, New Zealand. But in general, a lot of the aggregate things I've learned from them are also about thinking about the next 10 or 12 or 15 generations to come, and that includes the planet. So short-termism doesn't really even grok in that paradigm. And it also means another thing, and hopefully I'm still answering your direct question, which is the planet.

I've learned, as much as I can in my sort of feeble, at least initially Western ways, that the planet cannot be considered a resource. It's like me talking about Jess and Dylan and saying, well, there are two people I can use if I need to get on podcasts and reach my audience and the demographics. It's like, no, they're human beings, these glorious humans with this beautiful show. They aren't "resources"; they are wisdom givers, they are storytellers. They are sharing this, right? Same thing with the planet. A friend of mine, Sarah, again from the Māori tradition in New Zealand, when she talks about the planet, it's like me talking about my sister and my brother. And she means that. And there's more on the list, but it's things like: never use the phrase "don't hinder innovation" ever again unless you thoroughly explain what that means. If anyone says the phrase "don't hinder innovation" without giving clarity on what they mean, I now try to respectfully raise my hand and say, please be specific about what you mean, or don't say that, because it just kills everything in the room.

Of course, everything that you've mentioned, all the resources and ways to get in contact with you, we will include in the show notes. But for now, we've reached the end of this interview. So, John, thank you so much for coming on the show today.

My pleasure. And I know we didn't talk about it, so maybe I'll send some stuff to you.

The whole idea of wellbeing, which is a standard that was launched there at IEEE. The logic is: how do we define what we want to build for, beyond sort of exponential growth measures? I'll just say that I'll post that work in the show notes, and we invite people to be a part of it. The basic message is giving solutions to policymakers and corporate folks, so we're not just saying, don't do single-bottom-line stuff anymore. The logic is: here are these amazing ways to build this beautiful technology to really enhance the things that we all want to work on together. So, all that to say, such a pleasure to meet both of you and to be a part of the show, and I really, really appreciate the opportunity.

We want to thank John again for joining us today for this wonderful conversation and for all the work that he is doing at IEEE on the Global Initiative on Ethics of Autonomous and Intelligent Systems. One of the things that I'm taking away from this conversation is, I guess, more of a challenge that I heard from John: how do we take these concepts of kindness and wellbeing, which don't always have a natural home in the engineering space, at least traditionally, and bring them to the forefront of how we design our technology and how we make ethical spaces, spaces that represent everyone, spaces that are kind to ourselves and also to others? Just all these questions of kindness, right? What happens when you bring this question of kindness into it? What if we did that? What if that was central in our design philosophy?

How might that impact how we design these technologies?

Yeah, I think it's interesting, because John, maybe unintentionally, brought together two core aspects of what I see his research being. On the one side, he's talking about kindness and wellbeing. And quite a bit of what he was also saying was the importance of not being prescriptive in the way that we create ethical initiatives. I think that also has quite a bit to do with kindness, because we're being kind and respecting that people aren't always the same, whether that's geographically, culturally, or just individually. And so to prescribe an ethical initiative of any kind, whether from a philosophy perspective or a technology perspective, really wouldn't do justice, because initiatives that are standardized in that way can be potentially harmful for certain communities, or only promote Western ideologies, or things along those lines. So I think it's really great that John was so passionate and adamant about making sure that the ethics that are a part of these initiatives are crowdsourced: they're brought together by working groups, they're created by hundreds of people, and they're checked by hundreds more. And so there's never really this mindset, at what seems like IEEE, or within John's own mind, of: this is the right way or the wrong way to do things, and this is what we're going to tell everyone. No, it's really this open, collaborative, and democratic discussion that they're building around what is best for the most people, and how we can prioritize making the most fair and most right decisions, given that there are going to be unavoidable ethical tradeoffs in any decision that we make.

For more information on today's show, please visit the episode page at radicalai.org.

If you enjoyed this episode, we invite you to subscribe, rate, and review the show on iTunes or your favorite podcatcher. Join our conversation on Twitter at @radicalaipod and catch our next weekly episode on Wednesdays. As always, stay radical.
