Bonus Episode: AI & Racial Bias with Renée Cummings



Sponsored by Ethical Intelligence, this bonus episode features a presentation delivered by Renée Cummings at a workshop given on 06/24/20. We also welcome Ethical Intelligence CEO Olivia Gambelin to the show as a guest host. Renée Cummings is a criminologist and international criminal justice consultant who specializes in Artificial Intelligence (AI): ethical AI, bias in AI, diversity and inclusion in AI, algorithmic authenticity and accountability, data integrity and equity, AI for social good, and social justice in AI policy and governance. She is the CEO of Urban AI. Slides referenced can be found by contacting Ethical Intelligence at ethicalintelligence.co.

You can follow Ethical Intelligence on Twitter @ethicalaico and you can follow Renée Cummings on Twitter @CummingsRenee.

If you enjoy this episode, please make sure to subscribe, submit a rating and review, and connect with us on Twitter @radicalaipod.

Renee_mixdown.mp3 transcript powered by Sonix. This transcript was generated automatically and may contain errors.

Welcome to Radical A.I., a podcast about radical ideas, radical people and radical stories at the intersection of ethics and artificial intelligence.

We are your hosts, Dylan and Jess.

And we would like to welcome you to a very special episode of the Radical A.I. podcast. We are here today with a third co-host for this episode: Olivia Gambelin, who is an ethicist and the CEO of Ethical Intelligence. And the reason why we brought Olivia on, besides her being a good friend and colleague, is that this past week her organization, Ethical Intelligence, put on an amazing event entitled A.I. and Racial Bias Workshop with Renée Cummings.

So, Olivia, welcome to the show. Thank you, Dylan and Jess. I'm so excited to be here.

And before we talk about the event and your work with Ethical Intelligence, we first need to introduce Renée and her work, as this episode, in place of the normal interview, will actually air the talk that she gave during Ethical Intelligence's workshop. So: Renée Cummings is the CEO of Urban A.I. Renée is a criminologist, criminal psychologist, A.I. ethicist, and A.I. strategist. She specializes in A.I. for social good, justice-oriented A.I. design, social justice in A.I. policy and governance, and using A.I. to save lives. Committed to using A.I. to empower and transform communities, Renée is helping governments and organizations navigate the A.I. landscape and develop future A.I. leaders.

So, as I just mentioned, the format of this episode is going to be a little bit different. We're going to talk to Olivia a bit first in this intro about why Ethical Intelligence wanted to put on this workshop and why it was important for them to sponsor this work. Then we're going to listen, as Jess said, to the presentation Renée gave at the workshop. And then we're going to debrief along with Olivia following that recording. So, Olivia, I'm going to start off with the question that I just said you were going to talk about, which is: why, for you as the CEO of Ethical Intelligence, was it so important to have Renée come do this workshop and for you all to put this on?

Yeah. So I'm gonna take a high-level step back real quick. Overall, at Ethical Intelligence, we have an educational series that we've just started called Building Ethical Intelligence, and it's literally targeted at building your ethical reasoning abilities. It's comprised of a few different pieces, and one of those pieces is a workshop. The whole point of the workshop is to bring people together to discuss either a prevalent piece of technology or a prevalent issue. And in light of the Black Lives Matter movement, we wanted to show solidarity in some way that stayed true to what we do at Ethical Intelligence, but also was within our capacity to actually have some type of impact. So my team and I sat down and we went: well, we've got a workshop scheduled coming up in the first place, and we think it would be amazing if we could actually focus in on racial bias specifically, in light of everything that's going on. It's very topical, people need to have that conversation, and we have the ability to create the space for that conversation. So one thing led to another, and we went: well, Renée is perfect for this. Her background covers both the criminology aspect and the ethics aspect, and she's become a very strong voice in the community. So we were honored when we reached out and she said: yep, I'm in. She's straight to the point. She goes: yep, OK, done, what do you need from me? And that's really how this event came about.

I realize that Jess and I know a lot about Ethical Intelligence as an organization, but for folks who may not have heard of you all, could you say a little bit about who you are, what you do, where you're based, et cetera?

Yeah. So we are Ethical Intelligence; you can call us EI for short, since we know that Ethical Intelligence is a mouthful. We are internationally based, so we cover everywhere from the U.S. to the U.K. to Europe, and a bit into Asia as we grow. But we are an ethics firm.

Right now there are a few of us growing into this space of A.I. ethics; it's a brand-new sector that we're paving the way into. What we try to do specifically is bridge a gap we've come to recognize between research and industry. In academia there's a wealth of knowledge in ethics, in the issues and the solutions, but there's a gap between what's sitting in academia and what's actually being applied in industry. On the other hand, you've got industry going: OK, we've got all these questions and no idea how to answer them. So we're trying to position ourselves right in between, to ease that transition and make sure that all of that impactful research and information sitting in academia is getting practically applied in our day-to-day technology.

And we saw that today with the amazing event that we're all just fresh off of right now with Renée. So without further ado, we're going to play back Renée's talk from the event, and following that we will have a debrief and group discussion with Olivia.

So we're talking about racial bias, and we're talking about racial bias in artificial intelligence. And I said, you know, what is happening right now when it comes to the protest action that we're seeing on the streets of America, action that has actually spilled onto streets in cities all over the world? We are seeing the most diverse crowd of protesters ever, and a global level of solidarity that has never really been seen before. And I think what this protest action is asking every industry is: what are you going to do? And this is why I said this is the critical moment for artificial intelligence. So as we move into the first slide: what we are speaking about, and what we are seeing with the pandemic called COVID-19, is that there is also this pandemic called racism. And with that pandemic of racism, there is also a pandemic of pain. And I think that pain is what we have all been experiencing.

Some of us have felt it from where we are, and some of us have felt it on the streets of the US. It is really connected to the anger, the intergenerational anger, and that free-floating anger that many individuals in our society have felt for a very long time.

And when you combine that anger with grief: in many of the cases that we have seen, where there have been situations of police brutality and police violence, there has also been no closure.

And all of this is happening in the context of health care disparities, because many of the same communities, communities of color, have higher exposure and higher levels of susceptibility, and of course there are pre-existing conditions that create the context.

With COVID-19, we have seen issues now with mental health, financial health, isolation, cultural dislocation, and intergenerational trauma, with the criminalization of color, and all of this leading to a post-traumatic stress disorder that is correlated to systemic racism. So now we talk about the global response to systemic racism. And that global response, you can change the slide, is really creating an outpouring of support. Throughout the literature and throughout many of the articles that we're seeing in the media, there is a question being raised whether the support that we are seeing now from big tech, or corporations in general, is genuine, or whether it is linked to some sort of fear, guilt, or shame. But there is a global solidarity. And in that solidarity, many of the brands are now aligning themselves with social justice and really supporting many of the social justice movements that have been advocating for interventions and approaches that deal with systemic racism. We've also seen a recognition and a reckoning of what systemic racism is and how it impacts us at every level in society. And of course, the advocating for social justice, and a renewed commitment among many of the big tech companies to investing in black talent or to now creating programs to support diversity and inclusion.

And when we think about systemic racism in A.I., you can move the slide, we recognize that there is a system, a system so deeply rooted, that enables systemic racism. And we've got to ask ourselves, from an A.I. perspective, from a tech perspective: what are the parallel systems that we have been creating that support systemic racism?

What are the structures that we have in place in the organization, in the design, in the development, and in the deployment of A.I., those structures that preserve systemic racism? And what about the political forces? We know that design is something that is very political, and in the politics of design, what we do see are social impacts that may not be reversible. So what are the social forces that encourage systemic racism at various levels? What are the privileges that excuse it, in the behavior, in the language, in the ways in which things are framed?

And, of course, the financial inequalities that sustain the system. So when we think about racial bias, we think about how racial bias really informs implicit bias. And implicit bias really refers to those attitudes and stereotypes that affect our understanding, our actions, and our decisions. And when we think about the design process of A.I., we've got to ask ourselves: who's making those decisions? Who's at the table when those decisions are being made? How diverse is the mix when it comes to the design, the development, or the deployment of the technologies that we are seeing? And we've got to think about stereotypes. How do stereotypes filter into the design process? We know that stereotypes are really these fixed, oversimplified images and ideas of a person or a group. And we've seen some of those stereotypes, particularly in the context of police violence and police brutality, in many of the videos that have been circulating on social media, videos that have brought this whole situation to a boil and made it go viral.

We see stereotypes, particularly when it comes to, let's say, African-Americans: strength, speed, physically threatening, angry, must be controlled by aggression.

And we see the integration of these multiple pieces of misinformation creating an image, particularly now that we're dealing with law enforcement, in the minds of those individuals, shaping how we react to or interact with a particular group.

So when we think of systemic racism, and we think of implicit bias, and we think of subconscious biases, we come to a place where many psychologists and many psychiatrists have been trying to come up with a sort of multicultural spyglass for deconstructing this idea of systemic racism. And many fraternities and many fields have offered philosophies. There has been the idea that systemic racism is a mental health issue, a form of pathology, something so deeply rooted that it would probably take forever to pull it out. There have been neuroscientists speaking about creating a pill, a pill to treat privilege and a pill to treat prejudice. So there's been some work happening in the background. But we realize that it is probably a combination of nature and nurture, or it's just social systems that are so deeply ingrained that they're really difficult to treat.

So there is a lot of talk about that sort of deeply rooted hate that we see. And what we see from neuroscience and the research suggests that it is an uncomfortable reality, and that ending something like systemic racism can be achieved, but not just through counseling and therapeutic sessions or anti-bias training; it will take some difficult work that we've got to do as a society.

So we know that when it comes to systemic racism, when it comes to these deeply rooted issues, many of us don't know we have them, or may not think we have them, but sometimes they crop up so easily and so quickly, subconsciously. And this really poses a question of humanity: how do we perceive a human being? And it speaks to empathy, and how we use empathy, even from the perspective of A.I., in our design processes at every stage of the life cycle.

Are we paying attention to these really deep-rooted issues, and how are they informing the ways in which we create technology to interact with society? So we know what implicit bias is; it is also considered unconscious bias.

We know that it operates outside of our awareness, and it can be in direct contradiction to our own belief system, meaning that many times we don't believe that we have biases, but we all do, each and every one of us. And I think what makes implicit bias so dangerous is that it slips into our attitudes and our behavior automatically. But the big question remains: the exact sequence of mental events that creates bias is still unanswered.

So what have we seen, particularly in the past, when it comes to A.I. and some of the mistakes that A.I. has probably made along the way? We have seen new technology creating old divisions: facial recognition that has misidentified people of color and women. And of course, now we have seen IBM and Microsoft saying that they are going to discontinue the development of that technology, and Amazon, which has probably been the greatest supplier to law enforcement across the world, saying that it's going to put a moratorium on it.

The CEO of IBM saying that we do not support technologies that promote racial injustice, and that they are now committed to social justice and really taking an honest and open approach to understanding some of the biases that are created with technology. Or something like computer vision for self-driving cars, which is challenged when it comes to spotting pedestrians of a darker skin tone. And what about airport scanners that have difficulty dealing with black hairstyles, or software credit scores tied to racially segregated neighborhoods, or lending tools that charge higher interest rates to Hispanic and African-American borrowers? These are things we've got to think about; we've got to think about how new technology is really creating those old divisions.

But the big question is: how do we reprogram centuries of systemic racism? At the basis of that are the presumptions, the assumptions, the confidence with which a bias is played out. Combine that with sometimes basic ignorance, and with professional and institutional networks of racism that also crystallize levels of privilege. And we've got to think about these, particularly within the context of A.I., as we move now to deploy this technology in so many unique ways because of the requirements of COVID-19. When we talk about systemic racism, particularly now, you've probably been hearing about two pandemics: the pandemic of COVID-19 and the pandemic of racism.

And I think what they do have in common is that they are now being seen as public health challenges, requiring an epidemiological intervention that is very, very strategic, that really breaks the system down and applies an intervention at every level of the system. So when it comes to neuroscience: neuroscience has really exposed the shortcomings of the current approaches to combating racism. And it has revealed that curing hate is theoretically possible, but only under the right circumstances, and I think we're still trying to negotiate what those circumstances are, and, of course, with enough mental effort.

But the question becomes: are we prepared as a world, as our own society, as our own groups, as our own organizations, to make that mental effort? And what neuroscience is saying is that we can unlearn it. We can unlearn it if we want to, as we have unlearned violence in our society. It is really a constant type of learning. We have seen that many people exposed to violence early in life have been able to unlearn it and really change their lives. So studies have really looked at this.

And I think most of the studies have looked at that sort of interaction with black men and people of color in general, and in many of the studies neuroscientists are saying that there is this natural fear that comes into the mind.

And we've seen it through brain imaging and brain fingerprinting: how the mind works and how the mind reacts to fear in that regard.

But there have been some studies that have given us some sort of temporary relief.

And there's one in particular that's often cited, by Calvin Lai, I think from Stanford. He looked at nearly twenty-three thousand people, and in one experiment he created a scenario where researchers asked subjects to imagine being kidnapped by an evil, middle-aged white man, only to be saved by a dashing young black hero. Within minutes, the subjects decreased the intensity of their biases, and the speed at which those prejudices were associated, by 50 percent. However, the catch was that after just a couple of days, he said, those effects faded away.

So what we have seen over the last twenty or so years is that implicit bias work, which turned into diversity training, really has not worked in the workplace.

And over a billion dollars has been spent by tech in the last ten years to introduce and sustain diversity training. But what the studies have shown us is that encouraging participants to embrace diversity through training is not always that effective.

And many times it really backfires, because you don't get the requisite levels of honesty that are required, and people sometimes become really defensive and divided. So there's been a lot of work; I think in any and every tech company, there's been an extraordinary amount of work when it comes to implicit bias.

But what we've seen is that although these things have been tried, they've really not been tested, because the research is not there. One psychologist out of Princeton, Betsy Levy Paluck, reviewed hundreds of interventions designed to reduce prejudice and found that only 11 percent of all experimental efforts were tested outside of the lab, and few corporate training sessions, particularly in tech, were ever evaluated. So we've seen billions being spent on mandated employee training, on racial sensitivity, diversity awareness, compliance with anti-discrimination laws, and lessons on how to build a better workplace with diversity and inclusion. But what we have seen is that it's not been working. And many of you may already have been exposed to the Implicit Association Test, which is widely used; what it measures is how our mental associations influence behavior and how our minds link concepts, assessments, and stereotypes about other people. This test has been given out so many times, and really what we know about the test is that it is flawed: the results are questionable, there is still a lot of skepticism around it, the scores are not stable, there is low test-retest reliability, people get different scores depending on when and how they take it, and the meta-analyses show a weak relationship between the scores and behavior.
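To make the phrase "low test-retest reliability" concrete, here is a minimal Python sketch. The scores below are fabricated for illustration only and are not real IAT data: test-retest reliability is simply the correlation between two administrations of the same test to the same people, where values near 1.0 mean stable scores and values near 0 mean unstable ones.

```python
# Hedged illustration of test-retest reliability (fabricated scores, not real IAT data).
from statistics import correlation  # available in Python 3.10+

# Made-up bias scores for ten people who take the same test twice.
first_session = [0.42, 0.15, 0.80, 0.33, 0.61, 0.05, 0.48, 0.72, 0.20, 0.55]
second_session = [0.10, 0.50, 0.35, 0.60, 0.22, 0.45, 0.70, 0.18, 0.52, 0.30]

# Pearson correlation between the two sessions; a value far below 1.0
# is what critics mean by "low test-retest reliability."
r = correlation(first_session, second_session)
print(f"test-retest reliability r = {r:.2f}")
```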

So it comes back to decision making. One of the things that we do in A.I. is make some very big decisions. And what I'm hoping this talk will make all of us think about is, when we are in that position to make those decisions: how conscious are we? How vigilant are we when it comes to the ways in which we are going to use data? Because we've got to think about fairness when it comes to making those decisions. So when it comes to data-based discrimination, this is where A.I. is really now at the fore. What we have seen historically in A.I. is that historically disadvantaged and vulnerable groups have been denied full participation: because of biased data sets, because of the widespread biases that persist in society and are deeply baked into the data that is often used, and because of pre-existing patterns of exclusion and inequality that are also in the data. And we have seen how A.I. has amplified historical and institutional discrimination and default discrimination.

How does that slip so easily into the design process? Many of us in the world of AI ethics speak about it as unintended consequences, but for those groups that are affected, those consequences that we call unintended feel very, very intended. And we've got to speak about engineered inequality, which leads us into how structural racism rears its ugly head in A.I. And then what do we see? We've seen it when it comes to A.I.: digital bias, digital discrimination, digital marginalization, digital profiling, particularly with big data policing, and digital redlining. We see it again and again in finance and insurance, mortgage loans, banking, all of this digital victimization. And we see all of this in the space of A.I. We've got to ask ourselves: what are we going to do about it? So when we talk about these biased databases, we've got to ask ourselves about the many racialized predictions that we've seen in the criminal justice system and in law enforcement, in healthcare, in finance. We've got to ask ourselves: how do these biases slip through the back door of design optimization? That's a big question that Ruha Benjamin has asked in her book on the New Jim Code, which is really, really significant at this moment. She speaks about engineered inequality, about how we do these things subconsciously, and about technological benevolence: when we design technology thinking that we are doing it to help, thinking we're doing good with technology, when it really does not help. And we've got to think about the hidden effects of algorithms, algorithms of oppression. Safiya Noble, if you've read her book, looks at that: she looks at how algorithms create racial disparity, how they support economic disparities, and how they encourage, through their processes, victimization.

And, furthermore, marginalization. Algorithms are now creating new types of systemic discrimination, new levels of discrimination, what we call poly-victimization. So we've got to think about that as well. And we've got to ask ourselves: how raw is raw data? Machine learning relies on these large, oftentimes very problematic data sets, and they need to be treated with some level of consciousness and requisite levels of vigilance, so that we can understand the racial, economic, and gendered biases, those deeply ingrained cultural prejudices, those structural hierarchies that are baked into the processes. And predictive policing is the example that we all know.
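As a hedged illustration of the feedback-loop concern behind training on historic arrest data, here is a minimal Python sketch with invented numbers: two neighborhoods have identical true crime rates, but one starts with more recorded arrests only because it was patrolled more, and a model that allocates patrols in proportion to past arrests keeps "confirming" that biased record.

```python
# Hypothetical simulation of a predictive-policing feedback loop (toy numbers).
recorded_arrests = {"A": 60.0, "B": 40.0}  # biased historical record
TRUE_CRIME_RATE = 0.5                      # identical in both neighborhoods
TOTAL_PATROLS = 100

for year in range(5):
    # "Model": allocate patrols in proportion to past recorded arrests.
    total = sum(recorded_arrests.values())
    patrols = {n: TOTAL_PATROLS * a / total for n, a in recorded_arrests.items()}
    # More patrols produce more recorded arrests, regardless of true crime.
    for n in recorded_arrests:
        recorded_arrests[n] += patrols[n] * TRUE_CRIME_RATE
    share_a = recorded_arrests["A"] / sum(recorded_arrests.values())
    print(f"year {year}: patrols sent to A = {patrols['A']:.0f}, "
          f"A's share of recorded arrests = {share_a:.0%}")

# A's share stays pinned at 60% every year even though the true crime
# rates are equal: the initial patrol bias never self-corrects.
```

The point of the toy model is that nothing in the loop ever corrects the initial bias; the data the model learns from is a record of enforcement, not of crime.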

We've seen what's happening with policing, but we also know that these calls to defund the police, to reimagine policing, or to reinvent policing may create a real heavy over-reliance on A.I. to come up with more strategies to change the dynamic or enhance the relationship between police and communities all across the world. So we've got to be particularly vigilant right now, because what we may see is an expansion of big data policing. We may see more rigorous approaches to using predictive analytics to police communities. And we've got to think about that. We've got to think about overpolicing, and we've got to think about this level of intense and excessive surveillance of communities of color. We've also got to think about how we design these tools, because what we have seen to date is that all the big data policing tools are being designed to treat street crime. But we know financial crime, right? We know financial crime, what we call the crimes of the suites.

Those are also big crimes. But we're not seeing all of those apps.

I think there's probably one app, designed by some sociologists, to track financial crimes in Manhattan. But we've got to look at the traditional approach that we use in training classification models for predictive policing: geospatial historic crime data. All of these things are really saturated with systemic racism. And I was reading an article maybe two days ago about the gaming industry. I'm not highlighting any one industry; I just felt this was a very salient point. Given the fact that young Black and Latinx users play these games at a higher rate than anyone else, the characters in the gaming industry really were not representative of the diversity that we are actually seeing among the users. The creators are not as diverse, but the individuals who are consuming are, of course, very diverse. And one of the experts asked a very pertinent question: why can't black characters ride dragons?

I think that that is really, really deep.

And we have got to think about that when we are designing and when we are really coming up with those ideas. You can change the slide. So we are obligated, I think, when it comes to A.I., we are definitely obligated when it comes to ethics and the work that we are doing, not to repeat the mistakes of the past. Why use new technology to create old divisions? So I think we have got to reduce the disparity caused by human biases. We've got to do things to increase due diligence, due process, and duty of care.

We've got to use A.I. to create the kinds of protections that are required against bias and unfairness. We've got to prevent algorithms from perpetuating human and societal biases. We've got to collaborate with diverse researchers and policymakers.

You know, researchers and engineers must work closely together, because I think we have a responsibility when it comes to dismantling these systems and ensuring that they're not rebuilt in a digital space. We've got to look at best practices, and we've got to identify multidisciplinary research to support a more forward-thinking and more social-justice-oriented approach to the ways in which algorithms are used.

And we have got to think about what we are doing as we move to reimagine A.I. I ask you: what is the challenge? No one is stopping us from coding diversity, and coding equity, and coding inclusion, and coding fairness, and coding context and visibility, and coding empathy, because A.I. is still about humanity.

So I think at this moment what is required of us would be honesty, because with honesty there can be healing and there can be moving forward. We've also got to be introspective and reflective, and understand that these conversations are at times uncomfortable, but in discomfort we can come up with the solutions that are required.

And as we move forward as an industry and as a fraternity, we've got to ask ourselves: what do we plan to do? And whatever changes we decide to make, as individuals and organizations and corporations, they have got to be sustained changes. So we've got to look at the boardroom, we've got to look at the C-suite, and we've got to look around every time we're in that room and ask ourselves: who are the people next to us? What are their belief systems? And do we have the right mix at this table to serve humanity at this time?

Thank you very much.

We want to thank Renée again for that fabulous, invigorating, powerful talk.

So, Olivia, what are some of your takeaways from that event?

So, as part of the event, after Renée's talk, we did these breakout group discussions. And one of my favorite takeaways comes out of the group discussion I was having, based off of Renée's talk, where Renée is advocating for the need to ask questions. Those questions will lead to people taking a step back and considering their technology, considering their actions, often from a different lens, and that different lens is what's going to start making an impact. And one of the takeaways off of that: in my group discussion, we were talking about how people often feel afraid, if they're not from a technological background, of questioning tech in the first place, because they think, oh, I'm not an expert in this, I don't understand tech, so I have no right to have a question. But we sat down and we went: no, that's not true. Actually, the people not involved in tech should be the ones really asking those questions, because they're bringing new perspectives into the conversation and essentially forcing the people who work in tech out of that mindset, that bubble, to consider the wider picture. So for me, it was building off of Renée's advocacy to ask questions and have your voice heard, and asking those questions specifically in a tech setting.

Yeah, I think I actually had a similar outcome in my breakout group. So for context, both Dylan and I were conducting discussions and helping create dialogue and conversation in breakout rooms of our own. And in my breakout room, we also talked a lot about this idea of taking agency and really taking individual action, especially because the people in my discussion group didn't really come from technical backgrounds. And so they were feeling this sense of responsibility, like a lot of people at this event were, but they didn't know what to do with that, and they didn't feel like they could really do anything with it. And we played with this idea of: is it really worth it to take individual action and to stop using apps that are collecting our data? Is it really making that big of a difference, or should we just be joining these bigger movements that are clearly making much bigger steps towards systemic change than we could as individual entities? And then, after we dissected that idea a little bit more, we realized that, no, it is important. It's important to take action as an individual. It's important to claim agency over your own data, how it's being collected, how it's being used. And those small actions will eventually compound and bring us towards the greater systemic change that we're looking for. And it needs to start somewhere. So we can't just take a step back and say that nothing we do matters in the long run. We need to actually recognize that we can make a change at an individual level, and it starts with small pieces of action.

And one quote that I pulled from Renée's talk, one that really stuck with me in both empowering and challenging ways, is when she said: why create new technologies to recreate old divisions? For me, this gets to the heart of both what the next steps are and also how difficult it is to uproot these systemic divisions that are so historically oriented, especially in terms of racism and sexism, and that are embedded in these technological situations and systems. But really, what it's going to take is for us to look back at ourselves, to look in the mirror, and to do some really difficult social work to uproot those systems of oppression. And I'm curious, Olivia: what do you hope that folks do with this talk?

Since you have birthed it along with Renée and brought it out into the world, what do you hope that people will do in terms of application?

It comes back to, first of all, having the courage to ask questions, specifically about technology. I mean, technology is just an echo of who we are as humans, so being able to ask questions of technology is actually being able to ask questions of who we are as humans. Right now, it's a very charged conversation, and even in the group discussions I felt people kind of tiptoeing around it, until we established this even playing ground where everyone went: OK, you know what, we all want the same end goal and none of us really knows how to get there, but as we have this conversation and this discussion, we start to understand how to get there. So my hope is that this specific event, as well as events to come, and hopefully it inspires more events in the first place, inspires discussions. Because at the end of the day, when you bring people together with the same end goal, no matter what their backgrounds are, that's when you get the really interesting solutions and the ideas flowing. We're not going to see change if we don't actually sit down and discuss what change we want to see. I loved how, at the end of the workshop, we had Renée talking about how she sees this as a movement, and you just saw everyone's heads nodding along, like: yeah, this is a community, this is a group, and our voices are coming together, even after such a heavy workshop on a very heavy topic.

We still came out on the other end with hope, hope that what we're discussing won't stay just in this virtual little Zoom bubble but will actually move on to have an impact outside of it. So I think, echoing what Renée said: this is a movement, and we are coming together in this movement, and there are people leading it.

But leading it means actually being able to pull together everyone's interests and everyone's voices into one loud, amplified voice.

And we invite our listeners to, of course, support Renée and her scholarship and her work with Urban A.I.

And Olivia, if people want to help join in your movement in ethical A.I., is there a place where they can find you, or find more events like this wonderful event with Renée, going forward?

Yeah, absolutely. Check us out on Twitter; you can follow us at @ethicalai_co. Or find our website at www.ethicalintelligence.co. We have constant updates on the different events we're throwing, which usually revolve around webinars, workshops, podcasts, and all that good stuff.

Well, Olivia, thank you so much for coming on and being our very first guest host on the podcast. And thank you so much for all the work that you're doing with Ethical Intelligence.

Thank you guys so much for having me. It's been great.

Absolutely, it was our pleasure. For more information on today's show, please visit the episode page at radicalai.org.

If you enjoyed this episode, we invite you to subscribe, rate, and review the show on iTunes or your favorite pod catcher. Join our conversation on Twitter @radicalaipod.

And as always... well, Jess and I were wondering, Olivia, do you want to do the "stay radical"? You should do that. Yeah. OK. How do I do it? Just jump in. So Jess says "and as always," and then you say "stay radical." OK. Just me? Yeah. And as always, stay radical.

Yeah. Nice job. Perfect. Said it like it is. And tune in next time.
