Episode 7: Racism and Sexism in AI Technology? Navigating Systems of Power with Sarah Myers West


Can you separate technology from power? Is technology ever objective? How do we build technology that meets the needs of everyone? To answer these questions and more, The Radical AI Podcast welcomes Dr. Sarah Myers West to the show. Dr. Sarah Myers West is a postdoctoral researcher at the AI Now Institute. Her research centers on the critical study of technology and culture, with an emphasis on historical and ethnographic methods.

Dr. West is currently working on a project that addresses the politics of diversity and inclusion in technological communities by exploring the nexus of artificial intelligence, gender, and intersectionality. She received her doctoral degree from the Annenberg School for Communication and Journalism at the University of Southern California in 2018, where her dissertation examined the cultural history and politics of encryption technologies from the 1960s to the present day.

If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on Twitter at @radicalaipod.

This transcript was generated automatically and may contain errors.

Welcome to Radical AI, a podcast about radical ideas, radical people, and radical stories at the intersection of ethics and artificial intelligence.

We are your hosts, Dylan and Jess. Just as a reminder for all of our episodes: while we love interviewing people who fall far from the norm and interrogating radical ideas, we do not necessarily endorse the views of our guests on this show.

In this episode, we interview Dr. Sarah Myers West, who is a postdoctoral researcher at the AI Now Institute. Her research centers on the critical study of technology and culture with an emphasis on historical and ethnographic methods. She is currently working on a project that addresses the politics of diversity and inclusion in technological communities by exploring the nexus of artificial intelligence, gender, and intersectionality. Dr. West received her doctoral degree from the Annenberg School for Communication and Journalism at the University of Southern California in 2018, where her dissertation examined the cultural history and politics of encryption technologies from the 1960s to the present day.

Previously, on different episodes of our podcast, we have done a segment to preview the upcoming interview with our guest called Loved, Learned or Leave. And as this is a learning process for both Dylan and me, we are going to try something new from here on out, where we just quickly touch on the main topics that we're going to be covering and that we go over with our guests during the interview. And then we'll do a little bit of a debrief at the end of the interview for our initial reactions, with a longer debrief in our minisodes that come out about once a month. So for today's interview with Dr. Sarah Myers West, the main topics that we covered were the relationship between technology and power, how A.I. systems are reflective of social issues, including racism and sexism, the responsibility of A.I. technology in solving societal issues, and why diversity and representation in the field of A.I. are vital.

I am absolutely psyched to share this episode, and it was such a wonderful conversation with Sarah, because Sarah is actually someone that I've looked up to for the entire time that I've been doing this A.I. ethics work. She, and AI Now more broadly, are the people and the group that I've probably cited the most in my own research, especially one of the papers that we talk about in this episode, the Discriminating Systems paper, which is about race and gender and power and social systems.

And so it was such a pleasure to hear more from, I guess, the other side, to kind of pull back the curtain. And I really got to nerd out, because, again, you know, AI Now and Kate Crawford and Meredith Whittaker and Sarah herself are people that I really think of as people that paved the way for the work that you and I are doing, just in bringing up these radical issues, especially around power. So I was really excited to have what almost felt like a personal connection to this interview and to Sarah because of her work.

Yeah, I definitely also have a bit of a soft spot for Sarah and for AI Now, because as I was beginning to develop my career in AI ethics and deciding whether or not I even wanted to go to grad school, they played a very big role in my decision to go into this field.

And now we're actually getting to engage with the researchers at this institute. We're getting to talk to people like Sarah. And that's one of the reasons why I love academia so much, because we have these academic celebrities whose research we engage with so deeply, and then we actually get to meet them and talk to them and have conversations and sometimes even collaborate with them. So it's wild, and it's a really fun process and such a journey, and we're just really excited to be able to do this and engage in this way.

And for this interview, and some of the upcoming ones that we have with people that we've admired for a long time, we have to remind ourselves, you know, like, take a deep breath. They're people, too. They're people, too, because these are the people that, you know, we cite, that we read about, and that are really the people that we look up to in doing the work that we do.

And so for these reasons and many more, we are so excited to share this interview with Dr. Sarah Myers West.

Thank you so much, Sarah, for coming and joining us on the show today. And before we get started, we just want to get to know a little bit about you, not only as a researcher, but just your journey throughout life and what has brought you to where you are today. So if you would do us the pleasure, we would love to hear your story and a little bit of your background. Sure.

Well, first off, it's really lovely to speak with both of you.

And I'm excited about what you're doing here with the Radical AI podcast. So I've been looking forward to this conversation.

So right now, I am a researcher at the AI Now Institute, where I'm studying the dynamics of gender, race, and power in artificial intelligence. My background is mostly in cultural studies and science and technology studies, and basically what that means is I approach my work by looking at the broader cultural and social factors that are shaping the ways that technologies are influencing our lives. But I also attend to the material dimensions of how technological systems are built, how their politics might materialize in infrastructure.

And I have always been inclined to history, to trying to understand how the past is reflected in our present.

So I use a combination of history and ethnography to try and study the dynamics of how technologies are produced, kind of reading our present conditions against the grain of the past, and especially in the current moment, really looking for moments where things might have been otherwise and what we can learn from them.

So for yourself, has this always been kind of your research identity? Like, as a child, were you like, I'm going to work for AI Now at some point, it doesn't exist yet, but that's what I'm going to do?

I mean, I've always been really, really interested in science and technology from a really young age. It's so funny, just having this conversation makes me think back to, like, I'm a huge Jurassic Park fan. Like, I loved that movie growing up. And there's so much that's encapsulated in that movie that you can see in what I'm doing today: both the notion of hubris around technology, that it's both dangerous and beautiful, and there are strong, like, feminist principles in that movie. And, you know, it really tries to unpack the larger, like, politics surrounding the technologies that are shaping our lives. So, I mean, I've always been inclined to be interested in science and technology, and I think that that's definitely, you know, been my course from an early age. But would I have anticipated this particular path a billion years ago? Probably not. Like a lot of nine-year-old girls, I wanted to be a marine biologist.

So there's a tweet that's been going around while we're recording this episode about Jurassic Park in particular. Oh, really? About, yeah. So the tweet reads: I owe the Jurassic Park franchise an apology. It is, in fact, very realistic. The rich would reopen a park in spite of it consistently resulting in mass death. And this is about, like, the COVID-19 pandemic that we're currently in. But I'm very interested in this intersection between Jurassic Park and, like, sociology. It encapsulates everything.

I feel like Michael Crichton was an early STS scholar, in some respects. Something I wanted to ask you about, in terms of your path, is that it seems to have evolved a little bit, at least maybe in the last decade or so, especially your academic path.

From what I saw, your doctoral research was very related to encryption, and now you're studying and researching things that are seemingly quite different than encryption.

So I'm wondering what the path was there and how that happened and evolved.

Great question. So, yes, my dissertation research really looked at the cultural history of cryptography, and what my present research shares with the research that I was doing then is a concern with information and power. You know, encryption technologies are really foundationally about, well, I argue that they're about many different things, but one of the dominant things that encryption technologies have meant to us culturally is being able to control access to information, to control who can access information, or to try and break into other people's information that they're trying to keep secret. I think that similarly, we see dynamics of attempts to control information that can help us make sense of the world around us in the infrastructures around artificial intelligence systems today. So I think that that's one very clear through line. Another one is looking at data infrastructures and data flows. So one of the first pieces that I published before my dissertation work looked at the development of cookies and tracking technologies and how that sort of set us on a path towards what Shoshana Zuboff calls surveillance capitalism. And, you know, making that move between trying to study what technologies mean to us culturally and how they surface in data infrastructures, I think, is very much still a project that I'm engaged in.

I know where I first experienced your scholarship was with the report entitled Discriminating Systems: Gender, Race, and Power in AI.

And it's something that I've quoted a lot and cited a lot in my own work. And I'm wondering if we could just start at the beginning. So what are these discriminating systems and what do gender, race and power have to do with artificial intelligence?

So artificial intelligence systems are largely classification technologies, right? So they're frequently looking at massive datasets and then trying to comb through those datasets, identifying patterns and making predictions about future behavior, things like that.

And so what they do, because these datasets are largely reflective of wider social conditions, is reflect and amplify existing forms of social inequality. And what I've been looking at are some of the most pervasive forms of social inequality, which are racism and sexism, which we see surfacing in systems almost on a daily basis.

So that's, I think, at the heart of Discriminating Systems: trying to make sense of, one, where are we seeing these dynamics of social inequality surfacing in systems? And then, two, if we've known that this is a problem for as long as we have, and we can trace this back, you know, I was reading articles from 1993 that were talking about issues of gender inequality, and back in the early 1980s grad students at MIT were putting together reports outlining issues of discrimination at CSAIL, these issues stretch back pretty far in time, but they've remained really pervasive. So what needs to change? Why haven't things changed? I think foundationally it's because racism and sexism haven't changed, and we need to tackle those deeper social problems at the root.

Since these social issues, like sexism, discrimination, and racism, are so deeply embedded into some of our computing technologies, especially artificial intelligence, do you think that it's the responsibility of these technologies and these underlying algorithms to just stop the perpetuation of those social issues? Or do you think that it's their responsibility to try and fix them and solve them entirely? Is that even possible?

Yeah, that's a big question. Well, one, I would be wary of situating both the problem and the solution solely in technology, or, you know, even one step further, and saying this is a problem of bad engineers doing bad things with technology. There are cases where that might be true, but we also know that it's a much more complex and pervasive problem than that.

So I think that, one, this is the reason why we need social science and humanistic scholarship, of which there now is a substantial amount, but work that needs to be attended to and listened to, to really understand the complexities of social life and how they're intertwined with technological systems.

Now, the second part is, how do we turn to technology to fix it, to fix it in the technology? And here's where I think we need to be really careful. It's important to look at the larger dynamics of power and how technological systems are implicated in power structures, because there is a risk that by trying to fix the technology, you can actually exacerbate the underlying social problem. So let's say we have a biased facial recognition system. Fixing that biased facial recognition system doesn't really help communities of color if that system is then used for surveillance or for deportations, which we know that they are.

And so I think it's important to attend to, like, the much wider dynamics, from the development of a technological system, to its material dimensions, to how it's being used in the world and in what context, in order to really get at the heart of that question.

One thing I've heard as a pushback to my own work in the humanities, commenting on the work of technology, and especially bringing that discussion of power into the conversation in general, is that technology has nothing to do with power, right? Technology has nothing to do with society; it's just totally objective. And I'm curious, from your perspective, how you would address those critiques or challenges.

Technology has everything to do with power. In its very definition, it has to do with power.

As a cultural studies scholar, I'd say that the notion of objectivity is itself a social construction. It's something that's produced through certain conditions, through rhetoric that gives you a sense that something is purely rational or purely neutral.

But we know that that doesn't really exist in the world around us.

The scientific method is something that is socially constructed as a way of proving knowledge claims. So I would first question, what do you mean by objective, or what do you mean by neutral? And then, you know, look at what technology is doing in the world. It's foundationally social.

And that's never going to be neutral. Going back to this idea of power and the immense power that technology has over society. Something that I know you've done a lot of work with in the past is this idea of representation in the field of artificial intelligence. And it's definitely very important to address that these powerful technologies are being created by a select few individuals. So I'm curious what you have to say about the importance of representation in these powerful technologies and really the lack thereof right now.

It's pretty bad in terms of the dynamics of representation in the field of artificial intelligence. And what's worse, it's been bad for decades. So when we were doing the research for Discriminating Systems, I pulled as much data as I could possibly find on, you know, what is the state of representation in artificial intelligence.

We found things like, you know, 18 percent of authors at the leading artificial intelligence conferences are women. Over 80 percent of A.I. professors are male.

Another study looked at awards given by leading AI conferences, and the Conference on Computer Vision and Pattern Recognition, which is the leading computer vision conference, had not given an award to a woman in over a decade.

NeurIPS has only given three percent of its awards to women over the past decade.

We know, for a variety of reasons, that we don't really have good data on racial diversity in the field of AI. But the statistics that we have suggest that it looks even worse.

The machine vision researcher and co-founder of Black in AI, Timnit Gebru, found that the first time she went to NeurIPS, which I think was in 2016, she was one of six Black people in the audience out of about 8,500 attendees.

It's just astonishing. So the data that we have suggests that representation in the field is pretty darn bad. But we know that the data itself is only part of a much larger story about what representation in the field looks like.

Because it doesn't account for the experiences that women and trans people and people of color have when they're working within the industry.

And we know that discrimination and harassment are rampant at all of the leading companies that are working in the field of AI right now.

So representation matters.

Some of those facts and figures, the first time that I read them, and this was in the AI Now 2019 annual report, absolutely shocked me, but didn't surprise me.

I guess I would say. And a lot of the work that I've been doing for the last several months in my PhD program has been about white male accountability in this space.

And I'm curious, from your perspective, what do we do? Like, how do we implement accountability into these spaces, and maybe even into our algorithms, but just across the board, what do we do?

I mean, I think it's really important that the onus to fix issues of discrimination in the field not only be placed on those who experience it. And so I think that it should be the responsibility of everybody to be actively anti-racist, to be actively fighting discrimination in the context in which they work. So I think that absolutely it needs to be seen as more than about achieving diversity or achieving representation.

I think it's more multi-layered than that. And I think that, absolutely, the work needs to be done by people who don't experience it, too.

It cannot just fall only on women and people of color to do the work of diversity.

And when it comes to diversity, especially in the field of artificial intelligence, it isn't just this one-sided problem of representation of who is creating these systems; whoever creates these systems is also playing a big role in the way that they function. And so there's also this problem of discrimination in the systems themselves, based off of who's creating them. So how have you seen that show up in your work? And how does diversity and representation in the creators of these systems reflect in the implementation of the systems themselves?

Yeah, actually, that's kind of where a lot of my work is focusing right now. So I don't have, you know, quick and punchy answers, because it's going to take a lot more deep work to really clearly do the forensic analysis that's needed to understand these relationships. But I can tell you that in the research that I've been doing, you can see issues of, like, whiteness and patriarchy stretching all the way back to, you know, really the origin story of artificial intelligence.

I would also, though, kind of push on the premise, which is looking only at the process of developing technology, because we also need to look at, like, what do we envision these systems being used for? How are they taken up in ways that might be different from what they were designed for?

You know, you'll often see, learning from the experiences of the disability community, a system might be developed ostensibly as an assistive technology, something that's supposed to address the needs of the disability community. But as soon as it becomes commercialized, it gets deployed for something completely different, and sometimes at odds with the needs of the community that it was developed for. And so I think looking at that question of secondary use, how things that might be developed for one purpose all of a sudden become surveillance technologies.

That matters, too: looking at the commercial imperative and thinking really critically about that as well.

So right now, as we said, we're in the midst of this pandemic, and there are discussions about contact tracing and other things like that. Do you believe that we're at similar risk of those technologies turning into surveillance technologies, or being experienced disproportionately, maybe even in an oppressive way, by communities that are already marginalized?

I mean, I'm not a health care expert, so I can't speak directly to any particular technology. But I certainly have those concerns. And I think that the underlying concern is that there's a widespread pattern of not involving the communities that are going to be affected by these systems in making these decisions or in any kind of accountability process. So I think that that's where we need to start: involving the communities that are going to be affected and potentially harmed by these systems way earlier in the process, and in a way that's actually really meaningful, not just about the optics of community consultation, but really deep engagement.

That actually seems to be a theme that's been coming up a lot in some of our interviews recently, this idea that all these technologies should be subject to democratic discussion, but they almost always aren't. So do you know of a way that we could try to encourage large tech companies or the creators of these technologies, especially proprietary technologies? How do we get everyone in on the conversation, including these communities that are being impacted by the technology? What can we do to think about those who need to and should be a part of the conversation when it comes to designing and creating these systems?

I mean, just the questions of what is a community, and what are the communities that are going to be most affected, are really hard and complex questions in some respects.

And I think one is resources, you know, making sure that you not only provide resources for identifying the problem, but create the resources necessary to enable people to engage: paying people for their time, having a travel budget that can fly people out to, like, meaningfully participate in processes. That kind of thing matters.

But secondly, I just know, based on past behavior, that I'm not necessarily convinced that leaving it to companies to police themselves is going to be the solution here. I think we can see what the incentive structures lead towards.

And so I think that there's going to be an important role for external accountability, whether that's, you know, regulation, or whether that's protest movements and organizing, both of which we see happening much more actively over the past year, year and a half.

That's one of the things I really appreciate about AI Now and Kate Crawford and Meredith Whittaker and all of you: that you all are seated at this intersection between, like, the academy, the industry, and policy and regulation. And I'm wondering if you have any thoughts about how those different domains can work together towards creating a more equitable, I guess, reality for technology, and technology out in the world.

So I'll only speak in my personal capacity and not on behalf of anyone else that I know, necessarily.

And I think that the strongest form of advocacy is advocacy that's rooted in evidence. And so what I'm really keen to do in my own work is to do really rigorous research that points toward, you know, social problems that need addressing, or points, where appropriate, toward solutions, because some of these problems won't have solutions.

But where there are solutions, you know, I want to be doing the work that provides the evidence basis for them.

So I think that there is an intersection between doing really solid research as the foundation and then orienting that research toward social justice goals. That is, I think, at the core of all the work that I'm doing, and that I think a lot of my colleagues are doing right now.

And part of what I think is amazing about what you bring to this is a humanities perspective. And I'm curious about your thoughts on what role the humanities have to play. You've talked a little bit about this, but I was wondering if you could put a finer point on it. What role do the humanities play? I guess, what perspective can the humanities bring to these questions of technology ethics?

I mean, I think that the humanities play a really important role in helping us to understand the underlying human conditions in which we're living, in enabling us to provide a larger context, to understand the role of ideas, and to expand our imaginary through the arts. You know, these are important lenses that, honestly, if you look back through the history of artificial intelligence, used to be much more at the core of the field. Artificial intelligence has grown into a field that is much more narrowly technical over time, but that's not necessarily how the field was always constructed historically.

And so I think that there's room to recapture some of that space of possibility.

Something that Dylan and I do for every interview, as a part of this project and this journey that we're on as we're trying to define what radical AI really is and also what the word radical really means, is we would love it if you could tell us, from your perspective, what you think the word radical means to you. And then also, in an effort to try to uplift more diverse ideas, if you could explain a bit about how you think you and your story, or maybe your research interests, are particularly radical in this field.

To me, the word radical is foundationally about social transformation. It's about creating the conditions for a foundational social and political change and transformation that leads towards equity and towards justice. So in that sense, all of the work that I do is deeply radical, because it's oriented around causes of justice.

Now.

In another sense, I don't want any of my work to be seen as radical, if radical is understood as like extreme.

I want social change, social transformation, to feel common sense, to feel necessary, to feel urgent. And so I would probably not situate my work as radical in that more conventional definition, but in its transformational politics.

That's sort of at my core.

Absolutely. I'm curious, maybe even just to take yourself as an example: from where you are now, if you were looking back at yourself as a young child watching Jurassic Park, one day having this wonderful career in AI ethics, what advice might you give that child, and by extension, especially younger women who are embarking in this field?

I think probably three things. One is to develop and hold tight to a set of core values and principles, to develop that core sense of self and what you believe in. Two is to not ever be held back, particularly if you have an interest in working in science and technology, to not ever be held back by the policing around expertise, because we know historically, particularly for women, particularly for people of color, that expertise is defined against identity.

It's shaped more by the identity of the worker than by the content of the work. So I think that that would be number two. And then number three, I think, would be just to read really widely and broadly, as a means of experiencing, you know, the wider scope of humanity around you as much as possible.

Great. And as we reach the end of this interview, Sarah, is there a place where our listeners, if they were interested in engaging a little bit more with your work, could go to find out more about what you do? Sure.

So I would direct them to the AI Now website; a lot of my work is in the publications there. I also have a website, sarahmyerswest dot com, or my Twitter handle, it's at Sarah Myers.

Perfect. Well, thank you so much again for coming on, Sarah. It's really been a pleasure.

Thank you. Thank you so much for the great conversation.

Again, we are so grateful to Dr. Sarah Myers West for taking time during this time of uncertainty in our world to sit down with us, and we hope that you enjoyed this conversation as much as we did. So as we mentioned at the top of this program, we are changing up our format just a little bit, and instead of our longer debriefs happening right now at the end of these episodes, we're going to save them for our monthly minisodes, which happen about once every month, as the word monthly may imply.

For right now, though, we do want to debrief just a little bit and give our first reactions, because as some of you know, we actually record these pretty soon after we record the interview. So we're going to give our first reaction to this episode, to this conversation, and then perhaps add some sort of follow-up research question that we may be bringing with us out into the world from this interview. So, Jess, what was your first reaction to our conversation with Sarah?

So my immediate takeaway and reaction from this conversation, maybe just because it's something that we talked about near the end of the interview, was Sarah's definition of radical.

I think it was just really spot on in terms of something that you and I have been discussing quite a bit lately, Dylan, that we are trying to walk this fine line between figuring out what radical has conventionally meant, which people typically think of as something like extreme, like Sarah was saying, and then what we are trying to turn radical into, which is taking these ideas that people shy away from and making them more the norm. And I love that Sarah mentioned that she doesn't want her research to be viewed as radical, but that things like social transformation should feel like common sense. I think that's so spot on.

It's something that we've, as you said, talked about a lot, even in terms of who we invite on the show, because we both want to represent radical ideas, ideas that are way out there and that are inviting entire institutions and industry into new conversations. And also, we want people to listen, right? And I know from various parts of my own work that when you go too extreme or too far out of someone's comfort zone, they stop listening. And so for us, it's almost like an ethical question, which I think Sarah's definition of radical got straight to the heart of. It's like we want everyone to be able to engage with these topics to a certain degree.

And we also want to make change happen. So how do we, as podcasters, right, and as AI ethicists, work with, I guess I want to say, different circles of industry and the academy that stretch all the way from the conservative to the liberal, from the hyper-conservative to the hyper-liberal? We're really making value statements here. Like, that's what we're doing here. And, you know, we want people to still be able to listen. So how do we walk that line?

And I think the answer that Sarah gave is that she is pushing for, and I think this is a quote, advocacy rooted in evidence.

And maybe that's a little tough for us right now, where we're living in a world where, politically, sometimes we talk about fake news, and even truth and facts are being railed against in certain public sectors. But I think that if we can ground our concept of radicality in inviting new conversations while also providing advocacy rooted in evidence, then I think we're doing our job.

And that's something that hopefully we can continue to do going forward, as we interview people who fall near and far from the norm and represent so many different ideologies and values. And something else for me that I feel myself immediately reacting to from our conversation with Sarah was her ideas and thoughts about what objectivity even means, and how we assume, or many people assume, that these technological systems, especially artificial intelligence, are neutral in themselves and objective, and why those conceptions of technology might be a little bit dangerous when they're misplaced.

For me, I think the biggest thing that I'm taking away from this conversation with Sarah is simply this concept of power.

And questions like what the heck do we do with it?

And so there's a period in this episode where I asked Sarah almost like a leading question, right, because I knew where she was going and I knew where I was going, which was this question about, like, what about people who say that technology has nothing to do with power analysis. And, like, folks at home probably couldn't see me, but I was so giddy at her answer, because for me, it was just like, yes, absolutely.

Like, you can't separate technology from power. And Sarah's answer, I think, was just a beautiful way to say, this is why we do this work, like, this is why AI ethics matters in the first place, because these are real people interacting with real spaces. She talked about materiality a little bit, and there are real downstream impacts of how we interact with power. And technology is all about power, because technology is all about our relationships with one another. And there are still so many questions for us out in the field of, OK, well, even if we acknowledge that power is at the center of these relationships with technology, and even if technology is defined by relationships, well, then what do we do with it? Like, how do we have those conversations and come together to create a world of equity and to use our technology for the betterment of everyone?

Yeah, I think Sarah's words were definitely a very good introduction, also for people who might be new to the field of AI ethics, to understand what some of the broader underlying issues in this field are in terms of ethics and morality and diversity and representation. And honestly, sort of a call, not just for me and you, Dylan, but for everyone listening, a call to remind us why it's important to do what we do.

Thank you so much for joining us for today's episode. For more information on today's show, please visit the episode page at radicalai.org.

If you enjoyed this episode, we invite you to subscribe, rate, and review the show on iTunes or your favorite podcatcher. Join our conversation on Twitter at @radicalaipod.

And as always, stay radical.
