Episode 8: Love, Challenge, and Hope: Building a Movement to Dismantle the New Jim Code with Ruha Benjamin


How is racism embedded in technological systems? How do we address the root causes of discrimination? How do we, as designers and consumers of AI technology, reclaim our agency and create a world of equity for all? To answer these questions and more, The Radical AI Podcast welcomes Dr. Ruha Benjamin to the show.

Dr. Benjamin is Associate Professor of African American Studies at Princeton University and founder of the Just Data Lab. She is author of People's Science: Bodies and Rights on the Stem Cell Frontier (2013) and Race After Technology: Abolitionist Tools for the New Jim Code (2019), among other publications. Her work investigates the social dimensions of science, medicine, and technology, with a focus on the relationship between innovation and inequity, health and justice, and knowledge and power.

For more information about Ruha Benjamin's work, visit her website at ruhabenjamin.com, or follow her on Twitter @ruha9.

If you enjoy this episode, please make sure to subscribe, submit a rating and review, and connect with us on Twitter at @radicalaipod.

The following transcript was automatically generated by Sonix and may contain errors.

Welcome to Radical A.I., a podcast about radical ideas, radical people and radical stories at the intersection of ethics and artificial intelligence. We are your hosts, Dylan and Jess.

And just as a reminder for all of our episodes, while we love interviewing people who fall far from the norm and interrogating radical ideas, we do not necessarily endorse the views of our guests on this show.

In this episode, we interview Dr. Ruha Benjamin. Dr. Benjamin is Associate Professor of African American Studies at Princeton University and the founder of the Just Data Lab. She is author of People's Science: Bodies and Rights on the Stem Cell Frontier and Race After Technology: Abolitionist Tools for the New Jim Code, among other publications. Dr. Benjamin's work investigates the social dimensions of science, medicine, and technology, with a focus on the relationship between innovation and inequity, health and justice, and knowledge and power.

In this interview, we explored a lot of questions like how is racism embedded in technology? Can robots be racist? How do we, as designers and consumers, reclaim our agency and technology? How can we teach racial literacy to computer scientists? And what is discriminatory design? How do we build a movement to address it?

It was so meaningful for us to be able to explore these questions with Ruha, because Ruha was one of the first people to show support for this project and be willing to say yes to coming on as an interviewee. And this conversation is something that I'm so excited to share with our listeners, because it provoked me in some ways, it challenged me in other ways, and I was able to learn just so, so much.

It's definitely incredibly humbling having people that we look up to so much in this field, people that we see come up in our research, in conversations with colleagues, in classroom settings, and in papers that we're reading, and to actually be able to see them face to face, or at least monitor to monitor over Zoom, in these interviews. It's such an incredible experience. And we are just so excited to share this conversation that we had with Dr. Ruha Benjamin with all of you.

It is an absolute pleasure to welcome Dr. Ruha Benjamin to the show today. How are you doing today?

I'm fantastic. Thanks.

As we begin, I was wondering if you could tell us a little bit about your personal story.

Sure. And I'm happy to go back earlier, but for starters, I'll start in undergrad, where my research interests started bubbling up and intersecting with my personal life. I went to undergrad at Spelman College, a historically black women's school, and I was looking for a senior thesis topic. I ended up doing this double ethnography, looking at obstetrics and black maternal health from the way that conventional medicine and institutions approach it and the way that black midwives in the South have approached it for generations. And since I was doing this kind of comparative analysis, questions about knowledge production, power, and authoritative knowledge were some of the themes that rose to the surface. And I think that's where my questioning of mainstream science and technology started to rise up, as I saw that many of the conventional practices in reproductive medicine and in hospitals actually didn't serve women, broadly speaking, and black women in particular. And that this other thing, what social scientists think of as subjugated knowledge, in terms of midwifery, and black midwifery specifically, had a very different approach to thinking about the body, thinking about childbirth, and then the practice of healing. So that's where my interest in medicine as a social enterprise was bubbling up. And then I went to grad school, and I was looking for a research topic that I could do there, in part because I had two small kids by then and I couldn't travel and do the very fancy overseas research. So I needed to find a research site that would allow me to ask these questions about power, knowledge, hierarchy, et cetera.

And at the time, the state of California was investing billions of dollars into stem cell research, because the federal government wouldn't fund embryonic stem cell research. There was a ballot measure, people went to the polls and voted for this new initiative, and a new state agency dedicated only to stem cell research arose. There was a new constitutional amendment called the Right to Research. So it was clearly a political and scientific enterprise taking shape. And I ended up researching it from the inside, as one of the training fellows in their first cohort of students being trained to do both the science and the ethics around the field. And the way that I even got into that was that an adviser of mine was one of the bioethicists who was guiding the initiative.

And so she encouraged me to think about it as a potential research site. So it was kind of a project of convenience, and that ended up being my first book. And so Race After Technology, my most recent work, was not supposed to be my second book.

It wasn't like the next stage.

I was working on genomics and looking at the way that different countries were institutionalizing genomic science, how categories of race and caste were being incorporated into the field.

But then all of the developments that we now know from the last three years in the data sciences were coming to the fore in public conversation. And I was just interested in how the racial dynamics and questions of equity were being framed, both in the popular discourse and in the way that people within the tech industry were thinking, or not thinking, about things. And so that motivated me to shift gears a little bit and take those same core questions, around power, knowledge, and hierarchy, and apply them to this different scientific field as it develops.

And as you shifted from this research that was more related to stem cells, and in some senses very close to the field of biology, toward more technology-based research: was that a natural shift, or what made you get interested in the field of technology?

Yeah. And in some ways it was natural, because my training was in science and technology studies.

So even as a grad student, I was being taught to think about the relationship between these not as completely separate; in fact, in my area of study we often talk about technoscience, so that the line between science and technology is blurred not only in practice, but in our analytical categories.

And so there was still a lot that I had to learn, because every discipline and every field is unique, right? It's not like you can take the very same approach no matter what field you're engaging. But it didn't feel like moving from science and technology studies to something outside this wider umbrella, like if I had wanted to study housing policy or economics. Still, there was a steep learning curve.

But at the time I made that shift, I was on sabbatical as a fellow at the Institute for Advanced Study, outside of Princeton. And so I was interacting with a lot of people doing not even applied mathematics, but theoretical math and physics and so on.

And so it was kind of the perfect context for me to begin thinking about A.I., when these were the people doing a lot of the theoretical work behind it. And I remember one conversation. There's something there called a bar talk, where you do a ten-minute provocation on any topic you want and then people just talk about it.

And I did mine right at the beginning of this project, and my provocation to get people in the room was "Are robots racist?" That ended up being a chapter title, but this was well before I knew this was going to become a book. And I remember the conversation after I did this provocation, when I was talking to one of the mathematicians.

And, you know, his response to the issues of discrimination and equity that I was raising was: well, we expect the A.I. to develop to such an extent that we can ask it what to do about the ethical problems. Right? We will turn to the technology as the kind of source of guidance.

And in my head I'm thinking: that's how much faith you have in this thing? It's not only going to be the tech fix for all the things out there in the world, it's also going to deliver the ethical frameworks.

And that exchange at this bar talk was like, oh, I've got to write a book. Because there was a profound faith and belief that goes even beyond a kind of techno-solutionism: the fact that you think these things you're creating are going to offer human societies the ethical guidance on what to do. I've got to intervene here.

And so luckily, there are many other people also planning to intervene.

So in the last few years, there's been so much great work that's blossomed around this conversation about the social and political dimensions of technology broadly, and A.I. in particular.

So without giving too much away from your book, and we invite all of our listeners to go out and buy it, it's an incredible read, I'm wondering if we could dive into that question. So is A.I. racist? Is technology racist?

Those are each like hundred-thousand-dollar questions, dissertation-worthy investigations, really. And so we have to start by coming to some agreement about what racism is before we can decide whether technology or A.I. can be racist.

And so I spend a little bit of time in the book, not too much, disentangling what many of us are taught to think racism is from what it actually is in practice. Part of it is to disentangle it from the assumption that for something to be racist, there has to be self-conscious intent to harm, some identifiable desire to be malicious, the kind of racist boogeyman behind the screen. That's a very flat version of racism: individual, interpersonal, self-conscious, intentional. Once we disabuse ourselves of that definition of racism, we begin to see what racism is in practice, in terms of its effects. If we think about technology in relationship to, say, laws, we know that laws have encoded racism, and we can talk about certain laws as racist even though the authors of our policies and laws didn't necessarily want to harm particular populations. By ignoring the effects of those laws, an asocial, apolitical approach to the law can in fact create very harmful, even deadly, consequences that I think we should rightfully think of as racist, and classist, and sexist.

And so we can think about our drug laws as a kind of iconic example: the way that certain drug enforcement and penalties are associated with the kinds of drugs different racial groups use, with stricter, higher penalties on drugs more common among black people than among white people.

And so that law is racist in the way that it has this effect. Similarly, we can begin to apply that same rationale to technologies. The author or designer of a technology doesn't necessarily have to sit down and self-consciously want to harm a particular group; but if, by ignoring the history that's being built in, whether in the training data or the models or the proxies, et cetera, the design of a system has these disparate effects, I think we can rightly call it racist. And at the same time, part of what the book is trying to do is to broaden our vocabulary.

When I first started the project, I was like, everything's racist. That's racist, that's racist. I was on this kick, right. And some of my colleagues were like, calm down, calm down. You might want to flesh that out a little bit. You can't write a whole book calling everything racist. And so I was like, OK, OK, you're right. I know you're right.

And so then I thought, OK, what are the different manifestations of this larger process of discriminatory design, beyond the various buzzwords like algorithmic discrimination? And that pushback on the part of my colleague, Aaron Panofsky, by the way, a UCLA professor, helped me sit down and think: OK, let me think about this on a spectrum of effects, from the more obvious to the more insidious. And so that helped me develop a vocabulary for naming these different manifestations, and those are the structuring concepts for the book. On the more obvious end of things is engineering inequity, where these are technologies designed to stratify, designed to create hierarchies and surveillance, and no one's hiding that fact, really. And then we move along that spectrum to, on the other end, techno-benevolence, where the desire is to use technology to better the world and help people, and we don't expect that to have racist effects.

But in fact, in the post-intentional analysis that I'm offering, even those can potentially reinforce and deepen systems of inequality. And so it was really a process of conversation and dialogue with colleagues and graduate students that helped me develop from my first impulse, which was just to go in guns blazing, everything's racist, to: OK, now we can talk about the New Jim Code.

And for some of our listeners who are maybe not as well versed in the technological realm and might not know the background of these discriminatory systems you were mentioning before: I would love it if you could dive into some examples of how discrimination, and racism especially, can manifest itself in technology, and especially in artificial intelligence systems.

Absolutely. And before I get to the A.I. version, I'll add one example I use in the book that made the rounds and circulated quite a bit, that everyone has some experience with: automated soap dispensers, and automated technologies more broadly. The automated soap dispenser is a window onto this broader process of automation. A few years ago, there was a video that made the rounds of two friends putting their hands under this dispenser, and the soap would only come out for the lighter friend's hand; it wouldn't work for the darker friend's. And if we look at the layers of what would lead to this disparate outcome, it holds some lessons for much more complex systems. How did that particular tool get to the point of being in a consumer context without anyone catching this to begin with? What does that tell us about who was behind the scenes in the research and development phases, such that it's already installed in a hotel before anyone notices? It offers some lessons about the diversity behind the scenes at a lot of companies, which makes it so certain issues never even arise. And we can think about that as very low stakes: oh, you didn't get your soap, no biggie, not a big deal. But then we can think about the much higher stakes, the forms of bias and the issues that never arise because people around the table don't bring them up. And with respect to that, I'm thinking about a study that came out a few months ago, actually after my book came out, that really illustrates this idea of coded inequity, or the New Jim Code. It's about health care algorithms, and an A.I. system that's used throughout our health care system.

It's a kind of digital triaging system that identifies patients who are predicted to get sicker in the near future, so that health care practitioners can intervene earlier and try to prevent them from getting sicker. And the question of how those predictions are made is where the inequity comes in, because the people who designed this didn't set out for it to ignore many black patients who needed intervention, which is actually what happened. They assumed that the cost of spending on patients would be a good predictor of health care need: that the kinds of patients we've spent money on in the past are the ones likely to get sicker. But we know that in our system, we actually don't spend money on many people who need it, and that's a stratified process. We do spend money on people who happen to have insurance and private care and so on. So by building in this assumption that cost was a good predictor of need, it stratified who was getting services, now and into the future. And the only reason we know about the problem with this digital triaging system is that the company allowed these researchers to take a look at it.

Right. Usually so much of this is hidden by proprietary policies. So it's a good thing that this company allowed the researchers to look at it, because now we see that, in fact, having a race-neutral algorithm or A.I. system can be very deadly. Nowhere in that system was it keeping track of patients' race. It wasn't seeing a black patient and then kicking out or ignoring the black patient. It was the assumption that cost was predicting need, plus the fact that on average our system spends less on black patients, for a variety of reasons, including various kinds of systemic racism: from doctors' bias and nurses' bias, to insurance issues, to all kinds of other institutional factors, like the fact that our health insurance is coupled with work, so the kind of work you do impacts your insurance all the way down the line. And so, as a thought experiment: if someone had been around the table when that particular automated system was being developed who knew this history and knew these disparities existed, they could have flagged this earlier on, like with the soap dispenser. They could have said, you know what?

Why are we assuming that cost is a good predictor of need when there are so many people we don't spend money on who need it? Why do we think we live in a meritocracy, or a place where everyone's needs are met?
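To make the cost-as-proxy mechanism concrete, here is a minimal sketch in Python. It is our own illustration with entirely hypothetical numbers, not the actual system from the study (likely Obermeyer et al., Science, 2019): two groups have identical distributions of true need, but one group's care is systematically under-funded, so a "race-neutral" rule that flags the costliest patients both under-selects that group and catches only its very sickest members.

```python
# Illustrative sketch only: hypothetical numbers, not the study's model.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical distributions of true illness burden.
group = rng.integers(0, 2, size=n)             # 0 or 1
need = rng.gamma(shape=2.0, scale=1.0, size=n)

# Observed spending: group 1 receives systematically less care for the
# same level of need (insurance access, provider bias, and so on).
access = np.where(group == 1, 0.6, 1.0)
cost = need * access + rng.normal(0.0, 0.1, size=n)

# A "race-neutral" triage rule: flag the top 10% by cost for early
# intervention. The rule never sees `group`.
flagged = cost >= np.quantile(cost, 0.90)

for g in (0, 1):
    in_group = group == g
    share = flagged[in_group].mean()
    sickness = need[flagged & in_group].mean()
    print(f"group {g}: {share:.1%} flagged, "
          f"mean true need among flagged = {sickness:.2f}")

# Group 1 is flagged far less often, and its flagged members are sicker:
# the bias enters through the label (cost), not through a race variable.
```

This mirrors the pattern the researchers reported: at the same risk score, patients from the under-served group were considerably sicker, because the score was predicting spending rather than sickness.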

And so it would be interesting to have that kind of conversation much earlier in the product design process, where you could sort of see the issues coming.

In preparation for this interview, and also just for my own betterment, I've been rereading The New Jim Crow by Michelle Alexander, I have the book here, and I was going back over how she ends it, and it's with this quote from James Baldwin.

It reminded me a lot of what you were just talking about. This is from The Fire Next Time, which I believe Baldwin wrote to his nephew. And he begins with this. He says: "This is the crime of which I accuse my country and my countrymen, and for which neither I nor time nor history will ever forgive them, that they have destroyed and are destroying hundreds of thousands of lives and do not know it and do not want to know it. It is their innocence which constitutes the crime."

Which, first of all... my God.

Yeah, Baldwin. Baldwin mic-drops every time. But that really encapsulates so much of the dynamics, going back to the first question about whether robots can be racist, if we understand that so much of racism is perpetuated through a kind of distorted innocence, or an appeal to innocence.

The idea that "I didn't mean for that to happen," or "you're taking that the wrong way." Because we're supposed to believe in the good-heartedness not only of our countrymen broadly, as Baldwin describes, but, until the last few years, a kind of techno-utopianism permeated our culture in which we were supposed to believe that anyone who works in these fields is simply trying to make the world a better place.

One of my favorite clips, which I describe in the book, is from the show Silicon Valley: this scene at TechCrunch with all of these programmers going up and pitching their ideas, one little incremental new design after another. But every single one of them ends their pitch with the same mantra: "And this is going to make the world a better place." "And this is going to make the world a better place."

And in part, that's really what Baldwin is taking to task. I don't think he's encouraging us to develop a kind of cynicism, but he's saying that so often the worst practices are perpetuated because we believe so much in our own innocence, and we don't take stock of how, despite our intentions, we can have such harmful effects and contribute to harmful processes.

You know, I was just speaking to an international conference of machine learning practitioners earlier this week. And one of the questions I had for practitioners in the field more broadly is whether, ever in the course of your training, you learn about the history of technology, or the history specifically of automated technologies. And I pointed to the example of IBM and the Holocaust, and said: this is a paradigmatic example where people were each doing their little bit of the code, their little bit of the system. There was a bureaucratization of evil: the evil practices of the Holocaust were allowed to perpetuate precisely because people were so focused on their own narrow contribution to this larger thing, and encouraged not to look up, not to pay attention. Just clock in, clock out. And so in part, it's thinking about this history of deadly data production in earlier eras. My hope is that if we taught more people in STEM about it, it wouldn't discourage you from pursuing the field, but it would give you a much more sober understanding that the idea that what you're going to do is inevitably going to better the world is perhaps a recipe for danger.

And so thinking about the big picture, thinking about the history and sociology of our society and how it intersects with tech development, should be essential, should be integrated into our pedagogy, rather than something that's just kind of added on at the end if you're interested.

Yeah. And that brings up a really good point about this idea of responsibility. You mentioned the responsibility of computer scientists and designers of algorithms not to pretend, or actually be, ignorant through this innocence, because they're not being trained otherwise. But there's also this responsibility, which I've heard you speak of before in your research, that falls on the consumers and users of these algorithms as well. So I would love to hear you speak a little bit about the responsibility on both ends of the spectrum here, and the role each plays in the way these algorithms perpetuate systemic inequality.

Absolutely.

You know, my sense is that there's enough responsibility to go around. The problems that we face are produced at so many different levels, through so many different processes, and each one of those is an opportunity to rethink how we've been approaching things, because it can't simply be a top-down solution through law and policy, however necessary those are for thinking about the kinds of accountability that should be there. On one hand, I don't believe we can just leave it to the goodwill of tech practitioners to regulate themselves and hope that they do things in the best interests of society, for a lot of reasons. So we do need a kind of external accountability structure through laws and policies. But also, part of what I was wrestling with in the book is where the demand for certain kinds of technologies takes shape, in our culture, in our collective imagination. It's not simply a supply issue. Many of the technologies that are then enrolled to stratify, to harm, to create short-term fixes also provide convenience for us, provide a sense of the tailor-made, like you feel seen by technology. When we look at marketing, targeted advertising, medicine and so on, there's a sense where, oh, wow, I'm special, because look at this shoe ad.

They know exactly how I'm feeling. Right now I'm getting all these ads during quarantine, it's so funny, that are like business-on-top, party-on-the-bottom ads. These outfits are being targeted at people they know are sitting in front of screens, so it's like: this outfit will allow you to look good on top and relax on the bottom. And I feel seen. I'm like, you get me, you get me, algorithm.

And so I want to wrestle with that: the desire that's also being filled by many of these technologies, so that we can take stock of our own demand and desire for certain kinds of conveniences. And at the same time, to do that effectively, I think we have to move beyond thinking of ourselves simply as users and consumers. That's a very limited understanding of our relationship to technology, but it's the dominant view: we are being groomed to be good users and good consumers.

But so long as we limit ourselves to that particular role, we give up a lot of power, in part because, when we think about what's been effective over the last three years in shifting the conversation and the expectations around the tech industry, it's been collective organizing, whether it's tech worker organizing or community organizing. And to do that, we have to think of ourselves as more than individual users who just want a better experience.

"User experience," as the term goes. And so, yes to individual-level responsibility, and to thinking about how our imagination gets captured by a particular vision of the good life that's sold to us through our screens. And at the same time, I want us to think of ourselves in this broader collective fashion, as people who share a society and a social contract. I think that will lead to much more substantial shifts in how we set expectations for tech designers and developers.

I've heard even folks that agree with you have this almost passivity to it, and I've fallen into it myself: oh, well, this is just the system, this is just the Silicon Valley culture that we're in. And I'm almost hearing your rallying cry as a reclaiming of that agency in each of us. And I would say that at least part of our audience, maybe even a good chunk of it, are folks like Jess and I, well-meaning white folks who are on board and want to do something, and we don't exactly know what to do. Even when I was reading your book, Race After Technology, I was reflecting on why I didn't want to see some of that. You know, as a white man: like, can't this just be better? And obviously it's not, and obviously there are dire consequences to that line of thought. And I'm wondering, for folks like me, and for our audience who want to do something and just don't know what to do.

Like, how do we reassert that agency?

Absolutely. I mean, that is the question I hope my book leads up to, where people are like, OK, I get it. OK, 200 pages later: we get it, we get it. So now what? And I have a caution and then an appeal. The caution is, in part because the entrepreneurial spirit is so fetishized, that the desire, or the endpoint, may be: OK, let me come up with some new thing that's going to help address this.

And it's not that new things aren't good, but there's so much already happening that well-meaning, good-hearted people who want to get in the fight can connect with, and lend your energy, your technical expertise, your training to, rather than starting from scratch. In almost every locale, every region, not just of the country but throughout the world, there are people working not just on community organizing but on tech justice in particular. Everywhere I travel, I try to connect with the local tech justice organizations and amplify their work, because even people in academia who've been studying this for years often don't know what's in their own backyard in terms of people thinking through the issues of surveillance, or of automated decision systems that are impacting hiring or prisons or whatever. So my appeal is, rather than trying to come up with some new fancy thing that's going to solve things or better the world, connect up with local and national organizations that are pushing at the policy level and also pushing from within industry to change a lot of the dynamics. One of the organizations that I stay connected to on the national level, that is thinking through a lot at the policy level, is Data for Black Lives. And then in different cities, whether it's New York or Detroit or L.A. In L.A., one of the organizations I connect up with is Stop LAPD Spying. In fact, they had a major win last week: they had been pushing for years to stop the predictive policing program of the LAPD, and it finally came to an end. Although they know from experience that, because racism is mercurial and innovative, it's likely to manifest in a different form unless they and their allies and co-conspirators are vigilant. But that's a serious win, and it was driven not simply by academic critique but by local organizers who understand the evolution of surveillance technologies and racial profiling, and who were really keen on having a community-level approach to addressing this issue in L.A.

Similarly in Detroit, New York, Baltimore, there are so many good organizations. And so I would encourage someone who feels energized and on board with a lot of the issues that I raise in Race After Technology: on my personal website, ruhabenjamin.com, if you go to the resources page, I have a long list of initiatives and organizations to connect up with, and there are many more besides what I list.

But that just gives you a sense of where to channel that energy, in terms of contributing to what I see as a movement.

It's great seeing that there are a lot of really passionate people who want to help solve these issues, though unfortunately, at least in my experience, they seem to come mostly from the social sciences. I'm asking this question as someone who comes from a computer science background: I'm curious if you have any ideas for how we can get this passion for change ignited in the computer science discipline, and honestly in the classroom. Because in the end, if a lot of these algorithms are going to be proprietary and hidden by the large tech companies, how do we get those who are actually creating them to understand and be equipped with the tools that you're describing, to try to stop these problems from happening?

I do think pedagogy, I think education, is like ground zero for trying to seed a different vision of the role of technology in our lives.

And for me, when I speak to computer science audiences, and students in particular, I point to an example in medical schools, where the students decided that they weren't being trained to be effective health care professionals, because their schools were not equipping them with a keen understanding of medical racism, its history, structural competency.

All of these things that they had determined were essential to being effective health care professionals, they were not getting in their medical schools, and I'm talking about even the very top medical schools. So the students, as a collective across institutions, formed an organization called White Coats for Black Lives. And they have worked, and continue to work, on transforming the curriculum. I know in one school where I was in touch with students, they pushed for a mandatory first-year course that incorporated a much more robust treatment of racism and health. Not race and health, because race is everywhere in health and medical school training, but racism and health.

And so I could talk a little bit about that distinction, but they wanted to focus on racism and health, and they got it. They looked at the diversity of the faculty, and they pushed for that. And one of the things I love is that they issue report cards on the medical schools, on how well they're doing at meeting a whole long list of factors. So when we pull back from White Coats for Black Lives, I think of it as a model for students in other fields, because these students are not thinking of themselves simply as consumers of education, as people who pay tuition and then just take whatever is given to them.

They are actively molding the education that they understand they need for the world they're moving into, even if the people who are training them don't quite get why X, Y, and Z are needed. The students know. And so we could imagine a similar kind of student movement arising in the context of computer science, one that's institution-wide. Even if at your institution you're only one of a handful of students who is privy to this and knows it's something you should be thinking about, once you look across institutions the group is much bigger, and then you can begin to build interest and build your critical mass. And once you begin to do that, you see there are actually quite a few resources, many kinds of initiatives that have to do with transforming not just education but even the workforce you're going into. One of the tools I would recommend listeners look up is called Advancing Racial Literacy in Tech, developed by the Data & Society Research Institute. It's a short workbook that you can download for free, and its aims are threefold: to develop an intellectual understanding of how systemic racism shapes technologies, the ones we already have and the ones we're developing; an emotional intelligence for how to deal with racism in workplaces and in higher education; and lastly, and this goes back to the last question, a commitment to lend your skills and your energy to assist communities of color in addressing various forms of tech injustice. So this is just one of many resources that students and early-career computer scientists can use as a building block to think about what needs to transform, what needs to change, within your field.

But I think step one is to realize that you have agency, that you have a say, that you have power, that you don't have to wait for someone else to decide it's important.

And it may be that you want to invite in people from White Coats for Black Lives to talk to you about how they did it, about their process, and learn from them, rather than feeling like you have to start from scratch.

When Jess and I started this project, we had expected to be the only ones asking some of these questions. And what we found, within two hours of launching the podcast, is that we are so far from being the only ones, and that there's absolutely a great legacy of this. And we continue to be shocked that we're even talking to you right now, because you're someone that we've looked up to for a while, and it's like, my God, there are so many people.

And one of the first steps in building a movement, which I've learned from my own community organizing days, is just to realize that we're not alone.

Yeah, just put up the bat signal, and people come out of the woodwork.

Right. Absolutely. And people, in our experience, are hungry for something. We've put this word "radical" around that something, and for us it has something to do with this movement building. But we would love to hear from you about what that word might mean in this work.

Yeah. I love the word radical, because although it gets a bad rap, to me, at the most fundamental level, it's about drawing our attention to the root issues, the root causes, what often lies beneath everything else that we're talking around. What are the fundamental issues that we should return to? And I like thinking about my work in terms of a radical framing, because it's about questioning what we most take for granted. Part of being social beings, of being socialized, is that you have to buy in to an agreement that, OK, we're not going to question X, Y, and Z, so we can get along, so we can interact with each other. A radical conversation, or a radical approach to something, says: we've been doing it like this for a while; maybe we should question it. Maybe it's not working for everyone. And it's disruptive, in the sense that people don't like to question their long-held beliefs and norms and practices. But I think on a routine basis it leads to a much healthier, more just world when we return to those starting principles and ask: OK, are we still living up to our ideals? And so, as a sociologist who studies technology, part of it is to really push back against a technical fix for these problems, one that fails to address the root causes.

It's not that technology inevitably remains at the surface, at the superficial, at the symptoms; it's not inevitable. But too often we get so enamored with the technical fix that we stop there, and we don't dig deeper into what is actually giving rise to the problem we're trying to fix. So for me, radical A.I. is an A.I. that decenters itself as part of any movement, as part of any solution. If we look to the A.I. to guide us, the way that mathematician did, if we look to the A.I. as our beacon, and what we're trying to do is make it better rather than put it in its place within a larger social transformation, then it's not radical. Because what we need to think about is how to foster more just and loving human relationships. If the technology is not helping to foster that, then it has to go.

So what makes my work radical, I would suggest, is that I am not attached. I am not convinced that we always, even necessarily, need technology in the mix for everything. And we can go back to that first project I talked about, as an undergrad, on reproductive medicine. The way those black midwives approach childbirth is what we would consider pretty low-tech, and yet the outcomes and the well-being of the women who work with black midwives are so much better than when you go to a hospital that has all of this technological stuff at hand.

But too often the needs of the health care practitioners and the efficiencies of the technology trump the well-being of the women. So for me, as I said, radical A.I. is a decentered A.I., and a radical approach to technology keeps on the table the idea that maybe the technology is not necessary in many cases. And unless people are willing to hold on to that, then I still think there's an element of techno-utopianism, or a kind of techno-philia, that can be dangerous.

Yeah. It's definitely an important concept that both Dylan and I, and probably everyone in the ethics space, have come across at some point: that sometimes the technology that promotes the most societal welfare is, in fact, no technology at all. Which can be really hard for technologists and computer scientists to admit sometimes, because, like you were saying, we want to jump on the technological solution. It's the entrepreneurship mindset; we were seeking out something to create to solve these problems. And so I'm wondering if you have a piece of advice for people, whether computer scientists or just anyone in this space, who are looking for ways they can help, whether through technology as a lens or without technology at all. What are some good starting points?

You know, there are a few things that come to mind.

One, drawing out the point from earlier, is thinking about yourself as connected to communities. So much lip service is paid to technology helping marginalized people or fixing X, Y, and Z problem, and too often the solutions are being derived in isolation from the communities they're supposed to help. So think about how you, as a student or professional, can get connected with the communities you want to be in solidarity with.

Right. And then also asking, on an ongoing basis: how is what I'm doing affecting the most marginalized, the people who are being subjugated in any locale? We're in the middle of a pandemic, and what's on my mind are these different lessons we're learning about technology and technological fixes in the context of the pandemic.

And one of the sobering cases is that of Singapore, where early on they were being lauded for marshaling technology and their vast infrastructure to contain the virus. Between January and March, they had about 500 cases. And in the last month, the cases have exploded; the last number I saw was over 12,000, with just over 1,000 in one day, which is more than in all of those first two months. And so the question is why. They were using contact tracing; they were using all of these ways of marshaling technology.

And the reason why is that they ignored the large migrant community in Singapore, who were being concentrated in particular parts of the island in very overcrowded dorms, sometimes 12 to 20 people to a room, using the same bathroom, the same soap. And now the virus has run through those dorms, and it's affecting the whole country. Paying attention to them at the very beginning would have meant understanding not just their living conditions, but the fact that many of them, even when they were sick, had to go to work, because they couldn't take time off. All of these policies, all of these structures that existed well before the pandemic, and that made it predictable that this would happen, were ignored in favor of a much more techno-focused approach that concentrated on the already relatively privileged in the country. And so, again, it's thinking about who we are ignoring, who is not even on our radar, who may be affected by our decisions.

What that looks like at the individual level is asking just that kind of question: look at what you're doing from the underside, from those who may be harmed or ignored. Not because you intend for them to be harmed or ignored, but because, precisely since they are not part of the decision-making structure and their concerns haven't been addressed before, they are likely to be adversely affected by whatever you do, however well-meaning, in the present. The people who designed that initial response in Singapore, I'm sure they had all the good in their hearts. They were looking out, trying to address the public health, et cetera. But the plight of those migrants was amputated from their imagination.

It was not on their radar, or it was not a priority. And that led to the current crisis there.

And so, again, I think that lesson can be applied to so many other areas, where it's not based on anyone necessarily being malicious, though the bad actors are plenty; it's about what you're ignoring, what you haven't been trained to think about.

You know, as we move toward closing, I'm caught on this word that you use, which I wish we used more often in these spaces: love. I was recently talking to my mother, who was, and still is, in the technology industry. A single mother in the 90s during the tech boom, she constantly had to find ways to stay motivated and stay sane through all of it. And when I asked her about this recently, she talked about her grounding love for me as her child. And I know at the beginning of this you talked about your own experiences as a mother in all this. So I'm wondering if you could take us home in this interview by talking about the role love plays.

Absolutely. And I think that it is, one, an essential ingredient of anything that purports or claims to be radical, in part because it runs against our training as Enlightenment intellectuals: the idea that love is a whole different sphere of life.

That it has nothing to do with the work, with the life of the mind. And that grows out of a very specific trajectory and genealogy that does not serve the vast majority of human beings: the fact that we amputate how we feel about things from what we think about things. And so for me, love is an essential ingredient in everything that I do, especially teaching, but more and more writing, thinking about what motivates me. And it's not a kind of saccharine, Valentine's Day love. It's a love often intertwined with anger, precisely because I love people, and I love especially those who have been so harmed by systems of oppression, and I'm so angry about the fact that they're being mistreated.

Anger and love go hand in hand. And I think it makes me a better thinker, and it makes me a better teacher. It actually clarifies for me why I'm doing certain things, and it means I'm not going to hide my passion in order to perform a kind of disinterested, professorial "take me seriously." If you don't take me seriously because I care this much, that's on you, not me.

Right. Thank you so much for joining us today. And also, while we still have you on the line: thank you for believing in the work that we're doing, because that's what keeps us going. So thank you so much for being here today.

My pleasure. Thank you for inviting me.

We want to thank Dr. Ruha Benjamin again for joining us today for this wonderful conversation.

And as always, now is the time to give our first reactions to the interview. So one of my immediate reactions and emotional responses to this conversation was actually a little bit of fear, and that was coming from this concept that Ruha mentioned several times: that sometimes people look to technology, and artificial intelligence especially, to try to solve issues that exist in society and have existed for a long time. And while sometimes this is really great and has worked well in the past, oftentimes it can be really harmful, especially when it comes to A.I. Ruha mentioned this a little bit when she was talking about her provocation, asking the question "Are robots racist?", and the fact that someone even came up to her afterward and told her that eventually we might be turning to technology, to A.I., for guidance on social issues. And that thought really scares me. What about you, Dylan? What were some of your immediate reactions?

Yeah, I echo that feeling of a little bit of fear and uncertainty around the question of when we, as designers and consumers of technology, center our solutions to the problems we have, including racism and sexism, in technological novelty, and when we take a step back and look at the social context and the relationships that we're building as the real place where we can solve some of those issues. Because for me, the other feeling that I'm really leaving this conversation with is inspiration, especially when Ruha began to talk about this act of forming a movement together, and Ruha showing that everywhere she goes, she's connecting with the people in her midst, the people doing this work. She's speaking with folks who may not believe the exact same things that she does and engaging them in conversation. And I really think she's one of those people out there right now who's leading the way in how we change some of our preconceived notions about how to solve these deeply ingrained issues in our society, while also taking into account some of the technological solutions as well.

Yes, exactly. And one of the deeply ingrained issues that Ruha is engaging with so much is not just racism in technology, but taking a step back and defining what racism is in the first place. And this was something that was really important for me, and hopefully for listeners as well: to really think twice about some of the words we're using in everyday conversation, and to recognize how we actually define them, because the definition really matters when we assess their impact in terms of technology. When we talk about an issue like racism, if we think about it as something that has to be intentional, like Ruha was saying, then we might not even consider the times when it's subconsciously enacted in systems. And this happens so much in artificial intelligence. This is something I've encountered in my research before: if we just try to ignore these issues, we're making a value judgment by doing that itself. For example, in a machine learning system, if we choose to ignore any data that has to do with race, that race-blind design decision is a value decision in itself, and it often ends up resulting in more harm for some racial groups than others. So we have to really assess the way we define the problems we encounter; that will help us uncover possible solutions, or at least steps toward solutions, for these issues as they are embedded in our technologies.
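To make that last point concrete, here is a minimal sketch in Python, with invented data and feature names (a hypothetical illustration, not any system discussed in the episode), showing why dropping the race column does not make a model race-neutral: a correlated proxy feature lets a model trained on historically biased labels reproduce the disparity anyway.

```python
# Illustrative sketch only: invented data and feature names.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

race = rng.integers(0, 2, size=n)      # protected attribute, later dropped
merit = rng.normal(0.0, 1.0, size=n)   # what we would like to reward

# A proxy feature (think neighborhood or ZIP code) strongly correlated
# with race because of historical segregation.
neighborhood = race + rng.normal(0.0, 0.3, size=n)

# A historical outcome label that encodes past discrimination against
# group 1, independent of merit.
label = merit - 0.8 * race

# "Race-blind" least-squares fit: the model sees merit and the proxy,
# but never the race column.
X = np.column_stack([merit, neighborhood, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, label, rcond=None)
pred = X @ coef

print(f"race/proxy correlation: {np.corrcoef(race, neighborhood)[0, 1]:.2f}")
print(f"mean prediction, group 0: {pred[race == 0].mean():+.2f}")
print(f"mean prediction, group 1: {pred[race == 1].mean():+.2f}")

# The gap between groups survives: the proxy lets the model reconstruct
# race, so deleting the column was a value-laden design choice, not a
# safeguard.
```

Note, too, that auditing a model for this kind of gap requires keeping the protected attribute available for evaluation, even when it is excluded from training.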

And I want to circle back to this conversation about love that we had at the end of this interview, which I think was just so powerful: how, in order to make lasting change, we really need to commit, we really need to love this new world into being.

So one of the questions I hear Ruha inviting us into is: what are we willing to commit to, and how deeply are we willing to love?

For more information on today's show, please visit the episode page at radicalai.org. And if you enjoyed this episode, we invite you to subscribe, rate, and review the show on iTunes or on your favorite podcatcher. Join our conversation on Twitter at @radicalaipod.

And as always, stay radical.
