Resistance Against the Tech to Prison Pipeline with the Coalition for Critical Technology



What is the tech to prison pipeline? How can we build infrastructures of resistance to it? What role does academia play in perpetuating carceral technology?

To answer these questions we welcome to the show Sonja Solomun and Audrey Beard, two representatives from the Coalition for Critical Technology. Sonja Solomun works on the politics of media and technology, including the history of digital platforms, polarization, and on fair and accountable governance of technology. She is currently the Research Director of the Centre for Media, Technology and Democracy at McGill’s Max Bell School of Public Policy and is finishing her PhD at the Department of Communication Studies at McGill University. Audrey Beard is a critical AI researcher who explores the politics of artificial intelligence systems and who earned their Master's in Computer Science at Rensselaer Polytechnic Institute. Audrey and Sonja co-founded the Coalition for Critical Technology, along with NM Amadeo, Chelsea Barabas, Theo Dryer, and Beth Semel. The mission of the Coalition for Critical Technology is to work towards justice by resisting technologies that exacerbate inequality, reinforce racism, and support the carceral state.

Follow Sonja Solomun on Twitter @SonjaSolomun

Follow Audrey Beard on Twitter @ethicsoftech

Follow the Coalition for Critical Technology on Twitter @forcriticaltech

If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on Twitter at @radicalaipod.



Transcript

CCT_mixdown2.mp3 was automatically transcribed by Sonix. This transcript may contain errors.

Welcome to Radical A.I., a podcast about radical ideas, radical people and radical stories at the intersection of ethics and artificial intelligence.

We are your hosts, Dylan and Jess. In this episode, we interview Audrey Beard and Sonja Solomun, two representatives from the Coalition for Critical Technology.

Audrey Beard, whose pronouns are they and she, is a critical AI researcher who explores the politics of artificial intelligence systems, from the perceptron to ResNet and from the researcher to society. She's especially concerned about conceptions of ethics in the development of AI and associated technical systems, as well as in classrooms, boardrooms, conference venues and developers' offices. This year they earned their master's in computer science at Rensselaer Polytechnic Institute. Sonja Solomun, whose pronouns are she and her, works on the politics of media and technology, including the history of digital platforms, polarization, and fair and accountable governance of technology. She is currently the research director of the Centre for Media, Technology and Democracy at McGill's Max Bell School of Public Policy. Sonja is finishing her PhD at the Department of Communication Studies at McGill University.

Audrey and Sonja co-founded the Coalition for Critical Technology, along with NM Amadeo, Chelsea Barabas, Theo Dryer and Beth Semel. The mission of the Coalition for Critical Technology is to work towards justice by resisting technologies that exacerbate inequality, reinforce racism and support the carceral state. A few of the topics that we cover in this interview include: What is the Coalition for Critical Technology? What is the tech to prison pipeline? What should you know about algorithms that are being used in law enforcement contexts? How has academia historically facilitated systemic oppression through technology?

Now, before we share the interview, we wanted to set the scene and say a bit about why the case study we cover here with Audrey and Sonja is so important. On May 5th, 2020, a group of researchers out of Harrisburg University published a press release about a paper they wrote titled "A Deep Neural Network Model to Predict Criminality Using Image Processing." In their press release, the authors reported that the research was going to appear in the Springer Nature research book series titled Transactions on Computational Science and Computational Intelligence. Now fast forward to June 23rd. Dylan and I logged into our Twitter account and saw hundreds of people in the ethics community liking, retweeting and sharing a tweet from a group called the Coalition for Critical Technology. The tweet said, quote, Sign our letter to urge all publishers to refrain from feeding the hashtag tech to prison pipeline with Physiognomy 2.0, end quote. The letter mentioned in the tweet linked to a Medium article addressed to Springer, asking them to rescind the paper from their upcoming publication and to stop publishing papers that contribute to the tech to prison pipeline. We'll unpack what that pipeline is during this interview.

And so after Jess and I looked up, again, what physiognomy was, we decided to support the cause and signed the letter. And then we also decided to get in touch with the authors of the letter to find out more about how we might be able to support their work to end the tech to prison pipeline, and see if they would be willing to come on the show. We are so excited to share this interview with Audrey and Sonja, two representatives from the Coalition for Critical Technology, to tell us more about the mission of the coalition, the motivation behind its creation and why all of this matters. We are here today with Audrey Beard and Sonja Solomun, who are founding members of the Coalition for Critical Technology. How are you both doing today? Doing great, thanks. How are you? Doing well. It's a pleasure to have you on the show today.

I was wondering, before we get into your own personal stories and your journeys to being founding members of the coalition, if you could talk a bit about what the Coalition for Critical Technology is for folks who may not be aware of you.

Yeah, we actually joke about this all the time, that this coalition is probably the best thing to ever come out of a rage tweet.

Theo Dryer posted about this now infamous paper, which we'll try to refrain from mentioning if possible, and called for an open letter in May to push back against some of the claims made in the paper, namely that AI can predict criminality solely from pictures of faces, and very specifically without racial bias. And we all just started writing in a big Google doc, and a bunch of other scholars and activists joined in, and it kind of coalesced into this amorphous affinity group that we call the Coalition for Critical Technology.

So now that we know what the coalition itself is, Audrey and Sonja, we would love to know how both of you happened upon this group, or created this group, and what caused both of you to feel motivated, or inspired I guess, to join together. And, Sonja, why don't we start off with you?

Yeah, absolutely. So as Audrey mentioned, it was just this kind of serendipitous moment on Twitter that evolved. But I think we all saw the paper and reacted to it in a similar way, which was that we were sort of tired from our own work in these respective spaces. We were tired of seeing this come up again and again, and these claims being circulated and justified despite, you know, repeated and longstanding work showing how harmful these claims are and how they perpetuate injustice and cause real world harm. And I think that was really kind of the motivation to just start the work, and then it very quickly and sort of organically evolved from there into what it is.

I came on, again, like all of this, through Twitter. For the past year or so, I've been grappling with what it is that my field is doing for the world and to the world, and how I fit into this. And I've been developing my own ideas about what we as computer scientists can do and what we as computer scientists have been doing.

I mean, this really popped up at a really fortuitous time. I was finishing up my master's degree and I was full of rage, just as everybody else was in May.

And I had the opportunity to work with some very brilliant people. And I was, I think, able to contribute in a meaningful way, which was something that I was really searching for in that moment: to work on something that felt like more than what is typically considered computer science research, I guess.

So let's get to specifics now. Dylan and I first heard about both of you and the coalition from this Medium post, Abolish the Tech to Prison Pipeline. So could one of you begin by just kind of explaining what that letter was, and maybe give a brief reminder of what the original article was, and then we can dive into how that helped form the coalition?

Sure, yes. So the letter, the Medium post you're referring to, is this open letter that we wrote along with a number of other activists and tech workers and scholars, along with the core coalition members that you named earlier. And the letter was really an attempt to kind of intervene in this circular incentive structure that really generates and maintains carceral technology and its logics. As you suggested, to briefly pin back to the original paper, I think it's important to just talk about the more broad structural problems of claims like these, instead of the individual metrics or details of the paper or its authors. You know, the paper claimed to predict criminality solely from facial images, with 80 percent accuracy and no racial bias. And so publishing really serves a key and crucial incentive here, which consists of, you know, naturalizing what has repeatedly been debunked and is decidedly racist phrenology, about predicting criminality from facial features and things like this. And I think this kind of naturalizing work itself reinforces the demand for tools or technologies that can then, you know, accurately or efficiently, and I'm doing air quotes right now on a podcast, efficiently or accurately predict criminality. Right. So it's kind of an insidious cycle. And I think you can intervene in different parts of the wheel. Right. You know, the development of carceral technology, the effect of certain surveillance technologies on vulnerable populations, the political economy of carceral technology. We just happened to intervene in the publishing incentives, because we see it as a core obligation of rigorous academic evaluation and academic work.

Totally. And if I can elaborate on this a little bit and drill down into how this cycle plays out in machine learning, or ML: we're seeing recently this elevation of phrenology, which, as Sonja said, is a long debunked and racist pseudoscience. We're seeing it get elevated to the level of what we call ground truth, which is ML and data science practitioners' word for the knowledge that we base ML models off of. This practice of elevation naturalizes these neo-phrenological claims, reinforces carceral logics, and ultimately entrenches racism more deeply into not only our field, but society in general.

Can we talk a little bit more about what we mean by phrenology here? Because, coming from a sociology background, I think of something very specific, which is the measurement of skull shape in order to continue to further marginalize, especially in the Enlightenment period.

And I'm curious about how that plays out in this technological moment. So if you could just unpack that tech to prison pipeline a little bit more specifically, that would be, I think, helpful.

So we're sort of extending this term of phrenology into the space of AI and, more specifically, computer vision, which is where we're seeing it more and more. In this case, we're seeing a sort of bio-essentialism of race: taking this socially constructed notion of race, and of criminality, and connecting them very explicitly together through the fields of computer vision, data science, machine learning. And I would also extend this notion of neo-phrenology to other sorts of biometric data. You know, y'all might have seen on Twitter a few days ago, there was a sort of proposal that was made famous coming out of Indiana that seeks to quantify the chances of recidivism based on things like heart rate and cortisol levels and these other biological factors, which really serve more than anything to make criminality a sort of biological trait. Which, of course, from a scientific perspective, from a machine learning perspective and from a sociological perspective, is really flawed and really problematic, because the notion of criminality is socially constructed. It's something that doesn't exist naturally. It is contingent on societal norms and it's contingent on laws. And all of these, of course, are mediated through racism and mediated through imperialism and colonialism and xenophobia. And we're seeing this really get enshrined and lionized as sort of objective in the field of machine learning and data science.

I think just to expand on that: the category of criminality is itself racially biased, as Audrey just explained. So research of this nature, and these kinds of accompanying claims to accuracy, really rest on default assumptions about data, regarding criminal arrest or conviction records serving as kind of reliable or neutral indicators of underlying criminal activity. And yet several scholars have demonstrated that these records are far from neutral. Right. They reflect historical court and arrest policies and practices of the criminal justice system: who police choose to arrest, how judges choose to rule, which people are granted longer or more lenient sentences. Right. So that's why, in the letter, we really tried to intervene in these ground truth claims, rather than asking, is this a case of bad data, like dirty data? Is this a case of bad actors, or is this intentional malice on the part of the authors? And we maintain that it's not; it's a deeply embedded and structural problem that moves beyond individual bad actors or individual specifics, and we try to look at the kind of longer problem here.
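To make the ground truth point above concrete, here is a minimal, hypothetical sketch (our own illustration, not code from the paper or the letter) of why a model trained and evaluated on arrest records can look "accurate" while only learning policing patterns. It assumes NumPy and scikit-learn are available, and every variable and number in it is invented for the toy example.

```python
# Toy simulation (hypothetical): arrest records treated as "ground truth" labels.
# Underlying behaviour is identical across two groups; only the policing
# intensity differs, so the model's risk scores track policing, not behaviour,
# while accuracy against its own labels still looks respectable.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)            # 0 = lightly policed, 1 = heavily policed
behaviour = rng.random(n) < 0.10         # same 10% base rate for everyone

# Arrest labels depend on how heavily each group is policed.
p_recorded = np.where(group == 1, 0.9, 0.2)
arrested = behaviour & (rng.random(n) < p_recorded)

# Features that leak group membership (a stand-in for images that
# correlate with race), plus unrelated noise.
X = np.column_stack([group + rng.normal(0, 0.3, n), rng.normal(0, 1, n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, arrested, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

risk = clf.predict_proba(X_te)[:, 1]
heavy = X_te[:, 0] > 0.5
print("accuracy vs. arrest labels:", round(clf.score(X_te, y_te), 3))
print("mean predicted risk, heavily policed group:", round(risk[heavy].mean(), 3))
print("mean predicted risk, lightly policed group:", round(risk[~heavy].mean(), 3))
# Both groups behave identically, yet the model assigns far higher risk to the
# heavily policed group, and the headline accuracy number never reveals it.
```

In this toy world the "accuracy" is measured against the same biased arrest labels the model was trained on, which is exactly why, as the guests argue, a high reported accuracy says nothing about underlying behaviour.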

Some of the discourse that we've seen following this letter is sort of like, oh, how could this sort of thing happen? Like, clearly this is wrong, the institution of computer science or whatever has failed us. And I think that's something that we really tried to head off in the letter that maybe didn't quite come through. People are sort of taking this notion of, like, garbage in, garbage out and expanding it a little bit. But we can't really solve these sorts of problems with more data or better data or individualized fairness metrics, because ultimately it's not the data or the algorithm that is rotten. It's the entire criminal justice system, which is founded on these racist ideas of criminality and on the valuation of property over the valuation of people and human lives, specifically property owned by white people.

Getting back to this letter specifically that both of you contributed to and wrote to Springer, just to kind of briefly summarize for the listeners in case anybody is having a hard time following along here. So we have this article that Springer plans to publish, which is about an AI model that can predict criminality from faces. And that is a way of basically saying that criminality is something that can be a biological trait within humans, which is a hugely racist claim. And so as the coalition, you all come together and you write this letter to Springer, which now has 2,435 signatures backing it. So what were the goals of this letter? What were you all hoping to get out of writing this letter and sending it to Springer?

Yeah, thanks. Good question. I will start by saying, I can hear people criticizing on Twitter already: Springer had not officially published it. It was a press release on a specific university's website saying that it would be published. And so I just want to make that clear for the people who will defend Springer until the day that they die. But I'd say that the goal of this letter was an attempt to start to dismantle the power structures and resist the complicity within them by undertaking a very specific, directed intervention in this cycle that Sonja and I both mentioned. And you can read a little bit more about it on our website, for critical tech, for anybody who wants to check it out.

We made three main requests of Springer, and none of them have been fully met yet. So we asked that Springer refrain from publishing this piece, which they quickly but vaguely agreed to, just on Twitter, and that they explain publicly why, which has not happened. We also asked that Springer publicly acknowledge their role in kind of propping up and circulating these kinds of claims and this debunked pseudoscience. And we asked that publishers in general refrain from publishing this kind of work, which is extremely problematic. Right. Like, this kind of work legitimizes and perpetuates real world harm, with severe consequences. We saw a recent case of a wrongful arrest using this kind of technology. This is a matter of life and death for vulnerable populations. And so it has to be considered according to a very rigorous and high stakes evaluation metric.

I'm curious about what responses you got originally to that open letter, because as I just said, you know, you have thousands of people who have signed on to it. But I am curious about maybe the opposite side as well. Like, did you get a fair amount of pushback? Did Springer respond? And what is kind of the state of it now?

Generally speaking, we were very blown away by how positive the reactions were. We had very few people who were defending it, which, maybe it just didn't get picked up by the right or alt-right media or whatever, but most of the negative discourse, I would say, was just misconstruing it as a single bad apple study or whatever. And we tried not to engage with that sort of logic, because ultimately we can go miles and miles down into that rabbit hole and really get nowhere. But some of the other responses that we saw were, for instance, senior academics particularly, or people high up in industry, who might have been hesitant to sign their name on because they hadn't seen the letter or because they thought that possibly the claims could be valid under certain circumstances. And quite honestly, that was pretty disappointing, to see people still sort of latching on to this techno-optimist kind of perspective without really understanding how deeply rooted these problems actually are. Largely, though, that blowback came from computer scientists, and same with the bad apple take. Of the folks who ultimately responded to the letter, it was a pretty mixed bag. A lot of, as you might imagine, the radical folks in, like, quote unquote ethics of AI; we saw a lot of people signing on from sociology, anthropology, history, and a lot of law scholars as well.

And then, of course, tech workers and journalists who cover tech, who might not necessarily be in the academy but are certainly adjacent, as well as some prominent machine learning researchers, some of whom have pretty problematic histories of their own. I would say the most surprising and kind of heartening response was that we saw at least one person who had done work like this in the past and didn't actually understand, in the moment, why this was problematic and what sort of narrative that work fits into. But they sort of repented, which was really interesting, and I think an indication that we had done something right. One of the things that we're trying to do is build a coalition across disciplines and across, you know, industry and academia. And seeing somebody who had benefited from this type of work come out and say, yeah, I messed up, this is actually really problematic, and I have contributed to it, that was really powerful. And ultimately, you know, if that's the only thing that we accomplish, that we shake a few people out of that cycle of just perpetuating it, then I'd say that the letter was, in some regards, a success, and this letter is kind of symbolic of the start of a movement.

At least, this is something we've all spoken offline about before. And so let's kind of dive into what this letter symbolizes and represents, and what it's hopefully leading us, and also the coalition, towards. And it's this notion of academia facilitating oppressive technology. Right. So what is that?

What does that actually mean, and how is it being enacted in today's day and age?

I think academia, like any other site of work, and I know it's uncommon for some people to think of academics as workers, but we are, and we have obligations to our communities. You know, academia participates in this kind of co-production of race and technology. And there are ample fields, my own included, where tech fixes for social problems are not only common practice, but they're really venerated and seen as innovative and socially benevolent. Right. And look, in many areas they can be. But that's precisely why we need to keep asking how the technology will be used and what assumptions are required for it to work. You know, your point about oppressive technology? I think it really thrives under this veneer of objectivity or neutrality, and under a lack of reflexivity. And I think there are definitely clear incentives for building and supporting these kinds of claims using technology. You know, tech is like this affordable and efficient way to do policing and surveillance work. And we see this in so many examples. You know, companies like Palantir, who built some of the most insidious and opaque mass surveillance tools, offer up their services to governments for free. And we're seeing this in Canada and globally with facial recognition systems and moratoriums on their use. And so that's a huge part of the kind of techlash that you're describing right now. I think, to answer the other question about what it maybe symbolizes: I think that this kind of veneer of tech solutionism is beginning to crack, or rather we're seeing pockets of ruptures, and it's really led by community organizations.

You know, here I'm thinking of the work of Data for Black Lives, Our Data Bodies, the Carceral Tech Resistance Network. And a lot of it has coalesced specifically around facial recognition and surveillance, and, you know, there's fantastic work there by the Detroit Community Technology Project, the Algorithmic Justice League, Fight for the Future and others.

So I think it symbolizes attempts to strengthen academic and tech worker alliances as well. And hopefully that's something that continues to grow. We kind of joke about 2020 being the year of open letters, Audrey, right? Yeah, yeah.

This is something that we've seen a lot. Right. You know, there was the group of mathematicians who had sort of come together to make a similar statement. Of course, we saw that garbage Harper's letter. And we recognize that open letters are certainly not going to change the world. Right. Anybody who thinks that is deluding themselves. But we think that it's indicative of these organic clusters of institutional and extra-institutional mobilizing that sort of pop up here and there; Sonja called them ruptures, and I think that's really appropriate to say. We think that capitalizing on this sort of energy that people are feeling is a potential vector for building what we're calling infrastructures of resistance, which is something that we're working on right now through our coalition, in trying to build a stronger resistance network to combat these morally reprehensible technologies.

We heard that a lot in our interviews with folks: there's something different happening right now. Like, some people are seeing these systems in a different way. Part of that is there are more social scientists being involved in the conversation. Part of that is just things that have happened in our world, the murder of George Floyd and beyond, you know, everything that's been going on in our world. And there's still this question of the how.

Right. So we are angry. We are frustrated. And there are a lot of folks out there who are looking to do something.

And it sounds like you all are kind of on to a particular way of doing something. And I wonder if you could say more about what it looks like, you know, in a day to day way, how you build those infrastructures of resistance or pockets of resistance towards creating justice and equity in these systems that are just so immense and complex.

Yeah, I mean, that's the million dollar question, right? Like, how do we do this? It's really hard. And Audre Lorde talked about how hard it is to dismantle power structures using tools within those power structures. She talked about this back in the 80s. I think the answer is really different for each of those groups, and I'll sort of address each of those and talk about how we fit in there. So within the academy, for instance, individual students or individual workers have almost zero power, right, because a lot of academia views students as customers, and staff and faculty are really workers, employees of the university, including grad students, I will say. So ultimately, each of us collectively have this enormous power to resist this sort of work, and a lot of the students, grad students, undergrads, faculty members and staff members can gain a lot by emulating worker unions, something that we're starting to see a lot of happening with, for instance, tech workers; the Tech Workers Coalition strikes me as one in particular. I was involved in some work like this, a grassroots effort at my school, Rensselaer, to do this kind of organizing. And I've seen it do a lot of good in mobilizing students around a single cause or a series of causes that ultimately impact each of us in different ways.

But I'll say that educators and other academics play a role here too, as they shape the discourse in these spaces. You know, you hear in politics a lot about the Overton window, and I think I'll draw on that a little bit: by having different conversations and asking different questions in our classrooms and in our labs, we have the power to make our politics explicit in pedagogy, because no matter what we're doing, we're doing political work. Right. I mean, the insistence that we're not doing political work is, in fact, political. Right. You know, you're maintaining the status quo there. But by being explicit and saying, no, this is how the work that we're doing is political, and this is the sort of political work that we are doing, I think we can really shape how students and how researchers view their work from a social perspective. The ways that we are trying to do this, sort of individually and as a coalition, are by drawing on work done by community activists and by folks from all sorts of different, like, radical philosophical and political ideologies, to sort of bring that into our spaces.

Because, as I said earlier, it's not as if these spaces aren't political whatsoever. Right. Computer science is very deeply political. And by shifting how we relate to each other, how we relate to our work, and how we relate to those downstream who are ultimately reacting to it and being affected by it, we can actually do some work here. It certainly doesn't stop at, like, intellectualizing and doing research or reading. Certainly there's no substitute for more direct engagement or on the ground mobilizing, and sort of deferring to people, like our elders in this space, who have been working on these types of struggles for years. But I think one thing that we're trying to bring to this, and that we're seeing more and more, is people allowing activism into the academy and letting the academy prop up activism. And I think that's ultimately what we can do collectively that would be most effective: to lend support to these struggles that have been going on for centuries.

Yeah, I think that's incredibly important. I think interdisciplinarity, you know, cross-disciplinary work, is a term that gets thrown around a lot in academia. But it's so critical and crucial for AI. You know, for instance, I'm just thinking of one of the projects we have at the Centre for Media, Technology and Democracy at McGill, where we really try and do this work. And it's tough, right? Like, we have a project called Tech Informed Policy and we do these tech-informed policy briefs, and just writing collaboratively with computer scientists and machine learning students, you realize quickly how limiting speaking different kinds of disciplinary languages is. And if we think of, you know, the definition of radical as kind of excavating these unlikely connections, then the tough work of cross-disciplinarity that is reflexive and care-focused is radical work. And I think the possibility for that can be found everywhere. And I think, too, just working to demystify AI and kind of grounding it in its historical context is important, you know, especially if we understand AI as a kind of particular form of power. Sure, the scale and kind of the monopoly of big tech is revolutionary, but it's not historically unprecedented. And I think that's important.

I want to sort of underscore that for a moment. Sonja mentioned that interdisciplinarity is hard, and I think that bears repeating. It's not just hard from the perspective of sociologists and humanists. For computer scientists who find ourselves at the border between computer science and humanistic inquiry, it's really difficult. And I've talked about this on Twitter and more recently elsewhere: ultimately, it can be really traumatizing to try to do this work, because you can't have an individualistic solution to a structural problem. And as much as we would like to intervene, as much as we can, and I think we should, it's something that ultimately we collectively need to address. And so I'm thinking here particularly about very precarious workers: grad student workers, Black and Indigenous people of color, women, queer and trans folks. Intersections of that precarity make it very difficult to do this interdisciplinary work, especially when the entire structure is designed to cordon off disciplines into different schools, into different buildings, into different campuses, and often totally different institutions. I think that's something that we need to grapple with, and that's something that faculty members and advisers need to do a lot of work to sort of ameliorate. Educators have a lot of power here, and they need to support us in trying to do that work.

So you've both said the million dollar word at different points here. You both said radical. You said, you know, power, uprooting systems of oppression. And you had kind of a brief description of your definition of radical from your perspective, too. So, Sonja, could you just quickly define how you view the word radical as it relates to AI technology, and then maybe explain a bit as to whether or not you think that the coalition's work fits within that definition?

You know, Audrey defined radical as this kind of fundamental reimagining, and I'll let her expand on that a little bit. But I think that's so key here. It reminds me of Ruha Benjamin, who you've had on the show and whose work is so fantastic. You know, she talks about imagination as a contested field of action. Right. And I think about that a lot. I think about how the most precarious groups in our society, you know, don't have this luxury of reimagining futures, and, you know, she really talks about how many of us live in other people's imaginations. Right. So I think, for me, the term radical really evades kind of a fixed definition. I think of it more as like a process or a verb. I would define it as shoring up or finding unlikely connections. And by that definition, I think the coalition's work is radical, although I'm not sure if everyone would find it as such.

Yeah.

So Sonja mentioned my definition of radical here, being this sort of fundamental reimagining of our constructed reality, because it is constructed, based on critical inquiry, both internal and external. And so I think that the word radical does a lot of work here in a lot of different contexts. Right. And it serves different purposes, and it lends power in some spaces and removes power in other spaces. And I think we need to acknowledge that. I also think that in spaces where it is powerful, it is likely, or liable at least, to be co-opted, and in spaces where it's not powerful, where it's actually, like, problematic, we'll see it being weaponized to sort of blur the lines between different types of engagements. And, you know, we can debate over who is radical and who is not. And I'm sure a lot of people would call me radical, I'm thinking of a lot of people in computer science, but also a lot of people probably would not call us or our work radical. You know, an insurrectionary anarchist would certainly not call what we do radical. And I think, to Sonja's point, it is something that we are doing. We are doing this reimagining, this critical inquiry. And in context, that can be the sort of work that we're doing, writing and coalition building, but, you know, we're seeing it on the ground, we're seeing it in people who are resisting police brutality, we're seeing it in people who are resisting capitalistic social relations, we're seeing it in people who are resisting colonialism. And so I think any just conception of radical or radicalism needs to acknowledge the deep history of the term and of radical thought and action, while also recognizing that it really does mean and do different things in different places.

As we move towards closing, I was wondering if you both would be willing to share any final thoughts that you may have, and also your vision for the coalition going forward. Maybe, you know, in like five years. What are your grandest aspirations for where this coalition can go?

I think that the most important thing that I drew out of this entire process of writing the letter and forming the coalition is that our work is never done. Our work is the process: it's the process of taking care of each other, it's the process of doing justice and acting in ethical ways towards the connections and the relationships that we have. And I'm drawing on a very specific sort of philosophical thought here, of building relationships and gaining sort of like freedom and power through our connections, our ability to affect other people and be affected by other people. And sort of to that end, I think a success, you know, five years down the line, would be a strong group of people. And, you know, I don't think that the coalition as an institution or organization is a particularly important thing for me, but really the affinity that we're cultivating among different people. That's really what our next project is focusing on, which I'm sure everybody will kind of see later. But to that end, I would love to see the ability to support our collaborators, not just in a caring way, the way that we, I think, do right now, but also materially support our collaborators. I mean, I can't tell you how much work the letter was. I think all of us were treating it as a 20 hour a week job for five weeks. And that's certainly not sustainable. But if we can ultimately support our collaborators through grants or through fellowships or what have you, I think that would be really powerful: sort of disengaging ourselves from the institution while still relying on the institution of, you know, the industrial academic apparatus, I suppose, but really, like, building solidarity among different people and being able to build a sort of autonomy in this way. That's what I would call a success in five years.

Yeah, I think that's a great definition. Maybe just closing thoughts: I think at the end of the day, we need to kind of collectively reclaim the problems that AI and other tech is, like, marketed to solve. And so many scholars, and so many folks in the space, and so many folks who you've had on the show, have demonstrated repeatedly how attempts to, you know, eliminate racism or discrimination actually perpetuate racism and cause real world harm.

So I think reclaiming those problems is a powerful, and dare I say radical, act. And for folks who want to be involved with the coalition going forward, or who want to find out more about the two of you as individual researchers, where can folks go to get more information or to connect?

So you can always go to for critical dot tech. That's our very rudimentary website right now. It has information on each of us, and it has links to the work that we've done, both this Medium post and also a PDF, if you want to include it in, say, a syllabus. Feel free to cite us, and you can always email us at Coalition for Critical Technology at Gmail dot com.

And we'll be sure to include all of that and more resources on the show notes page. But for now, Audrey and Sonja, thank you so much for coming on the show and telling us a bit about your story and the Coalition for Critical Technology.

We want to thank Sonja and Audrey again for joining us today for this wonderful conversation. And Jess, let's throw it to you first: what did you think about this conversation? What stood out to you, and what do you want to talk about?

Well, there's a lot of this conversation that really hit close to home for me, especially Audrey's comments, because both Audrey and I are computer scientists. And so I think that we've been surrounded by similar communities, when we've been really interested in ethics especially. And so this is something that I talked about in our recent episode with Anima Anandkumar, just some of the trials and tribulations of being interested in, and being an advocate, a public advocate, for ethics within the tech community, and especially within the computer science community, as a computer scientist. And so for me, I think one of the comments that Audrey made that really stuck out to me was how we as researchers, especially as technical researchers, have this duty and this responsibility to think about the social implications of our research before we begin the research. And so something that I see come up a lot, actually, especially in the computer science research community, is that there's a lot of potential research that sounds like it could be really interesting. So I'm talking about algorithms that can predict gender from faces; there was an algorithm recently that came out that could predict BMI from images that it pulled from, I think, a Weight Watchers data set; and predicting criminality from people's faces, like we saw in the paper from this interview. And these seem like they could be really interesting algorithms to create, and it seems like they could solve really interesting problems. But it's much more important to think about the impacts of this work than it is to just dive into creating these algorithms without thinking about the ethical impacts and the consequences on real people through the creation of these technologies.

Well, let's stick with that for a second. Like, why do you think that people think these things are good ideas in the first place?

So in the case of this paper, right, this paper that was titled "A Deep Neural Network Model to Predict Criminality Using Image Processing," I feel like it's easy on our side to claim, like, the ethical high ground, where, I mean, I feel like it is kind of pretty easy to see where this can end up. Right.

And just how much of a slippery slope this is in terms of reinforcing racial bias and stereotypes and already oppressive systems of criminality. But if I take a step back from kind of what I know around that, and I try to put my mind in the place of the people who were writing this paper, like, from both of our ends as researchers, I think it's important for us to ask: if something seems so obvious to us, why is it not obvious to other people? So for you, Jess, why do you think that people do this? Is it just because these are interesting questions? Is it just because, like, maybe we as engineers, or even as academics, are just trained to ask interesting questions without thinking about the consequences of our research or of our actions or design choices? Or is it something else?

I wouldn't even say it's because we're academics. I think it's because we're human. We're very curious creatures; we're kind of like cats in that way. And I think that, I mean, the first thing that comes to my mind is that computer scientists aren't educated, at least not commonly, to think about the consequences of the questions that they're asking and the algorithms that they're writing. And this is something I experienced. I mean, I told a little bit of my story in the very first episode that we came out with, but I literally learned in my very first data science class how to scrape the web, and pretty much any website on the web that doesn't have an openly accessible API, so even websites that you're not really allowed to scrape. I learned how to do that in my first data science class without any mention of the ethics of doing this. And at the same time, I was lucky, because I was taking a computer ethics class, and I learned at the same time that that's not OK, and that I should really watch when I do that, where I do that, and the impacts of doing that. But I was lucky that I was in that computer ethics class. And I think a lot of computer scientists don't have both sides of the coin when they are being trained to do what they do as computer scientists. And I don't think this is anyone's fault necessarily; this is just the system that we currently live in. And so I don't think that it is a computer scientist's fault, or that it makes a computer scientist immoral in any way, to ask interesting questions and want to find interesting results. I do the same thing all the time. But I think now we have to ask ourselves: what do we do to educate computer scientists and to change the field of computer science so that we are asking these questions of consequence and risk before even thinking about designing an algorithm?

Sure, and again, like, I generally do push back a little bit, because I do think that what you said at the beginning is right on. Although computer scientists, at least right now in our history, may be uniquely positioned to cause particular harm, because of the fact that, you know, like 50 engineers in Silicon Valley can impact two billion people in a way that was never really the case before, I do think that it is a uniquely human problem. It's not just in that computer science space, even if there's a particular, I guess, focus for us on that because of this podcast and things like that.

But I think that, in general, that question of intentionality, and the question of, you know, what is the impact of what we do in the world?

How do our actions impact other people? That's something that we're not necessarily socialized to think about, I think, across the board, honestly. And obviously there are also some exceptions to that rule.

But I know that, like, empathy isn't like taught in elementary school.

Right? It's not like math or science. You know, you do a little bit of algebra and then you do a little bit of empathy work. Right.

There are different things that we prioritize, right, in that space. But all of this, I think, goes back to the question of this particular case study, which for me is: who has the responsibility to shape the narrative around these topics? So as much as Audrey and Sonja and the coalition were calling into accountability the authors of this article, they were also very much calling into accountability the editors over at Springer. And I think that is just such an important piece of the story: these are massive systems, and there are a lot of people who actually have agency in the story, who we might not always think about as the people that need to be held accountable, but really it's across the board. It's not just the engineers. It's not just the people who are looking to publish these studies, although they need to be held accountable too. It's also the people who are publishing the studies, the people who are sharing the studies, the people who are then making those studies into technology, who are strategizing around them, you know, the HR and PR departments. Like, we're all culpable in this to a certain degree, especially if we have power or agency in these systems, for sharing, really sharing any of this across the board.

And I think, again, that's a human problem. And it's a problem that we need to look at in a holistic and a macro way.

Yeah, well, maybe this is the point in this episode where we now break the fourth wall and we say: you, listener, whoever you are, whatever position you're in, if you are a researcher, if you are a coder or an engineer, if you are sitting on the editorial board of Springer, if you are involved in academia or the tech industry in any way, you are in a position to be asking yourself what the risks are of the questions that you're asking and the technologies you're creating, and the potential systems of oppression that you are enabling. So it is up to all of us to figure out where we are uniquely situated to do something about this, and to stop sitting still and being quiet. Jess, before we sign off.

Listeners probably don't know, since they weren't with us right before recording this.

But you were showing me this set of pictures of this island in Japan with all these cats.

So just, like, picture an island with hundreds of cats; go look it up, it's great. And when you were talking earlier about how curiosity is like a very human thing and we're all kind of like cats, I kind of got that image, and I started thinking about how, even if we are a bunch of curious cats just kind of on this island, you know, we call Earth or whatever, it doesn't absolve us of our culpability.

Right. Like, curiosity is a beautiful thing and creativity is a beautiful thing. But there needs to be a level of intentionality to that curiosity.

Anyway, that's my image of this. Do you have any thoughts on that image of cats, Jess, and how it might impact our carceral systems? I think you said it all. You summed it up.

For more information on today's show, please visit the episode page at radicalai.org.

You really don't want to know what I have to say. I love it. Jess, you said it well.

And if you enjoyed this episode, we invite you to subscribe, rate and review the show on iTunes or your favorite podcatcher. Join our conversation on Twitter at @radicalaipod. And, as always, stay radical.

And for those who are curious about this cat island, it is called Aoshima. It's spelled A-O-S-H-I-M-A. Go look it up.

We'll make sure to include it at the very, very end of the show notes, because it has very little to do with the very serious topic.

Yeah, but seriously, go look this up if you just want to feel good, if the world's getting you down. There are a lot of cats on this island.
