Episode 21: Finding Joy in Meaningful Work: AI for Social Good in Social Work & Social Justice with Eric Rice



Where is the limit in the use of technology to solve societal problems? How can social work utilize AI to address social injustice? To answer these questions and more, we welcome Dr. Eric Rice to the show.

Eric is an associate professor and the founding co-director of the USC Center for Artificial Intelligence in Society, a joint venture of the USC Suzanne Dworak-Peck School of Social Work and the USC Viterbi School of Engineering. Rice received a BA from the University of Chicago, and an MA and PhD in Sociology from Stanford University. Eric’s research focuses on community outreach, network science, and the use of social networking technology by high-risk youth.

Follow Eric Rice on Twitter @EricRicePhD

If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on Twitter at @radicalaipod.

Relevant Links from the Episode:

Eric Rice on Huffington Post

Eric Rice’s Profile on USC website

USC Center for Artificial Intelligence in Society


Transcript

Eric Rice_mixdown.mp3 was automatically transcribed by Sonix. This transcript may contain errors.

Welcome to Radical AI, a podcast about radical ideas, radical people, and radical stories at the intersection of ethics and artificial intelligence. We are your hosts, Dylan and Jess. In this episode, we interviewed Dr. Eric Rice. Eric is an associate professor and the founding co-director of the USC Center for Artificial Intelligence in Society, a joint venture of the USC Suzanne Dworak-Peck School of Social Work and the USC Viterbi School of Engineering. Rice received a BA from the University of Chicago and an MA and PhD in sociology from Stanford University. Eric's research focuses on community outreach, network science, and the use of social networking technology by high-risk youth.

A few of the big-picture questions we cover in this interview include: How can social work utilize AI to address social injustice? How do we define social good, and then how do we design technology to uplift that social good? And where is the limit in the use of technology to solve societal problems? It was our pleasure to have this conversation with Dr. Eric Rice, and now it is our pleasure to share this conversation with all of you.

All right, well, we have on the line today, Eric. Eric, how are you doing today? I'm good, I'm good, thanks. Yes, absolutely, it's great to have you here. And I was wondering, before we get into more of the specifics of your research and the exciting projects that you're a part of right now, if you could tell us a little bit about how you've arrived here and the work that you're doing, and a bit about what motivates you as either a researcher or just as a person doing the work you are doing.

I don't have a very linear academic trajectory. I mean, I think some people finish college and then they go to graduate school; they know exactly what they want to do, and they go and become a professor doing that. And I got to AI via a winding route. To be honest, I didn't know that I was doing AI research when I was first doing it.

I thought that I was doing interesting mathematical modeling with a computer scientist. And then he told me that we were going to go and visit an AI conference. And I thought, I don't even know what AI means, what are you talking about? And then it turns out, you know, this is Milind Tambe. It turns out he's one of the big-deal guys in AI, right? And really probably one of the biggest-deal guys in the AI for social good arena from the computer science side. And as he and I became friends and got into this work, I have subsequently become somebody who does AI research and actually knows that I'm doing AI research. But originally I was trained as a sociologist.

I was a math nerd when I was an undergraduate at the University of Chicago.

I started my first semester thinking that I was going to be a math major, and then I ended up falling in love with social theory. And so I became a sociologist and took a lot of math classes along the way as electives. Some people take, I don't know, art history; I took math as my college electives because I thought they were fun classes, which has held me in good stead as a sociology-trained person who is a social work faculty member. I'm in a school of social work at USC, the Suzanne Dworak-Peck School of Social Work, and I've been in social work now for more than a decade. I started in 2009 at the School of Social Work, and I finished my PhD in 2002. In between there I had a little stint at UCLA, where I was a non-tenure-track researcher in a community health research center, which is really where I started to get involved in the work that inspires me, which is the work that I do with homeless adolescents. And what happened was, when I finished my PhD program, I was a theoretical sociologist. I did a lot of very axiomatic stuff that's really akin to microeconomics. And we did experimental studies, and I really thought, wow, six people are going to read this stuff in the future, and three of them are going to automatically like it because I'm in their theory camp, and three of them are going to hate it because I'm not. And what am I doing with my life?

And I reached out to my networks, and one of my early advisors, who had moved away from Stanford by the time I was graduating; his dad was a professor at UCLA who was running an HIV/AIDS training program. It was this interdisciplinary program that involved people from sociology, psychology, social work, and public health, where he was training his fellows, and he had a couple of postdoc positions. And I applied for one of them. And when I was interviewing with him, he said to me, let me put it this way, Eric: if you do HIV work, you'll never go home at the end of the day and think, what am I doing with my life?

And I thought, this guy's reading my mind.

This is exactly the problem that I was having: I felt like what I was doing wasn't terribly meaningful, although intellectually it was fascinating, because I was doing this very rich work about social networks, and we did a lot of work around axiomatic thinking about those models.

And I even had some professors that were interested in some computer-simulation-type theory development work within sociology. And so when I started working with Milind back in 2013, 2014, it was kind of a return to my 1990s sociology roots, because I had become this very applied social network researcher who worked on issues of HIV/AIDS with homeless youth in L.A., and I thought of myself, and still think of myself, as a very grounded researcher who knows young people who are experiencing homelessness by name, and they know me by name, and we have relationships with one another.

And I understand what's happening in their lives, in the rich context of the complexity of their lives. And these people are not, you know, the sort of people that I can reduce to little agent models, which is the sort of stuff that I was doing as a graduate student.

But when you start to work on AI modeling, having these very foundational, fundamental, almost axiomatic views of human behavior can be very useful when you are trying to create mathematical models of human behavior that you can then do computational experiments with, to see what's going to happen, which is the sort of work that we got involved in.

And so, like I said, I wasn't even sure that I was doing AI when I first started doing AI. You know, it was a total shock to me.

And then some day in the recent past, I woke up and realized that I ran an AI center, called the Center for Artificial Intelligence in Society, at USC. And I'm still somebody who was trained as a sociologist, who is a social work scientist, and who thinks of himself as a social worker. And, you know, I'm not quite sure how it is that I'm running an AI center, except that, I mean, I know the series of events that led up to it; they just don't always make sense to me. So, yeah. Sometimes in academia, things happen because you are pursuing things that feel exciting and interesting, or at least that's what happens in my life. I've been very fortunate to be somebody who has chased after things that I thought were interesting, and been lucky enough that other people thought that they were cool too. So in the 1990s, when I was a PhD student, I was really interested in social networks. And then by the early part of the 2000s, by the time you get into the MySpace, Facebook days, by the sort of 2006, '07, '08 era, suddenly everybody's talking about social networks.

And here I was sitting as a person who had been trained in thinking about network models, and I was very lucky. And I feel the same thing is true with the AI boom of the last few years. I started doing this work with Milind because I thought it was really interesting; I was chasing it because I thought it was cool. And then, you know, somewhere around 2017, 2018, I kind of woke up, and there the world was, really jazzed about AI, and it was AI this and AI that. And I thought, well, I'm sitting here doing AI work; I guess I'm in the right place at the right time again, which is kind of cool. I don't think it's likely to happen a third time. But, you know, lightning in a bottle twice is pretty awesome. So.

So, yeah.

And like I said, I think that my personal motivation for this work, aside from thinking about things that I find intellectually interesting, is really this passion that I have for social justice issues.

I mean, that's really why I ended up in a school of social work and not in a sociology department or a school of public health. Public health has a lot of concern about social justice, but not the way that social work does. Social work is inherently an area that is fundamentally concerned with social justice and addressing issues of social justice. And to be the kind of researcher that I am, who's interested in homelessness and HIV prevention and systemic racism and homophobia and these other issues, you know, in more traditional disciplines like sociology or psychology or economics, I think you can sometimes become marginalized if you are working on issues that are about marginalized populations, because those are disciplines that are interested in explaining human behavior that is more the experience of most people. Right.

Whereas social work was founded as a practice discipline trying to help address the causes and consequences of urban poverty (that's really where it started, in industrialization), so working on homelessness and systemic racism and disease prevention is exactly what social work does. You're not marginalized at all. You're in the middle of the pack.

And that's a nice place to be, to have colleagues that also think that the work that you do is interesting and exciting.

And in your work, since you are doing AI for social good, people kind of label you as a researcher doing AI for social good. I'm wondering if you can maybe unpack what that is, just at a high level, and then explain what that means for the research that you're doing specifically right now.

I mean, I guess at a high level, what it means is that, well, first of all, I am not an AI person by training; I'm not a computer scientist. So I partner with computer scientists like Phebe Vayanos and Bistra Dilkina and Milind Tambe and others, and we collaborate on projects where we can use AI technologies and techniques to do something, usually something that has to do with intervention work. Social work as a discipline is very interested in intervening in the world and trying to address social injustice, not just observing it or cataloging it. And so AI can be very useful in a couple of ways.

I mean, for one, you can use techniques like machine learning to do data analysis when you're using big data sets and you're trying to do something like predictive analytics. You're trying to understand what the causes of a particular outcome are, and whether we can do some data mining that's maybe a little bit more sophisticated than what you can do with linear statistics, which is what most social scientists are trained to do. Right.

Although, honestly, one of the great laughs for me when I first got involved in AI work was when he explained to me that logistic regression was something that machine learning folks do; that's the bread and butter of a social scientist when you're doing public health and social work types of research. So what they were calling machine learning, I was just calling statistics. And so we realized that there was more overlap than I think we understood. But the way that we think about data is a little bit different. They're more inclined to let the models tell them what is going on. Social scientists tend to have more a priori assumptions about what's happening in the world that they want to be testing, as hypothesis testing, as opposed to the more exploratory data mining that a computer scientist would do. But regardless, logistic regression is logistic regression, whether you let the computer decide which variables you're going to keep or you decide which variables you're going to keep. So what AI for social good means to me is that I'm a collaborator with computer scientists, and we're trying to solve really thorny social problems like homelessness or systemic racism or HIV prevention, and we're trying to create new technologies to facilitate new solutions. Right. And so one of the key projects that launched the center, that I work on, was an HIV prevention project.

And so I was interested in designing a social-network-based HIV prevention intervention. And one of the thorny issues there is that who delivers the messages about HIV prevention, in a norm-changing, message-disseminating campaign, is sometimes as important as the norm-changing messages themselves, right? Telling your friend to wear a condom when they have sex is not that challenging of a message to concoct. But if I am telling it to my friends, they're going to hear it better than if somebody who they care about less tells them that. So one of the things that was really a challenge for us when we were designing these programs was: how do we have a wide reach quickly, so that with this very transient group of young people, we can get the message out really fast, and how do we find these key influencers? And what's really cool about the work that we did, with the modeling that Milind Tambe and Amulya Yadav and Bryan Wilder, particularly those three, helped us design for this particular study, was that they were able to create these influence maximization algorithms that could really beat out any more static computational "oh, this is the person with the most connections" sort of model. And when we did the field tests, it actually worked. Right.

Like, we actually found out when we did this field test with about 800 youth. What we did was a study with three arms. One was just observation: services rendered at the drop-in centers as usual. Another arm of the study was a group of youth who were selected based on being the most popular people in the network. And then we used the algorithm that had been designed, an influence maximization algorithm, to actually select the people who would be trained as peer leaders, who would disseminate these messages. And we found out, when we did this with about eight hundred youth, that in fact, in the AI-driven arm, not only did we have more young people reporting increases in condom use, especially in anal sex, but the changes also happened faster, which for us was really important. Because the idea is that you want these messages, for youth who are experiencing homelessness and are very transient, to happen really fast, because they're going to move; the friendships can break very quickly. And also, some young people are going to leave your network because they're going to go to another city, or they're going to get housed, or they're going to go to jail.

I mean, there are all kinds of reasons why young people leave the network, but they do. And that kind of work is really exciting. And I guess it serves as a good case study for what it means to do AI for social good in practice, right? It's such an abstract concept that sometimes I think it's a little bit easier to say: oh, we had a very specific influence maximization algorithm that we built for this context, that solved a particular intervention problem for us, and then we as researchers tested it out to see if that in fact worked. And so they, as the computer scientists, did a lot of computational experiments using existing data that I had on youth networks. And then I, as a social work scientist, did the actual boots-on-the-ground work, training MSW interns to deliver the intervention. They delivered the intervention to several hundred young people over a couple of years. And, you know, we were fortunate to find out that our efforts were well rewarded, that this thing actually works, which is really cool. And now we're doing the boring academic thing of turning this into a manuscript and sending it off to a journal to be published. But you've got to do that stuff, right? You'll never get tenure if you don't publish your stuff. You can't just do the fun stuff and then move on to the next fun thing. You've got to prove that you did the good stuff.
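[Editor's sketch] To make the mechanics of the study design concrete, here is a minimal sketch of the kind of influence maximization described above: greedily choosing peer leaders by simulated message spread under an independent cascade model, compared against simply taking the most-connected ("most popular") youth. The toy network, the 10% pass-along probability, and all function names are illustrative assumptions, not the algorithms the team actually fielded.

```python
# A minimal sketch of influence maximization for peer-leader selection,
# contrasted with a "most popular" (highest-degree) baseline.
# The toy graph and parameters are illustrative assumptions only.
import random
import networkx as nx

def simulate_cascade(graph, seeds, p=0.1):
    """One independent-cascade run: each new adopter passes the message
    to each neighbor independently with probability p. Returns reach."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for nbr in graph.neighbors(node):
            if nbr not in active and random.random() < p:
                active.add(nbr)
                frontier.append(nbr)
    return len(active)

def expected_reach(graph, seeds, p=0.1, trials=100):
    """Monte Carlo estimate of how many youth the message reaches."""
    return sum(simulate_cascade(graph, seeds, p) for _ in range(trials)) / trials

def greedy_seeds(graph, k, p=0.1):
    """Greedily add the node with the largest marginal gain in expected
    reach: the classic influence-maximization heuristic."""
    seeds = []
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: expected_reach(graph, seeds + [n], p))
        seeds.append(best)
    return seeds

def popular_seeds(graph, k):
    """Baseline: just take the k highest-degree ('most popular') youth."""
    return sorted(graph, key=graph.degree, reverse=True)[:k]

if __name__ == "__main__":
    g = nx.connected_watts_strogatz_graph(100, 4, 0.2, seed=1)  # toy youth network
    print("greedy :", expected_reach(g, greedy_seeds(g, 5)))
    print("popular:", expected_reach(g, popular_seeds(g, 5)))
```

On toy graphs like this, the greedy seeds tend to be spread across different cliques rather than concentrated in one popular group, which matches the dynamic Eric describes later in the episode.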

So one question that I have, and I'm coming from a religious studies and philosophy background in terms of my PhD, which probably means I'm about to ask an obnoxious question, and it's true, I am. Where I get caught in this conversation is that question of social good. Like, who gets to decide what the social good is? What are the metrics of social good? Especially if it's coming from the academy, which historically, maybe, there have been some critiques about the academy not necessarily helping with the overall social good, depending on where you're sitting, right? So for you, as you go to design these studies, how do you think about the question of social good, and then where AI can participate in that conversation?

So as a discipline, social work is very concerned with the ethics of doing social work and with the act of engaging in trying to address social injustice, right? So I make the joke: what, is it not good enough that I'm just a social work professor? Because on some level, it's the guiding principles of social work that really direct the mission of social good. And in fact, when we were creating this research center, which I call CAIS (the Center for Artificial Intelligence in Society is CAIS, so if I refer to CAIS, that's what I mean), you know, when we were building CAIS, we were looking to the grand challenges of social work, which are things like ending homelessness.

The grand challenges of social work, for those of you who are not in social work, are a series of 12 challenges that the American Academy of Social Work and Social Welfare came up with a few years ago. The idea was that if there was meaningful movement made along any one of these 12 problems in the next decade, the world would be a better place for it. And there are things like ending homelessness, or addressing racial inequities, or healthy development for all youth. One of them is even very prosaic: it's technology, the promise of technology for social good, which I think falls into the very same trap that you were describing before.

But most of the challenges have a little bit more content to them than the technology-for-social-good one does. And so part of my thinking is that this is the sort of thing that we use within the discipline to think about what the work is that we're doing to try to make the world a better place, and those are the kinds of things that we think about. But interestingly, you know, social work is a very pragmatic discipline, as opposed to, say, religious studies or philosophy. So we're not necessarily

arguing about: this is our definition of social good, this is how we are going to determine who determines what social good is. We're a little bit more: we're going to do it and see how it goes, and we're going to try to do the best that we can. That's kind of, you know, the tradition of Jane Addams. Which is not to say that it's thoughtless or that it's not intentional.

It's just very pragmatic. And so in that respect, you know, we have these 12 challenges from social work that are these guiding sort of problem areas, and most of the work of my center at least falls under one of them. I mean, most of the stuff that we're doing is, you know, work around health and well-being, around substance abuse.

We're doing work around fairness, bias, and equity issues. We're doing work around homelessness. We're doing work around suicide prevention specifically; we've got a couple of projects there. We're doing some work about the impact of global climate change. We've recently done some work trying to help with covid. One of the studies is actually around where we could allocate mobile testing sites so as to reach the most vulnerable individuals. Another study was trying to understand how to make fair decisions about how to allocate scarce resources in moments of crisis, like, say, what was experienced in Italy or in Florida, where you're running out of ICU beds and things like this. Right? So what are the priorities of those systems, who gets those resources, and how might those things be done? And there are ways that algorithmic thinking can help, you know, come up with ideas for solving some of those problems, or ideas for querying preferences.

In this case, the preferences are about who gets resources. It doesn't make the decision for you. But it's a series of algorithms that help you balance, essentially, comparisons: do you like choice A or choice B better? And it's an algorithm that helps you narrow down. Usually, when you're thinking about making decisions that are complex, you have thousands of options, and it helps you narrow quickly, by pairwise comparisons, with six or seven key questions, to what your preference really is. You know, is it for equity, or is it for efficiency, or is it for transparency? What do you really care about as the decision maker in this context? And if balancing those things, and balancing those things quickly, is something that you want to do, then we can help design tools that might assist that. But, you know, we're not trying to replace human judgment with most of our tools, either. It's usually more the idea that we're trying to augment human decision-making processes. And I guess that's another piece of the AI for social good pie: my research center is steeped in a school of social work and in engineering. It's a 50-50 split at USC between social work and engineering. And because of that social work side, you know, we're really not interested in replacing human decisions with algorithms, but rather in creating decision-making aids, or automating parts of the decision-making process that are very burdensome for people, that people don't actually like. I mean, people don't like to schedule things, right? Whereas computer algorithms are very happy to schedule things for you; the computer AI will not complain about the drudgery of scheduling, you know, it just does it. So, anyway.
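[Editor's sketch] As a rough illustration of the pairwise-comparison idea Eric describes, here is a minimal, hypothetical version: keep a pool of candidate weightings over criteria like equity, efficiency, and transparency, and let each "do you prefer A or B?" answer prune the weightings that disagree, so a handful of questions narrows thousands of options. The criteria grid, linear scoring, and pruning rule are all assumptions for illustration, not the CAIS tools themselves.

```python
# A minimal sketch of preference elicitation by pairwise comparisons.
# Criteria, scoring, and pruning rule are illustrative assumptions.
import itertools
import random

CRITERIA = ["equity", "efficiency", "transparency"]

def score(option, weights):
    # Linear utility: each option is rated on each criterion in [0, 1].
    return sum(weights[c] * option[c] for c in CRITERIA)

def elicit(options, ask, n_questions=7):
    """Keep a pool of candidate weightings; each answered comparison
    prunes the weightings that disagree with the decision maker."""
    pool = [dict(zip(CRITERIA, w))
            for w in itertools.product([0.0, 0.25, 0.5, 0.75, 1.0], repeat=3)
            if sum(w) > 0]
    for _ in range(n_questions):
        a, b = random.sample(options, 2)      # "do you like A or B better?"
        prefers_a = ask(a, b)
        survivors = [w for w in pool
                     if (score(a, w) >= score(b, w)) == prefers_a]
        pool = survivors or pool              # never empty the pool
    consensus = pool[len(pool) // 2]          # any surviving weighting
    return max(options, key=lambda o: score(o, consensus))

if __name__ == "__main__":
    random.seed(0)
    # Thousands of options, narrowed by a handful of questions.
    options = [{c: random.random() for c in CRITERIA} for _ in range(1000)]
    true_w = {"equity": 0.7, "efficiency": 0.2, "transparency": 0.1}
    ask = lambda a, b: score(a, true_w) >= score(b, true_w)  # simulated decision maker
    best = elicit(options, ask)
    print("chosen option:", {k: round(v, 2) for k, v in best.items()})
```

The point, as in the interview, is that the algorithm only surfaces an option consistent with the decision maker's stated trade-offs; the human still makes the call.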

Not sure if that helps, but those are the wanderings of my mind this afternoon. So there you go.

Definitely.

I'm pretty curious what your view is on the potentially negative thoughts towards some of the work that you're doing, specifically referring to the solutionism trap, which was introduced by Andrew Selbst and others in a paper a few years ago, and also the term techno-chauvinism. These are all kind of getting at this concept that some people think that technology is the best solution to some of the hardest problems in society, for better or for worse. And I'm curious if anyone's approached you with these terms, or similar terms, and critiqued your work in that way, and what your response might be to that?

Oh, I hope that the podcast conveys my sense of humor about these things. So a lot of the time I think, wow, you're talking to the wrong dude. Like, I still write things down with a pen and a notebook. And if we were in a different room, I could show you that I've got a shelf with like a couple of thousand LPs on it.

I'm a very analogue kind of Luddite sort of guy. I don't actually think that technology, for technology's sake, is a good thing.

It's more that I am interested in solving thorny problems and I'm interested in thinking about things in novel ways, because I think if you imagine a problem like homelessness, right, if it were something that a particular discipline could solve easily, then we would have solved it.

If all we needed was a bunch of really smart economists or a bunch of really smart social workers, then we'd be done. But we're not. We're mired.

And so I think it's about bringing together people that think about the world in very different ways. Which, it turns out, shock of shocks: social workers and computer scientists think about the world differently. Interestingly, they also think about the world similarly, which is kind of a funny thing, because engineers and social workers are both very pragmatically oriented folks. But, you know, our training is very, very different.

I mean, I took like five semesters of college calculus, but most social work professors did not. Most of us probably tapped out after that first semester when, you know, the general education requirement was done, and haven't thought about partial derivatives ever since, and are happy that that's the way their life is. Whereas, you know, obviously, if you're a computer scientist and you're watching this podcast, I mean, you guys talk about proving theorems all the time, right? And I know there are likely some social work scientists that have maybe not proven a math theorem since, like, high school geometry, if even then.

Disciplinary training aside, you know, those differences in perspective about how to think about and problematize things can be very important. Because when people that are smart and are thoughtful about their specific training, their specific disciplines, get together and start asking each other what's going on with the other one, we end up asking each other really fundamental, really hard questions. We ask the sorts of questions that, like, Dylan and you are asking: is it even OK to do this stuff? Right? So they'll ask me (usually, because they're computer scientists, they don't ask me whether it's OK to do the computer science stuff), they will ask me things like: how do you know that this aspect of human behavior happens in this way? And so then I find myself having to really dig down into: what are the assumptions about human behavior that are underlying this? What is the research that backs this up?

And, you know, vice versa. When I want to know what they're doing, I will ask them what probably seem like very foundational questions about how these algorithms work. And so I think we've all become much more in touch with our own work; we have to know our stuff a lot better, because we have to explain it to people that are not trained in all of the technobabble that we all get trained in as academics. But, you know, I don't know if I'm even answering the question at this point. I mean, I'm not trying to dodge it. Do people give me a hard time about something every once in a while? Yeah. People give me a hard time about, like: you're trying to create algorithms to solve social problems, and shouldn't that be something that is about human judgment? And some of my answers are: yes, of course, we're not trying to replace humans. We're trying to create tools that will augment the work of humans, not replace humans. Are we fetishizing technology? Maybe. I mean, even though I myself tend to like notebooks and albums. But, you know, look, I still am a guy who designs algorithms.

You know, I'm doing a project on addressing systemic racism around homelessness, and we're designing algorithms to do it, right? Certainly there have been criticisms about algorithms that have generated and perpetuated systemic racism, right? Especially in the context of, say, the justice space, of bail sentencing, I mean, bail-bond sentencing (I don't know if I'm saying that right; I'm probably not). But there is a very famous case, I think it was reported on in about 2016, '17, where there had been an automated system based on a machine learning algorithm where, essentially, if you were black, your chances of getting bail were radically less than if you had the exact same set of offenses

but were white. And so there were all these problems about racial inequality that were being essentially perpetuated by ML. And frankly, that can happen, right? I mean, we were looking at this in the context of this study that I'm doing, where we're trying to address systemic racism in the context of the Homeless Services Authority in L.A. You know, if you look at the data that we've got on existing housing resources, and you use the existing scoring tool, which is kind of a paper-and-pencil algorithm, right,

it's not terribly sophisticated,

there are some problems about racial equity in outcomes that you can observe in the data, in some of the data sets that we have. We have a 16-city data set of youth from across the country, and we can see there that African-American youth in particular are not doing as well in the outcomes of their placements. If you take an informal approach and just try to maximize the efficiency of the outcomes, based on trying to optimize who is going to get what, you can actually exacerbate things: the already bad racial inequalities can become even worse if you just do an out-of-the-box decision tree or a logistic regression sort of thing. But if you then do some clever algorithm design and try to impose constraints, where you insist upon fair distributions, you can actually rectify the situation. You can actually generate allocation

algorithms and tools that actually are fair and unbiased with respect to race. So, you know, you can use technology poorly and you can use technology well, right? And I think part of what makes the work that I've been involved in potentially powerful, in a good way, is that it brings together people like myself, whose background is in social work and who are really steeped in the context, really thinking about what the problems are to be alert to. And so, you know, we are asking things like: is there racial bias in these data that we need to maybe solve for, as opposed to letting it run rampant?
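[Editor's sketch] A minimal sketch of the constraint idea described above, under stated assumptions: with toy data in which historical scores systematically underrate one group, a pure "maximize predicted success" allocation skews toward the favored group, while a per-group quota (one simple fairness constraint among many possible) keeps the allocation proportional. The data, scores, and parity rule here are hypothetical, not the team's actual models.

```python
# A minimal sketch of fairness-constrained allocation of scarce slots.
# The toy data and the proportional-quota rule are illustrative only.
import math
from collections import Counter

def allocate_unconstrained(youth, n_slots):
    """Pure efficiency: take the n highest predicted-success scores."""
    return sorted(youth, key=lambda y: y["score"], reverse=True)[:n_slots]

def allocate_with_parity(youth, n_slots):
    """Efficiency subject to fairness: each group's share of slots
    matches (up to rounding) its share of the waiting population."""
    shares = Counter(y["group"] for y in youth)
    quota = {g: math.floor(n_slots * c / len(youth)) for g, c in shares.items()}
    chosen = []
    for y in sorted(youth, key=lambda y: y["score"], reverse=True):
        if quota[y["group"]] > 0:        # only take within the group's quota
            chosen.append(y)
            quota[y["group"]] -= 1
    return chosen

if __name__ == "__main__":
    # Toy data: biased historical scores systematically underrate group B.
    youth = ([{"group": "A", "score": 0.6 + 0.01 * i} for i in range(30)] +
             [{"group": "B", "score": 0.5 + 0.01 * i} for i in range(30)])
    for fn in (allocate_unconstrained, allocate_with_parity):
        picks = fn(youth, 20)
        print(fn.__name__, Counter(p["group"] for p in picks))
```

The unconstrained version allocates mostly to group A (the score gap compounds), while the quota version splits the slots evenly here, which is the "impose constraints on the distribution" move Eric describes, in its simplest possible form.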

Not to say that computer scientists can't be thoughtful. But, I mean, I've spent 20 years, or almost 20 years, thinking about homelessness and what the issues are that we need to be thinking about in that context. And I can bring a lot to the table for a modeler who may have been thinking about homelessness for a few months at this point. You know, like Phebe, who is working on this particular project with me at USC, Phebe Vayanos. I mean, she's been thinking about it since about 2017. So it's a few years.

But still, you know, I've become better about computer science, she's become better about social work; we learn a lot from each other. I mean, that's part of the fun. You know, I think one thing that sometimes gets lost in academia is that people are not always very joyful. It can be such a terribly serious business; we all take ourselves so freakin' seriously. And sure, you know, homelessness is a serious issue, right? Like, I take it seriously. I just don't take myself that seriously. And one of the things that's cool about the sort of AI for social good that I've concocted, where it's these collaborations, is, you know, we have fun. We get together, we work on things that are different from what most people in our respective disciplines are working on, and we try to keep a sense of humor about the fact that some of our colleagues are going to look at us like we're from Mars. Like, that's going to happen, and that's OK. And also to try to enjoy some of the confusion that we create. Because, you know, it is challenging to learn to talk to computer scientists when you don't have a computer science background. And it's challenging, I'm sure, to talk to a social scientist when you don't have a social science background. I mean, we all learn these very specific, very jargon-filled languages, and we've got all these shortcuts that we speak in. The three-letter acronyms alone are enough to drive a person crazy, right? So, yeah.

So on that, Eric, I'm really caught by this idea of joy, and also, it sounds like, some level of curiosity in this interdisciplinary space.

I guess the other thing that's stuck with me through this entire interview is when you started talking about how you got into this work and the population that you work with, especially homeless youth, and that concept of knowing the names of the people that you work with and them knowing your name. And I'm wondering, in terms of that personal connection, if there's any sort of story or memory from this work that has particularly stayed with you, that reminds you of that joy and that curiosity.

I mean, there's a lot; it kind of depends on which piece of this I'm thinking about. But I guess one story that sticks out is about the way in which algorithms can be smart in ways that sometimes even social workers are not. So one of my favorite stories to tell is about this one young man, from when we were doing the HIV prevention intervention. I had done an HIV prevention intervention a couple of years previous and had done the typical public health thing, which is, you know, you don't pick the most popular; you sort of hand-pick young people that you think will be good at this work, which usually means that they're prosocial in some sort of way that seems like they're adhering to the norms of mainstream adult society. Right. And it turns out that that's not always the best set of decisions to make.

You know, and what we found when we did this one algorithm was that algorithms will sometimes pick people that are unlikely candidates.

Right. So the algorithm picked this one guy, who I'll call Jeff. That's not his real name. And so Jeff got picked by the algorithm, and I hadn't seen Jeff for a few days at the drop-in centre. It was coming up on time; we needed him to show up for this training that we were going to do on Friday. It was a Wednesday afternoon, so I decided to go. The drop-in centre was by the beach, Venice Beach. So I went down to the Venice Beach boardwalk and headed toward the skate park, looking for this guy, because this is sort of one of the areas that he hangs out in, I was told. And I see him on this grassy area that's just before you get to the beach proper. He's kind of on this hilly area with the palm trees with a couple of his buddies, and he is literally passing a joint with his friends, right, as I walk up. Because, I mean, this is California, right? Like, you can get away with this kind of thing. And as I walk up to him, I think, great, this is going to go... I mean, so I'm like, hey, Jeff, what's going on? And he's like, hey, dude,

what's up? And I'm like, all right, hey, man, you know, so remember that study you signed up for?

And we said that a computer program might pick you, you know, to potentially be trained to be a peer leader. And he's like, oh yeah, yeah, yeah. So it picked you. Oh, sweet, dude. I was like, so do you want to do that? Oh yeah, yeah, yeah. And I'm thinking to myself,

is this guy even going to remember this conversation? I mean, he's not drunk, so probably he's going to remember it, but is he going to care?

So I say, all right, like, you know, the training is going to be on Friday morning. It's the day after tomorrow. It's going to be at the drop-in centre.

It's going to be at 9:00 a.m., you know, so can you be there? And he's like, oh, yeah, yeah, dude, for sure, for sure. And I think to myself, no way this guy shows up. So I show up at about quarter to, to set things up. I get out of my car, and I've got a whole bunch of stuff that I'm staggering out of my car with. There's Jeff on a skateboard, sitting in front of the gate, cup of coffee in his hand. Hey, Eric, can I help you get stuff out, help you unload stuff from your car? You know, and he went on to be like one of the best peer leaders we trained. Right.

So me, the, you know, despite the long hair, sort of square academic, was going to be like, whatever, this guy is not going to show up. And it turns out that I'm wrong, right?

You know, it's that the algorithm knew this is a guy who's connected. And it turns out that all he needed was something, somebody, in this case a computer program, to see him and say: you're important, man. Do you want to be a part of change in your community? And his answer was, yes, I do. And he was all in. And that's a really cool reminder that sometimes, you know, even our best intentions can go a little bit funny, and sometimes the solutions that we create are more impactful than we even realize they're going to be, right? And one of the things that was kind of cool about this program, I kind of joke, was that it was sort of like the Breakfast Club versus the high school football team. This is dating myself, right? So, yes, I was a teenager in the eighties. There was this movie, if you haven't seen it, you can look it up on IMDB, called The Breakfast Club. And in this movie, there's this group of five kids that are spending all day in detention.

There's a jock, a stoner, a nerd, a, you know, a goth, and I guess a princess.

And so these kids from these different social cliques all come together, they become friends for the day, and then at the end of the day, they kind of all go their separate ways again. And that's kind of the dynamics that we had with this algorithm: it brought in all of these young people from these different social cliques. We trained them up to be our outreach workers, and then they went back out into the world. And that's what the algorithm was pulling for: people that were important to these specific little communities. Whereas when we did the popularity version, which was the way that public health has thought you should do this for the last twenty years, what happened was we basically got what felt like the high school football team. In this case, it was like some skater bros. But basically what it was, was a bunch of guys that knew each other, and they had some hangers-on to their clique. But, you know, the information just never really spread outside of that core group very fast. And they brought all of their nonsense with them, right? So the trainings with that group were just so hard, because they were all over the place; there was all this sort of jockeying for their internal status hierarchies going on. Whereas when we had our little Breakfast Club there, they were there with the mission. It was so cool. And so algorithms do really cool things. And I think about some of those kinds of anecdotes about why I do this work and how it works. And honestly, it's amazing to see people thrive.

Right. Like, another one of these young people was one of the early peer leaders in this thing. You know, I ran into him on Venice Beach about a month or so before covid started. I just happened to be walking down the street and I ran into this guy. And, you know, he was currently working two jobs. He was stable. His life had turned around. And it was really cool. He was like, you know, your program was one of the first things that I got to do that really gave me a sense of purpose and meaning again, when I was experiencing homelessness, and it was so cool to be a part of that. It was one of the things that helped me. It wasn't the only thing for this guy, but it was one of the things that was part of his story that turned things around. And, you know, that's a really rewarding experience to have happen. It doesn't happen all the time, right? I mean, I can end up on Venice Beach many times and not run into any of these folks. But every once in a while, you know, it's a nice reminder, and that's, I think, some of the joy of it. So I think AI for social good can actually be a very human experience, you know, especially if you go to the street level, where you actually implement some of these solutions in the real world and not just kind of do them as hypothetical desk exercises.

Yeah. Thank you for sharing that story. I think it's a really good symbol of how meaningful this work can be. And kind of switching directions a little bit here as we near the latter half of the interview: as you know, Eric, you're on the Radical AI podcast, and we were briefly mentioning this earlier to kind of foreshadow the question that we ask all of our guests, which is to ask you how you define the word radical, and if you situate the work that you're doing within the radical AI definition that you have.

Well, I did mention that I was a child, a teenager, in the 80s, right? So, I mean, radical is something that was radical then; it's like slang for "it's cool."

But no, I do think of what we do as radical. I've always been a fan of the line of social work that's part of the Saul Alinsky community organizing school. You know, he has this great book, Rules for Radicals. And while he's not a social worker in a traditional sense, he's certainly somebody that social workers, you know, read and point to. And social work itself is a pretty radical discipline, in the sense that, if you think about radical as being on the sort of more extreme side of the left of the political spectrum, right, that's where a lot of social work sits. And for me, the AI that we do is very much about social justice. So it's about trying to make, if we're lucky, radical change, and AI can help facilitate that. Now, probably the best example of that is this project that we're doing on systemic racism, right? I mean, we understand that housing insecurity, homelessness, is something that happens especially to black Americans, in ways that are deeply tied into the four-hundred-year history going back to the first Africans brought over as slaves, and a series of laws.

I mean, one of my PhD students, who just graduated, has a chapter in her dissertation.

If you go look it up online, it's about what she calls the algorithm of black homelessness. And she uses that essentially as a play on words, but also as a way of thinking about the fact that there has been a systematic series of policies and laws that have created disparate outcomes for black Americans experiencing homelessness. And so we're trying to design algorithms that could be used to combat that long history of systemic racism. That's a pretty radical idea. I mean, if you think about systemic racism and the protests that are happening in the streets around Black Lives Matter, you know, some academics designing some algorithms is not the same as thousands of people marching in the streets. But we are interested in trying to change and address systemic racism with that project. That is absolutely one of the goals of this work. And, you know, it's kind of an interesting moment in time that I feel that I'm even allowed, within the academy, to say that I have a project that is about addressing systemic racism, and that people, you know, A, understand what that means more so than they used to, which is pretty amazing, and B, that I'm allowed to do that.

And, like, I keep my job, because that's one of the benefits of having tenure: freedom of speech. But it is important, when addressing issues of social justice and social injustice, to be plain and direct, and not to obfuscate the truth, right? I mean, if what we're talking about around homelessness is that there are issues of systemic racism in the history of America that have led to inequalities for black Americans around homelessness, this is something that should be talked about, because it's not going to be fixed if we don't address it. Just talking about it is not going to fix it either. We have to actually design new systems. We have to create new laws. We have to create new policies. You know, these algorithms could potentially be tools that could be implemented in such systems. But the dialogue is a part of it as well. Right. So, you know, it's radical, too.

As we look towards closing out the interview, we like to ask our guests for pieces of advice. A lot of folks that listen to the show are really struggling with how heavy these topics can be, and struggling to find hope and joy in this. And I'm wondering, maybe if you have a colleague in mind, or someone who's working in industry, or a student: what would you say as a piece of advice to help them reconnect with that joy?

Sure, sure. I mean, I think.

I've always been a fan of Viktor Frankl. He's an existential psychologist, and he has this book called Man's Search for Meaning. And in it, one of the ideas is that part of what human beings are striving for is a sense of meaning and purpose in their lives. This is one of the fundamental things that we need to do as humans: find meaning. And I think that resonates for me. And I think part of my joy in this, when I tell stories about these young men who experienced homelessness, or the young women who experienced homelessness, that I've known in my past, is that I try to focus in on the wins, the successes. And I also realize that working toward social justice, working on problems that are difficult, provides a sense of meaning and purpose in one's life. And I think that there is joy to be derived from living a meaningful life. It's maybe not, you know, a dopamine-high kind of joy. It's more of a contentment sort of joy, right? It's more of, you know, not to be too spiritual, says the hippie, but I think it's a joy in knowing that you're walking a path, that you are in harmony with other people and that you're in harmony with the world, and that you're trying to be a part of solutions. And all of that certainly feels good. But it can also be overwhelming at times to work on issues of social justice and social injustice. And it's the sort of thing that can easily bring one to tears of frustration as well as tears of joy,

if you really think about these things deeply.

And that's OK, though. You know, I think sometimes in American culture and society we're always kind of looking for a quick fix, a quick happy. I think this is part of why we have, you know, some of the world's worst rates of substance abuse, et cetera: we're always looking for these quick highs, these quick fixes.

We're not necessarily willing to put in the work, or wanting to think about the long-term, you know, joys, the engagement that comes through a sense of meaning and purpose.

And I think that a life where you work on problems of social injustice, in my experience, is one that is very joyful: in having meaning, in having friendships, in having colleagues that you can work sort of hand in hand with.

I think that the competitive aspect of academia is really lessened in these spaces where social justice is so prevalent, because the concern to do the right thing sometimes is more important than who gets to be first author, or who's getting another grant, or who got published where. And you know, that's nice, because really, if there are some homeless youth whose lives are made better because of the work that I do, that's awesome.

If I publish another paper in a journal, I don't know that that matters that much, you know? And that, you know, isn't always the most popular thing to say as a professor, right? I'm supposed to say it's really important to publish and win grants, because those are the markers of success and achievement in the academy. And it's like, yeah, those things are good. Those things are important. Those things are good for career advancement. But, you know, those things may not unto themselves bring you a sense of purpose and joy in your life. And, you know, this is it, guys. This is the human experience as we know it. I mean, if there is something else after this, like, we don't know what it is exactly. So, you know, be here, and be as present in this here as you can. Those are my thoughts about it. But now I'm devolving into being a hippie again.

So, Eric, thank you so much for that healthy dose of perspective, especially coming from fellow academics. And thank you for sharing advice about joy and the meaningful quality of your work. So thanks for coming on the show. It's really been a pleasure.

Oh, thank you for inviting me. I wish you all the best with this, and keep pushing for more radical AI. I think it's a great thing to do. It's been my pleasure.

We want to thank Eric Rice again for joining us today for this wonderful conversation. My first reaction to this interview was a bit of conflict, actually, and this is something that I've felt a lot when it comes to AI for social good initiatives. Because on the one hand, I see so much potential for amazing results, and I see so much potential for really meaningful work, like Eric was explaining through his firsthand experiences. And then I also read papers and see talks that are being given by people like Deborah Raji, who are explaining some of the potential harms of things like technical solutionism and techno-chauvinism, and of people who think that technology can be the solution for everything. And I don't think Eric is one of those people. I think that he is actually a good example of less harmful ways to implement AI for social good. But I do think that it is a fine line that we have to balance, and we have to ask ourselves: when can we leverage AI for different ways of solving societal problems, and when are we maybe exploiting technology for things that could be more harmful rather than beneficial, when we place them out into society and just hope that they might be able to solve something that they have no place in solving in the first place?

I think, and I know we're probably getting close to the time here, but one thing I just want to throw into the mix is the role of ego in all of this, and like this Enlightenment ego especially of: well, we know we can fix it, we can fix it all. Like, I have the power; like, I'm going to make nature bend to my command and make the social structure bend to my command through this, like, technology as a tool. And I think Eric's work is a really good example. Especially, like, I'm just so touched by his stories of how he knows everyone he works with, right? Or most of the people he works with, like, on a first-name basis. It's just incredible to me, because I realize how little that can happen in research spaces. And so that question of humility, but also, like, as researchers, as scientists, as people dealing with technology and designing technology: how do we treat our own ego?

And is it possible that sometimes the ego grows a little bit larger than the task that we're trying to complete or the problem that we're trying to solve? But I think that's all I can say on that for right now.

For more information on today's show, please visit the episode page at radicalai.org.

If you enjoyed this episode, we invite you to subscribe, rate, and review the show on iTunes or your favorite pod catcher. Catch next week's episode on Wednesday, and join our conversation on Twitter at @radicalaipod. As always, stay radical.
