Predicting Mental Illness Through AI with Stevie Chancellor



How is AI used to predict mental illness?

What are the benefits and challenges to its use?

In this episode we interview Stevie Chancellor about AI, mental health, and the benefits and challenges of machine learning systems that are used to predict mental illness.

Stevie Chancellor is an Assistant Professor in the Department of Computer Science & Engineering at the University of Minnesota - Twin Cities. Her research combines human-computer interaction and machine learning approaches to build and critically evaluate machine learning systems for pressing social issues, focusing on high-risk health behaviors in online communities.

Follow Stevie on Twitter @snchancellor

If you enjoyed this episode, please make sure to subscribe, submit a rating and review, and connect with us on Twitter at @radicalaipod.


Relevant Resources Related to This Episode:

Need help?

Suicide Prevention Lifeline

Suicide Crisis Lines for different countries

National Eating Disorder Helpline

Advice:

  1. Talk to a trusted friend or colleague

  2. Find a support person outside of your social group who can hear your feelings if you are worried your feelings might alienate you from your groups

  3. r/suicidewatch can help you talk to people who are trained in suicide crisis support

Resources from the episode

Stevie’s Personal Website

Methods in predictive techniques for mental health status on social media: a critical review

Who is the “Human” in Human-Centered Machine Learning: The Case of Predicting Mental Health from Social Media

Fairness and Abstraction in Sociotechnical Systems (The Solutionist Trap)

Inside Facebook's suicide algorithm: Here's how the company uses artificial intelligence to predict your mental state from your posts


Transcript

This transcript was automatically generated by Sonix speech-to-text software and may contain errors.

Speaker1:
Welcome to Radical AI, a podcast about technology, power, society, and what it means to be human in the age of information.

Speaker2:
We are your hosts, Dylan and Jess, two PhD students with different backgrounds researching AI and technology ethics. In this episode, we interview Stevie Chancellor about AI, mental health, and the benefits and the challenges of machine learning systems that can predict mental illness. Throughout the conversation, we cover topics like mental illness and suicide, so if you're looking to stay away from those kinds of topics, this might be a good episode to skip.

Speaker1:
Stevie Chancellor is an Assistant Professor in the Department of Computer Science and Engineering at the University of Minnesota Twin Cities. Her research combines human-computer interaction and machine learning approaches to build and critically evaluate machine learning systems for pressing social issues, focusing on high-risk health behaviors in online communities.

Speaker2:
And without further ado, today we're just going to get right into it. We are so excited to share this interview with Stevie Chancellor with all of you.

Speaker1:
We are on the line today with Stevie Chancellor. Stevie, how are you doing today?

Speaker3:
I'm great. How are you doing, Dylan?

Speaker1:
Doing well, doing well and again, welcome to the show. And today we're talking about human centered machine learning and mental health and artificial intelligence. And to orient us into this, I believe you have a story to possibly share with us.

Speaker3:
I do. So I do research on mental health, but I also, in my spare time, moderate an online community where the topic sometimes crosses over into mental health. One time as a moderator, just a couple of years ago, we had somebody come into our community who was really clearly struggling with disordered eating. It's a women's community, and so we see this happen fairly frequently and we have resources, but this person really was adamant that our community was the safe place for them to be. The moderators, there's a team of us, struggled with how to handle this. You know, we didn't want this person to use the community as their diary, because that could be really triggering for people who come to our community and don't expect to see content about eating disorders. And the other thing we were mindful of is we wanted to be a supportive space for this person as they navigated their journey. We ended up deciding that we weren't going to be able to provide the support that this person needed, between not knowing how to medically handle their issues, when to escalate these kinds of questions, and who's responsible if they start, you know, pushing perspectives that may be damaging to others in the community. And we ultimately decided to issue them a temporary ban because of how adamant they were about posting in our community. And that really left me feeling quite puzzled about how to handle people who talk about really challenging mental illnesses on social media, and how even moderators, and I've been moderating for three years, still don't know how to handle this. And this person unfortunately never came back, even though we told them we wanted them to come back when they were in a better headspace. And I wonder sometimes, where are they now? How are they doing?

Speaker1:
I'm wondering if you can give us a sense of the space of mental health and AI, and mental health and machine learning. There's so much to it: there's moderation, there's design, and all of that. For folks who may be new to this space, can you orient us to some of the major topics or maybe research questions that you ask?

Speaker3:
Yeah. So the way that I think about my approach is: how could you use social media data to help infer the status of people as they go onto online communities looking for support and assistance with their mental health? Now, I focus specifically on a kind of narrow subset of this, which intersects with a lot of different issues that we see in online communities and social media, which is high-risk mental illnesses and behaviors related to high-risk mental illnesses. So that includes things like suicide crisis, pro-eating-disorder or disordered eating behaviors, and self-injury and self-harm. Now, these are sort of the most, I would say, clinically severe manifestations of symptoms; a doctor would probably get worried if they knew that their patients were actively engaging in self-injury. But these issues intersect across a wide variety of things that play out in social media platforms, like moderating this content. Where are acceptable spaces to have these conversations? How do you provide support to somebody who's in crisis, and how do you do so in a way that is respectful of the people who have to look at this kind of content day in and day out? It's very emotionally taxing, and moderators and community administrators are often the first line of defense against this kind of content when it appears in places like Instagram or Tumblr. So lots and lots of intersections here. So I try to get at the technical side, as well as all the kinds of complexities that come along with it.

Speaker2:
Let's talk about that technical side for a second, because you mentioned that you use social media data to infer the mental health status, and I'm assuming inference here means predicting. Is that fair to say?

Speaker3:
Yeah, in this case, I think inference is a prediction. So I'm a computer scientist by training. My doctoral degree is in computing, and I have spent the last seven years building ML, or machine learning, systems to predict when somebody may be engaging in or discussing these high-risk behaviors, these dangerous behaviors that I've mentioned before. And a lot of that work has focused on general-purpose social media sites, so places like Instagram, Tumblr, Reddit, where people will go and seek support when they're struggling with what are really complex emotions and behaviors and feelings that they may feel almost isolated by. And they don't feel like they can reach out for help, either from a doctor or from close friends, because of how shameful or stigmatizing these behaviors are. So most of my work is in developing these machine learning systems that can make these kinds of guesses about when the severity of somebody's illness may be escalating, or the difference between, say, somebody engaging in healthy weight loss behaviors, where they're trying to lose weight because they have health-motivated goals, versus somebody who may be using weight loss as a more subversive technique in a repertoire of disordered eating symptoms.

Speaker1:
I'm curious how we do that well. So I imagine, with everything that has happened the last few years with the headlines around, you know, predictive AI around race, gender, and other forms of identity, it seems easy for these systems to overpredict, or predict in such a way that it causes harm to the individual instead of helping the individual. And I'm wondering how we can move towards designing these predictions around something as sometimes nebulous as mental health in a way where we can better ensure that it helps the user.

Speaker3:
That's a really good question, and I think one of those wicked problems in this space, because oftentimes when I talk to people who are struggling in interviews, they're like, I know I need help, but I can't get myself the help that I need. Now, that's the biggest and best opportunity for us to make an intervention or a nudge or some kind of engagement with somebody: when they're at the place where they know they need help and want the help. But there are also a lot of people who very reasonably don't want to have patronizing messages from Instagram or Facebook that say, Hey, we see you're not doing well, are you sure you don't want to call the suicide hotline? And so I think that one of the challenges in this space is identifying people who are at a place where you could make a guess about their status, whether that be increasing or escalating in severity, and leveling with people that you may be available to help them, before you even get to the place of nudging them or knocking on their proverbial social media door.

Speaker3:
And then there's also the constellation of what even that intervention should look like. So the vast majority of social media sites, when they think that you're engaging in behavior that they would prefer not to see on the platform, will give you little nudges: Hey, do you want to call a hotline? Here are some resources about who you may reach out to to feel better. And the vast majority of the research that's been done, it's complementary to my own, though not done by me, has shown that people actually find that really patronizing and that it doesn't meet them where they're at. And so this whole complicated system, this web of when do you reach out, how do you talk to people, how do you even know what to say to somebody, is an issue I dealt with as a moderator. I still don't know the best ways to talk to people or engage with them. And now we're kind of outsourcing that decision making and those interventions to AI systems. That gets really hard, really, really fast, and I don't even know if we know how to do it right yet.

Speaker2:
Well, that's kind of what I'm wondering, because I imagine as a human moderator, if you see a message from someone who's clearly in distress, and they are obviously asking for help, or, as a human, you can read between the lines and recognize that they want help, it's easier as a human to be able to identify that from text. But I imagine as an AI system, it would be really hard to get that granular and to understand the complexities of human language in a way where you can give help to somebody who wants it, but you can recognize when someone who maybe needs help doesn't want it, and so you know not to intervene. And I'm just wondering, since you've worked on these kinds of AI systems before, how have you in the past tried to tackle that problem?

Speaker3:
Oh, I love your analogy here: AI systems don't do a good job at reading between the lines. All they see are the literal things that you say, and even the best systems may have richer data, like the photo that you share on Instagram or the tags, the time, the metadata, kind of the associated things you put in addition to the text. But even then, we know in the machine learning community it's really hard to detect humor and sarcasm and turns of speech and other kinds of more subtle forms of communication through text, let alone the problem that text is not nearly as communicative as voice and seeing people in person. I think the best strategy for dealing with this is, when you've got high-stakes decision making, whether it be deciding that somebody needs to be nudged about their well-being or a post needs to get taken down or other high-stakes AI decisions, humans have to be deeply involved at all stages of the process, because otherwise you miss those opportunities where subtlety, context and nuance, and honestly contradictions sometimes come into play. So for instance, one of the things I worked on previously, when I was an intern at Tumblr, was building a moderation detection system to help mods as they were going through pro-eating-disorder content on Tumblr. Now, for reference, pro-eating-disorder content actively promotes eating disorders as a lifestyle choice. It's distinct from people who are struggling with an eating disorder who are talking about what's going on but aren't advocating for it. That's pretty subtle. One of the challenges is that Tumblr bans pro-eating-disorder content but allows people to have conversations about eating disorders in general, because they want to provide a supportive environment for people who are struggling to go through that journey together. Building a classifier to distinguish between the two of those is really difficult.

Speaker3:
And so one of the things we confronted at the beginning of the development of the system was, OK, how do we build a system that actually doesn't take agency away from people in making those close calls? And so one of the things we did is we said, OK, instead of a two-tier system, this breaks the rules or this doesn't, we're going to have low, medium, and high risk that we perceive this might break the rules. We're going to show moderators the things that we label as medium or high risk, and then the mods themselves can evaluate that. Now, one of the things here, and I guess it's not a surprise, is that mods are already overwhelmed by the amount of content that pops up in their queues. And so by adding another layer of AI labels on top of this, we've now added a huge pile of posts that they have to go through. Plus, this content is pretty emotionally disturbing and graphic and can be really burdensome for moderators to have to deal with on top of looking at copyright violations, other graphic content, hate speech, racial slurs, all of that. And so we ended up deciding in the design process to not implement the system live because of these kinds of concerns.
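To make the kind of triage Stevie describes here concrete, below is a minimal sketch of a three-tier risk pipeline. This is an editorial illustration only, not the actual Tumblr system: the thresholds, the field names, and the assumption that a classifier outputs a single violation probability are all hypothetical.

```python
# Hypothetical sketch of three-tier moderation triage (not the actual Tumblr system).
# A classifier score (assumed probability of a policy violation) is bucketed into
# low/medium/high risk, and only medium- and high-risk posts are routed to human
# moderators, who make the final call.

from dataclasses import dataclass

MEDIUM_THRESHOLD = 0.5  # illustrative thresholds, not tuned values
HIGH_THRESHOLD = 0.8

@dataclass
class TriagedPost:
    post_id: str
    score: float    # model's estimated probability that the post breaks the rules
    risk_tier: str  # "low", "medium", or "high"

def triage(post_id: str, score: float) -> TriagedPost:
    """Bucket a model score into a risk tier instead of a hard allow/remove decision."""
    if score >= HIGH_THRESHOLD:
        tier = "high"
    elif score >= MEDIUM_THRESHOLD:
        tier = "medium"
    else:
        tier = "low"
    return TriagedPost(post_id=post_id, score=score, risk_tier=tier)

def moderator_queue(posts: list[TriagedPost]) -> list[TriagedPost]:
    """Only medium- and high-risk posts reach the human review queue."""
    return [p for p in posts if p.risk_tier in ("medium", "high")]
```

The design choice this sketch illustrates is the one Stevie emphasizes: the model narrows what moderators see, but the judgment on borderline posts stays with people, and even then the added review load can overwhelm them.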

Speaker1:
One thing I'm struck by is just how many stakeholders there are in this. I mean, you have a medical side, you have a moderator side, you have a user side, and there are so many different users as well in the space. How can we design systems that take all of those moving pieces into account, and can we? Because, like you're saying, these are also very sensitive topics. There are real impacts to how we design these systems.

Speaker3:
Yeah. So I think that if you're going to use AI to be involved in high-stakes decision making, the only way you can move forward is if you try and center the voices of the people who are involved. And historically in AI, we tend to look at the people who use the outputs of the AI system, not the people who are most dramatically impacted. And I think this has played out a lot in facial recognition technology research, which has shown big gaps in predicting on non-white and non-male faces in datasets. And for me, the way that I've been thinking about this, and this is sort of a new line of research, is how do you solicit people's opinions and preferences about what the AI system should even do in the first place? So I've got a research project that I'm working on right now with a group of people who sit in these stakeholder positions when it comes to mental illness communities. So there's a doctor on the team. There are a couple of people who've struggled with mental illness, and specifically struggled with the illness that we're focusing on.

Speaker3:
There are computer scientists and an HCI researcher. And what we're trying to do with this is ask what the problem task should even look like, right? Assuming that the problem should be, let's predict if something breaks the rules, is an assumption about what the important issue is to solve and how you take that issue, that social conundrum, and operationalize it for machine learning, right? We're taking even, like, step zero: what should the problem statement even be? And it turns out that the vast majority of the computer science research, my own research included, assumes that the right way to do this is to identify those high-risk behaviors immediately. And our goal is to design workshops and other opportunities that center the perspectives of people who will be directly impacted by these algorithms, to figure out: if we build something for you that would help you when you need help, when is that opportunity and what would that look like? And then bring in other stakeholders on top of it.

Speaker2:
You mentioned that this kind of ideal system would center the perspectives of people, and I think that's a perfect transition into human centered machine learning, which is something that you have a long history being an expert in. And so I'm wondering if you can maybe define what that is and also maybe tell us what human centered machine learning isn't just so we can specify the difference between the two?

Speaker3:
I think that's a great question, and I get this question a lot about what counts as human-centered machine learning. I didn't even call the work that I did human centered until maybe three or four years ago, when I was trying to conceptualize a process for the way that I think about problem solving. And so for me, human-centered machine learning is a set of practices and a mindset that you adopt in trying to use machine learning as a tool to solve socially challenging or pressing problems. Now, the way I think about that has two pieces, and they're equally important to each other. Obviously, the machine learning side implies technical innovation and building robust and accurate systems. And so in a lot of the work that I do, I care a lot about getting the answer right, because if I get it wrong, there are social consequences to a misevaluation of somebody's well-being state and to downstream uses of a machine learning model, whether that's an intervention or a non-intervention. But at the same time that I care about the technical side, I really care about doing right by the people that the intervention applies to, or that the machine learning system will affect in a broad sense.

Speaker3:
And so that's led me to, you know, having questions about the ethics of deploying automated AI tools to assess suicide risk, or what interventions look like that would respect the privacy and agency of people to talk about these things, while also being aware that there are larger effects of this behavior in online communities that might cause contagion-like impacts; for suicide crisis in particular, this can be pretty pernicious. And so how do you evenly and fairly balance these in building the system? Which gets at this idea that the only way I can think to do this is to center people, or humans, in the whole process, from the start all the way past the end. And I say past the end because ethics and governance through moderation are not necessarily the domain of ML specifically, in terms of a technical contribution that a computer science audience would get excited about, but they're really important to making sure that the system we build is respectful and that we meet people where they're at. I guess I didn't answer what human-centered machine learning is not. That's such a hard question. I get asked a lot of questions about what counts. So I get, OK, if I study human data, does that count? If I do work in a human-focused domain like health or law, does that count? OK, if I talk to people, does that count? And I want to say that those are necessary but not sufficient pieces of making sure that your mindset and the approach are in the right place, right? It's almost about where your heart and your head are through the whole process.

Speaker3:
But to have a mindset and a set of practices that guide your approach means that the person and the outcomes are taken to be as important as the technical innovation that you may be able to deploy. And so in some cases, a human-centered approach would say maybe AI isn't the right way to solve this problem; we need to take a step back and see if there are other non-technical interventions or solutions that may actually address the issue better. And that humility is, I think, a good way for me to at least identify that somebody is willing to be challenged about the assumption that the technical side is necessary.

Speaker1:
I have a vested interest in your research as someone who studies grief and loss and processing in online communities, and one of the, I guess, nuances that has shown up in my own work is data collection: how you collect data in online spaces for some of these sensitive topics and, again, how you do that well or ethically. And I'm wondering your thoughts on that. I guess, what is your process for data collection, and are there ways that you think it could be applied in maybe other modalities or contexts?

Speaker3:
Yeah. So one of the things I think is really important with data collection and communities in general is respecting the community and the things that they want you to do, and if they tell you no, backing off. So there are some really good examples from prior work where somebody just wants to do a field study about a community, and the community is like, we don't actually want researchers here; this is not the space for you. And, you know, I think that's a really important perspective to take, because if the community has told me they don't want me around, then as a good person and also as a good researcher, I don't think I can justly do that kind of work when I'm in a space where I'm not wanted. That being said, not being wanted on Instagram, where there is kind of an amorphous community of people who use hashtags together, I don't even know what it would mean for me to be, quote unquote, kicked out of Instagram as the community. And so the way that I try and approach my data collection is to try and best respect people and their data without stepping on their toes, where I'm meddling in or participating in their life. Part of that would be, you know, only gathering data that's publicly available, and that means data that hasn't been removed or taken down. Eventually, you know, sometimes people decide later they don't want to share this information, and I try and go back and find whether or not the user has decided to delete it.

Speaker3:
If they did, great, I take it out of my dataset. But that also means that when I gather data, I'll tell you the strategies that I used, but I don't share my dataset, because of that risk of the data being taken down and the privacy that is expected in the context of sharing on Instagram. I also try and do things like not publishing people's names and removing those from datasets, removing locations and other personally identifying information that, if it made its way into my models, could risk, and has risked before, identifying somebody in my systems. And I don't want that to happen. That's really risky for me, and I also realize that it's not fair to them. And so there's this tension, I think, in gathering public data about mental illness: even if it's on Instagram and Reddit, where it's technically public, people don't expect it to be used in research. And the best way that I think researchers can move forward with this is acknowledging that as a complicating factor, considering the situation and the stakes of what happens if the research or the data is released in ways that are unflattering to the people, and doing your damnedest to try and protect people before these things even happen.
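As an illustration of the data-handling practices Stevie describes, here is a minimal sketch of a de-identification and deletion-respecting preprocessing step. The field names, the still_public check, and the regex patterns are hypothetical assumptions for this sketch, not her actual pipeline.

```python
# Hypothetical sketch of de-identification and deletion checks for public social media data.
# Field names, the still_public callback, and the patterns below are illustrative only.

import re
from typing import Callable

USERNAME_PATTERN = re.compile(r"@\w+")      # e.g. "@someuser"
URL_PATTERN = re.compile(r"https?://\S+")   # links often point back to identifiable profiles

def scrub_text(text: str) -> str:
    """Strip usernames and URLs from post text before it ever reaches a model."""
    text = USERNAME_PATTERN.sub("[USER]", text)
    text = URL_PATTERN.sub("[URL]", text)
    return text

def prepare_dataset(posts: list[dict], still_public: Callable[[str], bool]) -> list[dict]:
    """Keep only posts the author has not deleted, and copy over non-identifying fields only."""
    cleaned = []
    for post in posts:
        if not still_public(post["post_id"]):  # the author removed it; respect that choice
            continue
        cleaned.append({
            "post_id": post["post_id"],        # internal ID only; no username or real name
            "text": scrub_text(post["text"]),
            # location, display name, and other profile fields are intentionally not copied
        })
    return cleaned
```

The still_public check mirrors the point she makes above: if someone later deletes a post, it comes out of the dataset before any modeling.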

Speaker2:
It sounds like a lot of the processes and the attitude that you have, going into these kinds of questions that you're asking and the ways that you're solving them, are very responsible and very respectful of the people and the stakeholders who are involved, probably maybe in part because you're using public data. And I'm wondering if everyone is being this respectful. I think earlier you mentioned that some social media platforms are already making some of these predictions and putting things like suicide hotline numbers in front of users. What are they doing to make these predictions and to display this information? Are they doing human-centered machine learning? Are they also being respectful? What is the status quo in industry right now?

Speaker3:
So I know the status quo in research very well, because I've done a series of research studies on the practices of machine learning researchers who predict mental illness from social media data. The short version is, I think everybody in this space is really, really well intentioned and very genuine. People are strongly motivated to help people feel better. Nobody's coming at this from an exploitative perspective of, I need to get more papers or I need to make sure that my models are robust, because the space is just so complicated. And I think that the intentions are really articulately laid out, and I've shown in some prior work that these intentions of wanting to center and help people are good and true. I think the challenge comes in translating those intentions to practices. So machine learning for mental illness is a really new area. AI has only gotten very exciting and hot in the last 10 years, and we don't have great ethical practices or standards within the community that say, hey, we should do it this way, or we shouldn't. And so I try and take almost conservative approaches to the ethics, because I don't want to hurt my participants or hurt people in my datasets. And I also want to be mindful that I can't foresee all the consequences that may come out of their inclusion in my research, and to protect them as much as I can.

Speaker3:
I have to take conservative decisions that may run against values that are commonly held within the ML community, like values for reproducibility and benchmarking, because I don't release my datasets. So you can't benchmark against Chancellor et al.'s high-risk mental illness severity system with her ratings of one to three. You can't do that, and that's harmful to science, but I think it's the best trade-off for not knowing how my work will impact people in these really vulnerable positions. In terms of industry, I think people have awareness that companies like Facebook can use your data to make predictions for some things, like advertising or making you recommendations of groups to follow, because we see and are exposed to ads and group recommendations and friend requests and all of these kind of procedural parts of Facebook's interface that we interact with when we go online. I don't know that most people realize that Facebook, behind the scenes, has been doing suicide prediction for about two years at this point. They introduced suicide prediction, I think, in part because people were going on the platform and using Facebook Live to publicize their suicides, which is horrific and tragic and very disturbing if you happen to be privy to that.

Speaker3:
And I one hundred percent understand not wanting that kind of live feed to pop up on your platform, because it could be seen as Facebook tacitly allowing that kind of stuff. And so Facebook, behind the scenes, uses your data across its site to predict if you might be at risk of injuring yourself, and tries to make a nudge or an intervention if they believe you may be at risk. And there's a whole host of blog posts about that that Facebook publicized, because they worked with, I think, one of the national suicide prevention organizations in the U.S. I don't think people realize that their Facebook data, like when you go onto Messenger, is being mined for hints that you might be at risk of injuring yourself. And I think informing people of that is important. But it's also really complicated, because as soon as you tell people you're surveilling them for something, they're going to change their behavior and probably move off of Facebook to somewhere else where they can talk about it without being bothered.

Speaker1:
So I'm just speaking for myself here, but I'm going to go out on a limb and say that the Western world especially is still in the process of figuring out what to do with mental health, especially in institutions. And so, as we saw with, you know, the Gender Shades article, or a lot of the work that Timnit Gebru did, or Benjamin, or many others, right, when we're trying to build algorithms to quote unquote solve social or human problems, sometimes it's really difficult when those human problems, like racism or sexism, are still so pernicious in society. And I'm wondering, for mental health, is that something your work has been in dialogue with: that division between this social problem that we still really don't know what to do with, and then trying to find some sort of technological solution or infrastructure to address it?

Speaker3:
Yeah, I wish you could hear me smiling through audio, because this is one of the stickiest things I deal with. I have a clinical collaborator who once told me, Stevie, you know that mental illness is like a hundred and fifty or two hundred years behind other illnesses, and not because we don't care about it. It's because our standards are just so far behind. We don't in some cases know, or have consistency in, how we apply certain terms or diagnoses to people, because of the complex biological, physiological, and social, sort of, you know, larger ecosystem of things that impact people's mental health. So we're really far behind in diagnosing it, let alone that there are just no great standards for how you should react in person if somebody comes to you and is like, hey, I've been really struggling, I've been thinking of taking my own life. Now, most of the time it's actually not that explicit, but most of us don't even know how to react in that case, and there are better ways and worse ways to react when somebody shares these kinds of disclosures with you. You know, we've for a long time stigmatized people for having mental illness, let alone disclosing it. And it was only, I think, in the last couple of revisions of the Diagnostic and Statistical Manual of Mental Disorders, which is sort of the king of the diagnostic criteria for mental disorders, that we said that being gay was not a mental disorder that could be cured with conversion therapies, right? And so everything's still evolving. The reality is that we don't know what to do with our social responsibility, even if we found somebody and with one hundred percent accuracy we were positive this person is a risk to themselves.

Speaker3:
What do you do? Whose job is it to intervene if you find them on Facebook and they're struggling? You know, I work with doctors who, when they believe that somebody may be struggling and they see them in person, have an ethical duty that they're bound to, to intervene. They don't even know how to handle the social media data, where people in my datasets are willingly disclosing distress and despair and grief and crisis, let alone how I, as a computer scientist, am supposed to intervene. So the strategy that I try and use is to keep people in the loop with the whole machine learning process, because I would hope that a team of doctors, of concerned people who care about these issues, would be able to at least point out, whoa, whoa, whoa, we are way in over our heads, because I don't even know, if I found this person myself skimming through social media, that I would know how to respond. What should we do, and what should the obligation of Facebook or the AI system or whoever be to intervene? And we need to have those kinds of dialogues before we start wantonly building AI systems that solve social problems that, before AI systems, we didn't have a solution for in the first place. And that overeagerness to deploy technical solutions to try and solve social problems is admirable, but at some threshold misguided, because we just don't even know what to do in the first place, without the AI intervention.

Speaker2:
You mentioned this ethical duty to intervene, and it made me think through the non-digital alternative to these machine learning systems. I imagine if a close friend or family member came to me and disclosed that they were struggling with suicidal ideation or some sort of mental illness, I would feel a personal duty to act on that and support them and help them in whatever ways that I could. But now, when I take that to a technological system, maybe it's someone that I don't even know, either as a content moderator or even as, like, an AI model or algorithm or, you know, machine, and I am gathering all this data and I have this impersonal connection. Do you think that these online platforms have a duty to intervene in the same way?

Speaker3:
Oh man, I have no idea what the answer to that question is. It's so good. So one of the examples I think really captures this is, let's say that I'm scrolling through my Instagram feed and I see a friend, though not a close friend or family member, who's really struggling. And that's something that's actually happened to me, both as the moderator, where I see somebody struggling in the community that I care about, but also with people I know through Facebook that I'm friends with. They'll post about struggling with depression and anxiety, and maybe sometimes I'm worried about their well-being based on something they say on the platform. I would not be surprised or judge anybody if they just scroll on by, because what are we supposed to do in those situations that would actually make a difference? This person may be two thousand miles away because they're a high school friend. I haven't seen them in six years. I don't know where they live, let alone the context in which they shared a single post on the platform. And then I, as a person, may also have just bystander effects, where I feel like I can't intervene, or somebody else who's closer to them will, you know. And the issue about the intervention, with Facebook being responsible, I think, is even more complicated, because that is moving away from what we know most strongly from the research: that the best and strongest interventions for these kinds of behaviors come from friends and family.

Speaker3:
They're not coming from a dispassionate person that is really far away and distant and doesn't get the context and doesn't know the person, because people's needs are wildly different, right? Somebody who's struggling with, say, job loss that's causing them to be suicidal has a totally different set of perspectives than if they're suicidal because of depression, for another reason, or because they just struggle with depression as an ongoing presence in their life. The job loss versus the depression example are really good instances where you'd want to take different approaches to trying to help them, and saying that Facebook should have kind of a one-size-fits-all solution that works in these interventions is, I think, almost unfair to a company that is trying to do its best to provide places for people to talk. And we don't even have those kinds of obligations set up for organizations outside of the digital space. I wish there were better ways for us to think through that, and I think the only way we're going to be able to think about who's responsible is through better dialogue and more trustworthiness with places like Facebook. I think there are issues in people just trusting Facebook to do the right thing, and I don't even know what the right thing is in those scenarios. So, you know, that's a whole can of worms.

Speaker1:
What you said about the research showing that the best and most holistic interventions come from friends and family, and it sounds like the local community, that seems to be in disagreement or contrast with the lifeblood of Silicon Valley, which is scale, scale, scale: how do we take this algorithm and make it apply to most people, in like a utilitarian way or something? And I'm wondering, for you, as someone who is a computer scientist and who designs these algorithms, how do you think about scale, and is it possible for us to have those holistic interventions while also scaling these algorithms?

Speaker3:
I feel like the less charitable take is: Stevie, do you stay up at night worrying sometimes that you've created an algorithmic monster that you can't actually control? And the answer is yes. I think one of the biggest fallacies that you can make with prediction as a technique is that the general case applies to all people, and in mental illness, that has been something that has troubled me in doing this research, because at some point there are contextual and individual factors that impact this. And something I've been thinking about is how you could use the general information you know about diagnosis, or about the things that people discuss on social media, alongside people's personal experiences and journeys, to build systems that are more attuned to a person's needs. Now, there's a ton of technical challenges in that area. How do you even have enough data to know what a person's personal experience looks like? That's going to take hundreds of posts for how we build machine learning systems now. But you also bump into these thorny ethical challenges of, hey, I want to build you, like, a personal depression detector.

Speaker3:
Are you OK with that? You know, that's a little glib, but I think that we as computer scientists don't do a good job at explaining the benefits of the system, or how systems work, to people. And when we focus on issues like scaling as fast as we can, or minimizing or shrinking model size so that models run faster or so that you can use them as people upload content online, before it even hits a social media website, we miss that nuance. And in that process, we can't inspect and interrogate the outcomes of these systems until something goes wildly off the rails. And so trying to get a more personalized approach to this, I think, is one avenue. But I think at some point we're going to have to have a conversation, as a computer science community and, you know, an industry community that wants to do stuff like this: no matter how good our intentions are, are we actually doing good for the community? And I don't know what the answer is going to be.

Speaker1:
We don't know what the title of this episode is going to be, but it's probably going to be something like Mental Health and AI, or Exploring Mental Health and AI, and I imagine there are some listeners out there who are struggling with mental health on these platforms. And I'm wondering, for those users, before we wrap this interview, if there are any specific resources, which we can also put in the show notes, or, understanding you're not a clinician, just general thoughts you might have for folks who might be struggling right now.

Speaker3:
Yeah. So for people who are struggling, I think the most important thing is to try and find the thing that works best for you. For some people, that will be calling a large national or local suicide hotline, because you have privacy and anonymity when you talk to people, right? It may also be reaching out to a trusted friend or colleague, or trying to find a support person who's outside of your social group that you can talk to about your feelings, if you're worried that your feelings will alarm or hurt or alienate people that are really close to you. The other thing is, I know I study these communities and I call them high risk. The reality is they save people's lives. People go to suicide crisis communities on Reddit, like r/SuicideWatch, because they feel like they can talk to people who are trained in suicide crisis and in walking you off the proverbial ledge. Sometimes there's bad stuff in these communities, same thing for the eating disorder ones; sometimes the stuff gets pretty, pretty sticky. The reality, though, is that these communities save people's lives, and these online spaces can be another place to go if you feel isolated or ashamed or are needing that support. They can be a lifesaver for people, especially when you don't have the resources or money to go talk to someone in person, or you're scared. And so I would try and find online spaces that may be able to help you, whether that be through Reddit or Instagram, but also online resources that have low-cost or free talk sessions. So 7 Cups is a really good site for peer support. I know that there are some low-cost therapy options; I don't want to necessarily plug specific ones that charge, but there are low-cost and freely available therapy and talk therapy options that you can find. So lots of resources. You should find the one that works for you.

Speaker2:
And of course, as always, we'll be sure to include all of those resources in the show notes for this episode. And to add to those resources, Stevie, for any listeners who want to either get in contact with you or explore some of the work that you've done a little bit further, where is the best place for them to go?

Speaker3:
Well, I am pretty active on Twitter. I post a lot of content and retweet a lot of people who are way smarter than me who think about problems kind of in the same way that I do. So on Twitter, my handle is @snchancellor. You can also find me on my personal website, though Twitter is probably the best avenue to catch up with the work that I do.

Speaker2:
Wonderful. And that is unfortunately the end of our time for today, but Stevie, thank you so much for the wonderful and important work that you're doing in this space, and thank you for talking to us about it all today.

Speaker3:
Awesome. Thank you so much for having me. I really hope that people are feeling inspired by the opportunities in this space, that there are places for us to make a big difference while also centering the people that need help the most and building tools that can make their lives better.

Speaker1:
We again want to thank Stevie for this wonderful conversation. And before we begin our normal discussion outro, because of the contents of this interview and the sensitive topics that we've covered, we want to remind everyone that if you or a loved one or a friend or anyone in your community is struggling, there is help out there. So please see our show notes for various hotlines and other mental health support. So as we begin this outro, what's sticking with you about this conversation?

Speaker2:
So many things, Dylan, and I think that it would be foolish of us to do this outro, which we're recording on October 5th, without at least recognizing what's happening in the tech world currently. Because a lot of this conversation was centered specifically around Facebook as a case study for some of these technologies, and Facebook is going through quite a lot right now. There was a big scandal with the whistleblower coming out and bringing to light all of these various problems that are happening within Facebook, and the Senate hearing is literally happening today as we're recording. And so it's interesting. I guess it's a coincidence that we are also talking about Facebook in a very different context in this interview, but I did at least want to mention that all those things are happening in the background. And speaking of Facebook, that was actually one of the first things that I wanted to talk about as a takeaway from this interview, because something that really stood out to me from what Stevie said was this almost catch-22 of some of these technologies as they're developed and deployed in Big Tech platforms. And the example that she gave was when Facebook is attempting to predict if somebody is struggling with mental illness, and to predict, really, if somebody is planning on committing suicide.

Speaker2:
And I'm really conflicted about this specific deployment of this kind of technology, because on the one hand, I understand why Facebook should have these technologies in place. Facebook does not want to be an ecosystem that at least seems to promote this kind of behavior on their platform. They don't want to aid people in, for example, streaming themselves killing themselves live on Facebook, which is something that, as Stevie said, has happened. So I understand why they would need to have some sort of system in place to at least attempt to detect at scale when these kinds of things are happening, so that they can try to prevent it. But that being said, we know that these kinds of systems aren't perfect, especially now after talking with Stevie, and we know that there are a lot of problems and challenges that come with developing these systems in a way that doesn't cause more harm unintentionally. So I'm feeling a little bit conflicted about what to do with this specific case study for this kind of technology for predicting mental illness, and I'm curious what you think about it.

Speaker1:
Well, I think Stevie's thoughts are right on there: it's a thorny issue. All of this is a thorny issue, in terms of not doing nothing, like we have to do something, and also there being a lot of wrong ways to do this. As we continue to learn, hopefully we can do better, and as more research like Stevie's comes out. But it's tough. I mean, I'm speaking for myself in this, but I don't think there's a way to, like, solve mental health, right? I don't think there's a way to solve some of these social issues around mental health. I do think there are harm reduction models we can use to make things better and to give people the support that they need, without either reducing them to some sort of diagnosis or causing additional harm or creating more stigma around these topics. One of my favorite research papers of all time is one of Stevie's papers, and it's titled Who is the "Human" in Human-Centered Machine Learning: The Case of Predicting Mental Health from Social Media. And I believe Stevie talked a little bit about this project in our conversation, and that question of who the human is here, and what prediction means when we potentially make assumptions about who that human is. And it made me think about, you know, when we do human-centered machine learning or human-centered design, who are we actually talking about? Because I think there's an assumption that, oh, we're putting a human in, that means we're doing better than if we're not bringing any human in the loop whatsoever.

Speaker1:
But then it's almost like paying lip service to that idea. And so one thing that I love about Stevie's work is really diving into the diversity of human experience, and how just designing for one particular type of identity or one particular type of human does not mean that it actually creates a just or healthy system. In fact, in a lot of ways, based on, you know, where we're getting our data, who we're modeling after, et cetera, it can cause a lot of harm to people that do not fit that mold.

Speaker2:
I also really appreciate the second half of the question that you asked. So the first half is who is the human, and the second half was what are we predicting. And it just makes me think about what the actual goals of these systems are. Because, I mean, let's say in some ideal world that doesn't exist, we somehow come up with a model that perfectly predicts if someone is having a really hard time and they are mentally ill and they are about to commit suicide, a nonexistent model. What then? What do we do with that information? And I think this was kind of what Stevie was getting at with some of the questions we were asking about Facebook, and who's working at Facebook, and who's sitting at the table. Because let's say we have a table full of computer scientists who are amazing at their job, who build a system that tells moderators that somebody needs help or needs intervention. Well, then what? What do we do? Do we give them tools and resources and guidance? Maybe they don't want that. Do we try to patch them through to a hotline? Maybe that ends up making things worse for them. There's not really a perfect solution here for how to try to intervene in these kinds of scenarios, and that's regardless of what technology we're using. And so when I was listening to this interview, because we actually recorded it a few months ago and I was listening to it before Dylan and I recorded this outro today, I was just screaming internally when Stevie was talking about the people who try to solve social problems through technology, even when those social problems haven't been solved socially yet. And in my head, I was screaming: the solutionist trap!

Speaker2:
The solutionist trap, which I remembered. I think we probably quoted the solutionist trap on this show, like, almost every episode for several months at one point there. We'll link in the show notes the paper that it comes from. It's a really brilliant paper about a bunch of different traps that people fall into when they're creating technological systems, and one of them is the solutionist trap, which is this idea that when you try to solve a social problem with technology that has not been solved socially yet, you can cause more harm than good. And I think this is just a perfect example of that. And it doesn't mean that we should not try to do this work, because, as we were talking with Stevie about earlier, this work is necessary and there are reasons why we need to have these kinds of technologies to try to make these sorts of predictions. But there is a lot of potential for harm here. And I sit here just not knowing what the best answers are and trying to think of a solution space without using the word solution. Maybe that's my problem; I need to think of a different word here.

Speaker1:
Well, that's what I was going to say. Again, speaking for myself, I think even the framing of, well, we're going to solve this thing, this thing of what it means to be human, or struggling while being human, or really anything to do with mental health, to frame it as, well, we're going to solve this thing, especially, as you're saying, solve it through technology, I think that's a losing gambit. To say we're going to solve mental health, period, regardless of what tools we're using, even when you're talking about interventions. We continue to struggle. We as a society, especially as a society that continues to stigmatize mental illness of all kinds, have interventions that potentially work better than others. But even those interventions are not one size fits all, and they can't be. And so to take a system like Facebook, for example, or any big system at scale which is trying to do this predictive work, the scale is just not going to work right, and there are going to be some casualties. Now, again, I think, to Stevie's point, we can do better and we can keep doing better, but we shouldn't live in this world in which we're like, well, we're going to solve some of these really thorny issues, because I think that is the hubris that gets us inadvertently, but still, causing an immense amount of harm to users of so many different backgrounds, even the ones that are, quote unquote, neurotypical or whatever phrase you want to use. I think there can be harm caused across the board when we're dealing with some of these issues. But, as is often the case, we could talk about these things for quite a long time, and I believe we're at time.

Speaker2:
So for more information on today's show, please visit the episode page at radicalai.org.

Speaker1:
And if you enjoyed this episode, we invite you to subscribe, rate, and review the show on iTunes or your favorite pod catcher. Catch our regularly scheduled episodes the first Wednesday of every month, with some bonus episodes in between.

Speaker2:
Join our conversation on Twitter at @radicalaipod. And as always, stay radical.
