Episode 3: Are We Being Watched? Unpacking AI Surveillance

Featuring Kandrea Wade


In this episode of the Radical AI podcast, hosts Dylan and Jess interview Kandrea Wade.

Kandrea is a PhD student in the Information Science department at CU Boulder focusing on algorithmic identity and the digital surveillance of marginalized groups. Along with developing her research at CU Boulder, Kandrea seeks to discover and assist in creating proper ethical regulations and education on algorithmic identity and digital literacy. With a background of over 15 years in entertainment and media, her interests have evolved from demographic programming for entertainment and media theory to corporate user ethics and legal protections for the digital citizen.

Kandrea holds a BA in technical theatre from The University of Texas at Arlington and an MA in media, culture, and communications from New York University.

You can connect with Kandrea on Twitter at: @KandreaWade

Welcome to Radical AI, a podcast about radical ideas, radical people, and radical stories at the intersection of ethics and artificial intelligence. We are your co-hosts, Jess and Dylan.

Just as a reminder for all of our episodes, while we love interviewing people who fall far from the norm and interrogating radical ideas, we do not necessarily endorse the views of our guests on this show.

And we encourage you to engage with the topics introduced in these conversations and take some time to connect with where you stand with these radical ideas. We also encourage you to share your thoughts online on Twitter at Radical AI Pod.

In this episode, we interview Kandrea Wade, a PhD student in the Information Science Department at CU Boulder focusing on algorithmic identity and the digital surveillance of marginalized groups, with a background of over fifteen years in entertainment and media. As a sneak peek of the interview that will follow, at the beginning of every episode Dylan and I like to do a little segment we call Loved, Learned or Leave. Here we talk about different topics that are brought up throughout the interview that we either loved, that we learned from, or maybe that we want to leave behind, and this just means something that particularly challenged us throughout our conversation. So one of the things that I loved about this episode was Kandrea's interdisciplinary approach to her research. Because she has over 15 years in the media industry, she really uses that as a way to inform her research and then also inform people about their data and the dangers of data-assisted surveillance, which I think is a really unique approach and just really relevant today with people's short attention spans. So I love that idea. Something that I learned was the concept of contextual collapse, which is one of many dangers of data when it's misused and abused. And something that I would leave behind, so something that challenged me, is this idea of not giving up on cleaning your digital identity. Coming from someone who has had Facebook since 2009 and has hundreds and probably honestly thousands of photos and comments and posts that I do not care to take the time to prune from my digital identity, I have probably, in a sense, given up on cleaning it at this point. And Kandrea gave me a little bit of hope to maybe change my mindset on that issue.

There's still time, Jess, there's still time for you to clean your digital identity. It's beautiful. There's still hope for us all. I think one of the things that I really loved about our interview with Kandrea was actually about her background, and her background in theater specifically, because, Jess, as you may or may not know, I have a lengthy history in the theater, starting from high school when I did tech and when I did acting and all of that, up through college, up through after college, when the pressures of the New York City theater scene were a little bit too much for me. But I think it's amazing how Kandrea has utilized her experience in media and theater to apply to what she's doing now in data ethics and AI ethics. Something that I learned was more about digital literacy. I don't know if before this interview I could have given a good definition of what digital literacy is, or at least some of the nuances or more of the nuances of it, and Kandrea really opened my eyes to some ways that I am digitally literate, if that's how you say that, and some ways in which I am digitally ignorant. And if that's not a phrase already, I am going to patent it.

And I think the thing that I want to leave behind isn't really something that Kandrea represents, but is something that we talked a lot about, which is this concept of surveillance and data surveillance. And after this conversation, I'm still left wondering: do I have agency with how my data is used out in the world, and what really can I do as a consumer to protect myself beyond checking the boxes of the user agreement? So that's a question that I'm still wrestling with myself.

We are so excited to share with you this interview with Kandrea Wade.

Kandrea, thank you so much for joining us today. How are you doing today? I'm great. I'm great. It's great to be here. Thank you so much for having me. How are you guys doing? I'm doing OK.

There's a lot going on in the world right now. So thank you so much for taking the time to chat with us. And before we get into the specifics of your research and everything else that you're doing, which is super exciting, could you just tell us a little bit about yourself and your journey towards where you are right now?

Ok, of course. Interesting journey. So I was born in Germany to two military parents. My parents were both in the army, and I spent most of my life in Texas. I've always been in love with art and media and technology, which led me to the super diverse path that I'm on now. And so I have a bachelor's degree in technical theater and worked as a technical director and lighting designer for over 12 years. In that time, I taught at a college, worked at several concert venues, and held tech director positions all across Texas. And then from there I started working in events and film and television, and I've worked for South by Southwest, Viacom, MTV, Bravo, Disney and several other production companies.

So in addition to entertainment, I've also worked in education for 15 years, and I've taught at colleges before. I was a senior admissions advisor for the Princeton Review, and that was just before I went to NYU for my master's in Media, Culture, and Communications. So that was about two years ago. And at NYU, I was really interested in studying demographic programming for production companies in TV, film, streaming services, things like that. And as I moved through the program, I realized that in order to do demographic programming, you have to have user data. And this made me start thinking about what exactly these companies can see and what they're doing with that information. So this introduced me to user data ethics and bigger questions of how we as humans are sorted and categorized in systems. And so I started taking all of my electives in data science and applied statistics and made my master's program kind of a combo between media studies with a focus on data ethics.

And so I started to look at bias and ethical dilemmas in the usage of user data and tech in corporations and government. And that whole idea led me to CU Boulder, where I'm now pursuing my PhD, and I'm focusing on algorithmic identity and the digital surveillance of marginalized groups. So that's me in a nutshell.

That's such an awesome story. I love people with non-traditional paths. Yeah.

So a lot of questions that I could ask right now, but I'll start off with something that just stuck out to me. I think it makes sense from your background why you got into the digital identity space. But I'm curious where this surveillance piece came in, because that kind of seems a little bit different from your previous research interests.

So I think what it came down to is a lot of what I was learning in the classes that I was taking at NYU and the research that I was doing in digital identity.

It comes down to, again, we're being sorted and categorized as people. And one of the most famous cases that a lot of people in our field know about is a company, Northpointe, and that is basically a system that's sorting people who have been arrested and determining the outcome of, you know, what their sentence will be or what their criminality is, based on just some data points about them. And so that, along with, you know, who I am as a person, who I am as a researcher. I'm a black woman in America. And so there are a lot of concerns that are valid to me in the physical world that I realized were being carried over into a digital space. And so a lot of that has to do with the fact that, I think, as a black person in America, we often feel surveilled just in general. And so I think that being carried into a digital space where there are cameras and there's Internet tracking and monitoring, you know, criminal justice records, credit scores, things like that. I think it just became more and more important to me to serve not only myself, but my community. And so that's where surveillance really became important, is because I know that taking a look at people from that type of high-level perspective doesn't always come back with the most accurate results.

So I feel like data ethics has become almost a buzzword in some ways now, at least out in the field.

And sometimes I feel like we use that word without defining exactly what we mean, or maybe we're new to the field and we don't want to embarrass ourselves by asking.

And so I'll embarrass myself by asking you: when you say data ethics, or when you talk about the types of data that people can see on me when I spend an hour online, could you just unpack that a little bit?

Of course. So when I'm talking about ethics, it's something that's really in line with the codes of ethics that people are attempting to create around data. It falls in line with what we look at in health care or health information management, and it also has to do with how we treat humans in a physical space when it comes to medical ethical behavior. So looking at that: you have rights, you have consent, there are things that are inhumane to do to people physically. We're trying to carry that over into a space where it can be inhumane or inappropriate or unethical, just the same, to have certain visibilities. You know, the same way that FERPA for schools or HIPAA for medical says you can't release certain pieces of information. And so something that we're looking at is really trying to determine what the best way to go about creating these ethical policies and regulations would be. And the types of data that people are looking at are things that we all give up voluntarily, you know, interacting with the Internet. And we're all kind of stuck using the Internet now because of what's going on in the world. And not everything that we use necessarily has full consent from us. There's a lot of implied consent, which is, you know, a problem that historically existed in health and medicine, with experiments being done on people. And so in a way, it's almost like technology is doing these experiments on our data, with us as humans, with anything from, you know, the purchases that you're making online to your browser behavior, or even, you know, depending on what you're using and the technologies you have in your home, it's listening to you, it's tracking you. It's even tracking your eye movement, depending on some of the tech that you have, the cell phones that we use. And so there's a little bit of something everywhere, and we're really questioning the ethics of how far these technologies can go to really be invasive in our lives without us knowing what they're doing.

Something that we've spoken about before, Kandrea, that I think is kind of interesting from your perspective, with this data ethics idea in mind, is just how much data about us is out there and how much data we put out there on our own. And I think there's this tension when we talk about responsibility, and a lot of people tend to put the responsibility for this data ethics idea on the companies who are collecting the data. But from your perspective, it seems like you actually are a little bit more focused on the alternative side of things, and you're flipping responsibility. You can totally correct me if I'm wrong, if I'm putting words in your mouth, but it seems like you are placing maybe some responsibility on the users and data literacy, and on them to understand what they're doing with their data. So I'd like first for you to correct me, but then also for you to tell me what your views are and what your research has been within this space.

I think it's important to educate users about data literacy. I wouldn't say that I'm necessarily putting it in the hands of the users solely. I think it has to be kind of a dual effort, because ultimately, the way that I see it is that money is the bottom line for a lot of data handling right now, and companies and even governments are going to go to extreme efforts to dodge any regulations on user data. And I do feel like educating the user is a more effective approach at the moment. So ultimately, it would have to be a simultaneous effort to continue to work to develop policy and regulations within government and industry alongside increasing digital literacy among users. And that needs to start, I mean, as early as grade school, you know, as soon as these kids are given their first connected device. So, like I said, it's a dual effort. But until we can find a way to make these data brokers take stake for the good of the user, it's really up to the user to protect themselves. And so, you know, much like any other part of life, the consumer's or the individual's interest may not be the main concern for that company or entity. It kind of falls on the user or the consumer to be responsible for protecting themselves.

When you talk about all of the data that people have on me and on all of us who are listening to this podcast, I start to get a little afraid. Should I be scared?

That is a great question. And I would say yes, I would say to be reasonably skeptical. Because the scary thing about that question is that no one can answer it. No one knows. I mean, ultimately, there are so many repositories of data on you, on each of us, that we don't know where they are. We don't know what they're doing. We don't know who's buying, selling, trading this information. So your question is a great one, because it's actually hard to answer. I think in being skeptical, not scared, skeptical, we can all kind of enact some better habits and standards of how to protect ourselves and to be more aware, like increasing all of our personal digital literacy about what's happening with our data to the furthest extent that we can. But I think ultimately, because that question can't be answered, there is a lot of inherent fear, and I don't blame anyone for feeling that way. But the user has a decent amount of control that they can enact in this as well. And so, you know, you can talk to your policymakers, you can obfuscate your own identity and go through some securing of your own personal activity online and in the world. And so there are a lot of things that the user has power to do. And I think people often feel scared because they feel helpless. And so I think empowering the user, just like all of us here, I think that would help to kind of mitigate some of those fears.

It sounds like one of the things that you're talking about is how to give the user more agency in their life and in the use of their data out in this world, this massive forest of big data that we found ourselves in.

And I'm wondering, when you talk about data literacy and digital literacy, if you could just unpack that a little bit, like what would make me a digitally literate person, or how would I increase my digital literacy?

So I think a lot of it has to do with having an understanding of how these networks are working and that there is a copy of you. There is a virtual copy of you. And I think a lot of people we all know we have profiles. You know, we have social media.

We have Facebook. We have TikTok, whatever.

And I think people see these as little pieces of their identity or, you know, kind of proxies for who they are, examples of just this one facet of their life. But what people need to do, in increasing their digital literacy, is understand that this is a part of a much bigger network.

And so all of this is connected to a profile that is being kept on you: your name, your Social Security number, your profiles, your usernames, your login information. So I think increasing digital literacy, like I said, has to do with helping people to understand the infrastructure.

And I think it has to do with helping people to be just generally smarter. I mean, a lot of the ways that it's easy for people to be tracked is using the same usernames, using the same passwords, basically building really trackable habits, not turning off location tracking on your phone, not really reading what you're seeing, even those websites that come up with cookie acceptances and things like that, not actually taking the time to look at what you're agreeing to.

And so I think a lot of increasing digital literacy and data literacy is helping people to understand that the problem is not bigger than they are, but it does take more steps on the user's behalf to protect yourself. And so, like I said earlier, it's very scary to a lot of people because they feel like, oh, I already have a Facebook, I already have a TikTok, I'm too far gone. And that's not true. The Internet moves forward, time moves forward. And though everything is saved and, you know, archived, you can start today to change those things.

And so I think part of digital literacy and data literacy is also informing people that, like, it's never too late to start making a difference to protect yourself.

Yeah, kind of along those same lines too, I think a lot of people generally tend to fear the big data world because of so many unknowns, and because they're not really sure how much data is out there, what data is out there, and then also what is being done with the data. And I'm curious, with the research that you've done so far about surveillance and your interest in surveillance, if you could dive a little bit into what some of the implications are, what the dangers of this data being out there could actually mean for people in terms of surveillance. Like, what are some of the possible futures and misuses of this data that we should fear, that might motivate us to actually become digitally literate citizens?

Well, that's a great question. So part of what happens when all of this data is aggregated about you is that it's not a full and complete picture of who you are as a person. It's essentially a snapshot of who you are at a certain time, at a certain place. And so it's incomplete, it's inaccurate, and it's time sensitive. And so some of the risks that come with that come down to, a lot of times, the group that you belong to demographically, and some people are more at risk than others. Particularly in my work, I focus on marginalized groups and vulnerable populations. So, for instance, with the people that I work with, being surveilled or having their information or data looked at in this very snapshot type of way can lead to a lot of assumptions that are made about individuals.

And so this information being aggregated this way creates disparities and discrepancies in insurance rates depending on proxies like zip code.

It changes your credit score depending on geographical location or what they assume of you based off of all of these data points. Like I was talking about with companies, it changes your criminality. It changes the recidivism rating that you have based off of who you associate with and who you look like to these systems.

And so when you're being surveilled, for instance, there's a really good example of that.

There are people in New York who were using the food stamp system with a food stamp card, and the only places that they had to redeem these, to pull the money out at an ATM, were liquor stores or bodegas, places that, you know, typically sell tobacco and alcohol. And this happened, if I remember correctly, in New York.

And so basically what happened is that the food stamp funding started being called into question in this particular urban area, I think it was in the upper Manhattan, Harlem area. It started being called into question whether these people deserved food stamps, because they were obviously spending their food stamps on liquor and cigarettes, because of the surveilling of where the transactions were happening. And so that surveillance started to put these people's livelihoods, their money and their food, in jeopardy because of an incorrect association with where the money was being pulled from. The money was never being spent on these things; it can't be spent on those things. You know, where these transactions were happening was in places where people just needed to buy milk and eggs and things like that. They were going for essentials. But being surveilled, what it is, is contextual collapse.

And a lot of my research has to do with when the context of who a person is and their true lived experience is removed, and you have nothing but these data points and these proxies for who they are. It's a catastrophic loss of who they truly are as a person, and it has a lot of implications that can have real-life effects.

It almost sounds like the algorithms, or the technology at least, get used in such a way that it creates an echo chamber, a self-fulfilling prophecy, where the data about an already marginalized community gets fed back again and again to further marginalize that community.

I'm curious, I guess the question in the back of my head is, well, what do we do about this, Kandrea? Like, what can we do about this? Which also goes back to the question that Jess asked earlier, like who's to blame or who's responsible. But where do we begin to do this?

Can we just make changes in algorithms to make them more fair? Or does transformation have to occur on a larger, more societal basis?

Perfect. It's both. It's both.

In all of the research that I've done and everything that I've looked at, it's 100 percent the case that there has to be policy and regulations surrounding the handling and the viewing and the usage of data from users and citizens.

But it also has to do with the fact that a lot of the bias and a lot of the feedback loops that happen within these systems come from negative historical markers in society.

And so as much as we can try to, you know, influence companies and the people who are writing these codes and governments to really take more stake in what they're doing and the implications of how they're handling or mishandling the data of people, how they're using people, honestly, as much as we can create policy to make sure that they understand how important that is, you still have coders in a room that are feeding historical data into these algorithms, and you still have people who are making decisions about who your biggest stakeholders are and who your audience is and who you care about. And so even in the development of systems, there is bias. So it's not just the code itself, and it's not just regulating it. It comes down to a very human level: we do have to also work to help humans be better as a whole, as we have been for many, many years now, but helping them to also, again, with digital literacy, understand how those biases are being transferred into a digital space, a space that moves faster and more accurately, not always positively, but that moves faster and more accurately and more efficiently than a human ever could. So like you were talking about, that feedback loop, it's moving faster than it ever could with, you know, humans sitting down with pen and paper.

And so that's also something to consider about this: it's a dual approach to the problem.

So you talked about how going to the machines and having machines and algorithms now make these decisions that humans have been making for so long, like in the case of, you know, criminal recidivism risks and fair trials and coming up with food stamp eligibility, things like that. You were explaining that machines are faster and more efficient in many ways, which is why we've placed them into society. But then clearly there are a lot of problems with them, like black box algorithms that are hard to understand, and they're perpetuating a lot of societal problems that are reflected in the data. So you have these problems with them. But I'm curious if you think that means we should avoid those problems by just not placing those algorithms into society and just doing things the old-fashioned way, just keeping the good old-fashioned human bias and discrimination? Or what we really can do, because there kind of is this tradeoff, right, that machines are more efficient, and there's a reason we're placing them into these systems that we have. But if they're introducing potentially worse problems than we had before, then is it worth it? Like, where do we draw the line?

So I think what this comes down to is that keeping the human in the loop is really important.

And what that means is that even though we have all of these AI systems and these technologies running for us, it's really important to still have a human be the regulator. The way that I put it is that we as humans are still the qualitative regulators of quantitative systems, because AI does not have the ability to make qualitative decisions at this point. And so I think what it also comes down to is that I think that technology is amazing. You know, I mean, what I research makes me sound like I may be anti-tech or, you know, super, super skeptical about all of it. But that's part of the power in what I do: I have to be skeptical and also believe in it at the same time.

And so I think one of the better things about technology is that it is faster, it is more efficient, it is more capable than any of us are as humans to do things. But I think, one, that human in the loop is very important. But I also think that we may need to revisit, as we move forward, the practical application for these things. I don't think that AI is necessarily the solution to all problems. And so I think that we may find as we move forward that some things need to take some steps backwards and go back into the hands of humans. I think we may find that naturally; we have problems of automation where we've seen that happen frequently.

But I think it comes down to this: as we go forward in developing this, we'll see more and more cases where we'll be able to pinpoint that this tech doesn't need to be used just because it's new and flashy and cool. We really need to take a better look at, OK, would a human be better for this job? We have tech that's capable of it, but would a human be better? And so I think we'll start taking a look more in the future and making decisions between what's the best tool, whether it be the human or the AI, for the project in front of us, instead of just assuming, like, oh, computers can do it all.

That's not true.

I'm wondering if we can stay on this human in the loop idea and if you can explain a little bit more about it, but I'm gonna add a little monkey wrench to it as well.

Because when I hear human in the loop, I start asking, you know, which human? What humans do we want in the loop? Because when I go and talk to, say, tech startups, I see a lot of people that look like me. And for those of you who are listening to the podcast, you can't see me: I'm a straight white male. Right. And I wonder, from your perspective, as a black woman in this space, when you talk about human in the loop, is there kind of an ideal of how we do that partnership, or how do representation issues affect this?

Absolutely. You know, it's funny being on this podcast, you know, thinking about what's radical about me and my research.

Like, as you were saying, as a black female researcher, I feel like my existence in the field is radical in and of itself. You know, there are not a lot of us, and though we're growing in numbers and visibility, there's still a huge deficit in the amount of people who look like me in the field.

And so with that being said.

I think that we need to, of course, have more diversity in the room for who these humans are, like you said, human in the loop, but also who's developing these systems, making the decisions, for the hiring teams of who works at these companies, you know, the people who are leading these projects. And so it's not just the human in the loop being the regulator for the system when something goes wrong or we need a double check on this. It's at every point, from the beginning of, we're going to create this code for this purpose, you need to have a diverse set of stakeholders in the room to ensure that you aren't hurting any populations. You know, it's not going to be possible to account for everyone all the time. But the best thing that we can do is to do better than we're doing now in the world of tech to diversify that room, whether it be in the hiring room and HR, or whether it be the room of coders, or the room of testers. You know, it needs to be spread more evenly across the people who are going to be interacting with these technologies. Which is something that's super bizarre to me, that, you know, the populace that's using tech is so diverse and so wide and broad, and we have this very small spectrum of people that control and make all these decisions in it.

And so it's more than just keeping a diverse human in the loop. It's keeping a diverse human in all stages of building these technologies.

And you mentioned a little bit about what you think makes you somewhat radical in this field. And something that we love to ask our guests on this show is what you think the word radical even means, or how you would define radical for you.

Ok.

Radical for me means cool, extreme, outlandish, potentially unique. I feel like the word radical is indicative of a force of change.

And I think that oftentimes you have to go to radical extreme measures to enact change. And that can be for good or for bad. So I feel like that's what radical means to me.

So something that you've mentioned before to both me and Dylan outside of this podcast episode is your wish to bridge the gap between what you called the academic silo and the layman masses. I really like those two terms. I would love for you to unpack that and dive in a little bit to what that really means to you in terms of representation and accessibility.

Yeah, absolutely. So I'm very, very interested in bridging the gap. And, you know, it is between the academic silo, because academia has a tendency, we do all this fantastic research, but then we just talk to each other about it, and so that's not really helping anything, and then the layman masses, who, you know, are not spending all day, every day focusing on this, which is my job, you know. And so I think that there's a way to bridge this gap in probably three different ways. There are three different means to do this. So one, I feel like, is common language. You know, non-technical language is essential to helping the layman understand the research that we're conducting in academia and disseminating the importance of those results.

I think, two, that academics need to take more stake in speaking to their stakeholders and considering that those stakeholders are a wider audience that includes the populations that they're researching and the communities that they're affecting, who are typically not academics.

And then the third thing is, I think that, you know, I have an interest in merging my background with my research. And so I'm attempting to use entertainment and media as a means to disseminate the messages of my research. And so I really do think that media and entertainment is a really effective way to demystify data and digital privacy and make it more accessible.

You know, I'm a big fan of academics and scientists who have utilized creative outlets to get their messages across. You know, like Bill Nye and Neil deGrasse Tyson.

And there's a rapper who's also a Google engineer, Brandon Troy. I think that it's a really powerful and great way to reach a bunch of audiences. And I think specifically with entertainment and media, you can reach a broader audience that is going to be younger, which is important because these are the kids that are going to be our new programmers and coders and the ones who will be deciding these things for our future technologies. But it also reaches a more broad and diverse audience if you approach it through a different means.

And so in trying to speak to the layman, you've got to kind of reach them where they want to be. And right now, that's in media and entertainment. And so I think, with how many people are consuming all this media and entertainment, especially right now, to kind of distract themselves from what's going on in the world, you know, you can still have some really salient, important messages in that, messages that are applicable to all sorts of diverse populations.

I mean, we should name as well what's going on in the world right now as we're having this interview, right, where we have COVID-19, the pandemic, traveling across the world. The numbers are starting to climb. And I'm wondering if there's anything particularly about that context that you think, either as a big data researcher, for those of us who are concerned about our data during this time, or just as Kandrea, if you have any words of support or advice to folks listening right now.

We're all going to make it through this.

And we always have to be really cognizant of the fact that, you know, there are going to be some things that go on in technology while we're all at home and relying on using it.

And so, you know, be smart about the decisions that you're making and realize that there's a lot of digital surveillance going on right now, because we're not only trying to figure out what's going on in the world and with people's health and with the well-being of, you know, how everyone's doing around the world. There are also a lot of companies and entities out there that are loving the fact that we're at home and using our computers so much.

And so I have a couple of different thoughts about some of the things that are going to happen in tech based off of the coronavirus.

I think that the issues in the U.S. are primarily going to surround inaccurate data collection and reporting. At this point, with the spread of the disease across the US, we just don't have the ability to accurately track or surveil anyone for any reason. But I do see a potential for cases of overpolicing growing due to new powers that the police possess to stop individuals who are violating quarantines or shelter in place. And this is something where public surveillance could be increasingly problematic for individuals or groups who already don't have the best relationship with law enforcement. I also think that in other parts of the world, like in China, where surveillance has started to have a direct correlation to social ranking and status, i.e. the social credit system, there are reports of people being surveilled as not being truthful about their history and involvement with the virus. And so being seen as having the virus or being in association with anyone that has had it is a reason for them to be docked or demoted in their social credit system, which has real implications and consequences in all parts of their lives. But, you know, like I was saying, across the world, I really do just feel like, you know, just be smart about the decisions that you're making, because online behavior is on the rise, and all of us are kind of in a position now where we only have the option to work remotely for work or school or other typically in-person activities. And so more time online leads to more data on each user. And this is prime for all sorts of interested, invested entities who want all of that delicious data, and they're going to pay top dollar for it. So just be careful about what you're doing while you're sitting at home doing it.

Wise words. Delicious data, too. That's a great alliteration right there.

So, Kandrea, as we reach the end of this interview, if you have any social profiles or websites or anywhere where people could get in touch with you about this research that you're doing, if they're interested in reaching out, do you have anything that you'd like to share with the listeners?

Absolutely. Everything you can find me on is under my name, Kandrea Wade. So that's K, A, N as in Nancy, D as in David, R, E, A, W, A, D, E; no spaces, no dash, no dots.

You can find me on LinkedIn. You can also find me if you just Google my name; you'll find some profiles of the research that I'm doing at CU Boulder, as well as with the Identity Lab run by Jed Brubaker at CU Boulder.

So Google my name and you'll find me.

Awesome. Well, thank you so much again for coming on. It's been a real pleasure.

Absolutely. It's been a blast. Thank you, guys. Stay safe.

And we just want to thank Kandrea again for joining us today for this wonderful conversation. So now, what did we find radical about this episode? For me, first, I think the thing that stood out was Kandrea's one-liner about the power of being skeptical and a believer at the same time. I think that this is a really cool idea when it comes to research and academia, that you get so deep into the weeds in these topics that you need to inherently be a skeptic while you're a researcher. But if you get too skeptical, then you kind of lose all hope in your research. So, in the sense of surveillance, if you get so deep in the weeds that you are such a skeptic of this idea of people ever having control over their data, then you might lose all hope in a future where surveillance isn't possible. And then you might wonder why your research is even important in the first place, and there's no sense in doing that. So you do have to still hold on to that one part of you that's a believer and think, no, we can create a future that has digitally literate people, and we can create a future that pushes back against government and AI-fueled surveillance. And that little bit of hope is what provides the motivation to push you through all four, five, six, seven years of your PhD program.

Not that anyone's counting the years of the PhD or anything.

For me, something about this conversation is that it scares the heck out of me in terms of data and surveillance, because it's not getting better. People are finding more ways to find data and to use my data and to apply my data to try to do any number of things that psychologically will make me buy their product, essentially. And there are so many ways for us to have our data used in pretty nefarious ways, and Kandrea named some of those ways.

So I'm not hopeless, and I appreciate her call for hope, especially towards the end of the interview. And I also appreciate her call for us to claim more agency as consumers. But I also am feeling powerless to a certain degree: what can I actually do? It's one of the reasons why I found myself in this field in the first place, as I wanted to be part of the conversation. But there are billions of people in the world that can't be part of the conversation for how we design these algorithms and these products and can't be in the room. So for me, that was another major justice element that made Kandrea's interview radical and Kandrea's research radical: she's trying to find a way to make digital literacy a thing that can be accessible to everyone and not just the people that are in the room. But it's a tall order, and there are systems of oppression and so many other factors that are going to make it really difficult to make technology in the next 50, 100 years, the next seven years of your PhD, applicable in such a way that it's really accessible. And all of that really worries me. And I don't know if I've seen enough in industry yet to sense that there's been a true wake-up call. Again, that's why Kandrea's work is so important.

I completely agree. And on this idea of powerlessness and feeling almost hopeless in a sense, I think this is something that you and I, Dylan, have probably experienced to a certain extent, just because we are so inundated in this data ethics field as a part of our research. It is really easy to feel hopeless, because you can't necessarily opt out of being a digital citizen in today's day and age. We can't just say, OK, I don't want people to take my data and misuse and abuse it and use it for surveillance or whatever bad things that we heard from Kandrea and we've heard from many others online. We can't just necessarily click an opt-out button and all of a sudden none of our data is collected, and we can't just not use the Internet. So to a certain degree, we kind of are hopeless, just because we have to provide data to people in order to exist in modern society. It's sort of like, if you want to not have Facebook collect your data, sure, you can not have a Facebook account, but then you don't get invited to Facebook events anymore, you can't log into accounts through Facebook anymore. There are so many services that you lose out on. And so when we try to take action, sometimes there are some negative repercussions that are a little bit unfortunate.

Yeah, Jess, what you're saying makes me think of that one phrase, and you might have heard it: you can't fight city hall. Like, you can't fight Google. You can't fight Facebook.

For me as an individual, even an individual with some privilege, I'm not going to go up to the doors of Facebook and demand my data back. Also, I still want to use their service. So what do I do about that? And I think what Kandrea points us to, in terms of where the hope might be, is that it is a fight of inches and not miles, and it's more of a harm reduction strategy for how we take the next step, as an industry or as the academy, towards making artificial intelligence and the entire tech sector more ethical. And that has to happen step by step by step. It can't all happen at once. And how we do this on the inside also ties into what Kandrea was saying about having a human in the loop, that we don't just let algorithms run wild. Just because we can do something doesn't mean we should do something. And then there's the question of, well, who is that human in the loop, and do we have a diversity of people being represented in those rooms? Because we're not going to have a diversity of needs being met until we have a diversity of voices being heard in those rooms.

No, thank you, Dylan, for bringing up this aspect of hope. I think it's actually really important, even though a lot of what I was saying earlier might resemble this sense of hopelessness. It's definitely very important, even as Kandrea was saying, to make sure that we remain believers alongside our skepticism, alongside our fear, especially when it comes to the online world and our data that is being collected, used, abused, misused, used for surveillance, you name it. I think that it's really important for us to also remember that we do have power in this. Although these big tech companies are using our data, we are important to them in that we provide that data. So we are powerful not only in the aspect that they need us, but also in that we are in control of what data we're giving. And that's why I love this concept of digital, or data, literacy, because if we become more aware of the data that we are giving to these companies and the data that they are then collecting on us, we can slowly start to become more aware of what they can do with that data and what that might mean for us later on down the line. So I think that it all comes back to this piece of awareness, and that's really what gives me hope in this.

Right. And it's not to say that Google or Facebook are inherently evil. In order for their businesses to function, they need data and they need our data.

And that's OK, so long as there are ethical guidelines in place, including legal guidelines and development guidelines and other guidelines, and that they listen to those guidelines and adhere to them. The problem comes when you have these companies who are taking people's data without any consent whatsoever and are completely aware that they're doing it, and in doing so are causing harm. Whether it's intentional or not, they're causing harm immediately and downstream. And so this conversation is more to provide a spotlight on those situations and to say, OK, we all need to wake up a little bit. This is happening, and we need to reclaim, as consumers, some of our agency over our own data. And on that cheery note, thank you so much for checking out this episode of the Radical AI podcast. For more information on today's show, please visit the episode page at radicalai.org.

And if you enjoyed this episode, we invite you to subscribe, rate, and review the show on iTunes or your favorite podcatcher, and join our conversation on Twitter at Radical AI Pod.

And as always, stay radical.
