The State of the Union of Surveillance: Are Things Getting Better? with Liz O'Sullivan



What should you know about the state of surveillance in the world today? What can we do as consumers to stop unintentionally contributing to surveillance? The Facial Recognition industry had a reckoning after the murder of George Floyd - are things getting better?

To answer these questions we welcome Liz O'Sullivan to the show. Liz O'Sullivan is the Surveillance Technology Oversight Project's technology director. She is also the co-founder and vice president of commercial operations at Arthur AI, an AI explainability and bias monitoring startup. Liz has been featured in articles on ethical AI in the NY Times, The Intercept, and The Register, and has written about AI for the ACLU and the Campaign to Stop Killer Robots. She has spent 10 years in tech, mainly in the AI space, most recently as the head of image annotations for the computer vision startup Clarifai.

Follow Liz O’Sullivan on Twitter @lizjosullivan

If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on Twitter at @radicalaipod.


Relevant Resources from the Episode:

Follow S.T.O.P.—Surveillance Technology Oversight Project on Twitter @STOPSpyingNY or check out their Website

Follow Arthur AI on Twitter @itsArthurAI or check out their Website

Campaign to Stop Killer Robots

Freedom of Information Act in the USA


Transcript

This transcript was generated automatically and may contain errors.

Welcome to Radical AI, a podcast about radical ideas, radical people and radical stories at the intersection of ethics and artificial intelligence. We are your hosts, Dylan and Jess. In this episode, we interview Liz O'Sullivan. Liz is the Surveillance Technology Oversight Project's technology director. She's also the co-founder and vice president of commercial operations at Arthur AI, an AI explainability and bias monitoring startup. Liz has been featured in articles on ethical AI in The New York Times, The Intercept and The Register, and has written about AI for the ACLU and the Campaign to Stop Killer Robots. She has spent 10 years in tech, mainly in the AI space, most recently as the head of image annotations for the computer vision startup Clarifai.

A few months ago, after IBM, Microsoft and Amazon made statements disavowing facial recognition technology, and really made headlines and made waves saying that they were no longer going to share their facial recognition tech and data with law enforcement, we brought on Deb Raji to give us a 101 about what we should think about some of those announcements, about whether we should take them seriously, and about really what we should know about facial recognition technology and surveillance technology in general. And we thought it would be a good idea to bring in another expert a few months down the line to check in about the surveillance situation in our country, the United States, and also around the world. And so it was lovely to talk to Liz O'Sullivan about many topics in this interview, but especially what you should know about the state of surveillance in the world today, what we can do as consumers to stop unintentionally contributing to surveillance, and really, after the facial recognition industry had a reckoning after the murder of George Floyd:

Are things actually getting better?

In this interview, you'll hear us talk quite a bit about the Freedom of Information Act. Since it was signed into law in 1967 in the United States, the Freedom of Information Act, otherwise known as FOIA, has provided the public the right to request access to records from any federal agency. This act is helpful for those of us engaging with surveillance technology because it gives us a channel to request information about what surveillance technology is being used and what data is being collected by the government.

So without further ado, we are so excited to share this interview with Liz O'Sullivan about surveillance technology with all of you.

We are on the line today with Liz O'Sullivan. Liz, welcome to the show. Hi, thanks for having me. Could you start us off by just telling us a bit about who you are and why you do the work that you do? Oh, man, that's a great question. Who am I? Well, gosh, I guess I've been working in the tech scene for a while now, it's maybe eight or nine years. My whole career has been in AI companies, but on the business side. So I wasn't nitty-gritty doing the ones and zeros like all of you folks, but I was sort of on the product side, and that's where I started learning about things like bias and explainability.

And, you know, my career brought me to a company called Clarifai, which is a computer vision company. And because it's just not a startup unless you have 17,000 jobs, I ended up transitioning away from the business side and I ended up working on annotations, which was a really fun space.

And this is kind of a long-winded way of getting around to it, but I promise it makes sense, because, doing annotations at a computer vision company, you start to see the ways that these preconceived notions of bias and cultural differences across continents show up in models. You know, we had labellers in India, Africa, the United States, all over the world, and you have, you know, different opinions about what explicit means, or even just, like, what a cat is. Right? Like, does a hairless cat count as a cat? All of these, like, preconceived notions that are sort of human, programmed into the language. But when they show up in the math, it's really jarring, and you can see that firsthand. So that was, you know, working on certain kinds of security applications, working on content moderation. Because we didn't focus on any particular application of AI computer vision, we kind of got to see it all, and it was like a crash course. And the company was working on a military project which has since become very notorious, called Project Maven. At first we didn't really know what it was about or that it was military. Then those details started to leak out, and then the Google walkout happened. And all of a sudden the whole industry kind of snapped together and said, like, wait a minute, maybe this stuff is bad. And the funny part is there have been so many people who were talking about this well in advance of my awakening, right? I mean, there are people writing books on the topic. And so we came to the table very, very late. But once you kind of have your eyes opened about these kinds of things, it's hard to go back into that box.

And so when I wrote an open letter to the CEO and asked him not to build a certain kind of weapon, this really scary thing that computer vision makes possible, a lethal autonomous weapons system, or, like, a killer robot drone, as you might call it. You know, it was disappointing to hear that he wouldn't refuse to work on these things, that he, you know, maybe kind of thought that the technology was going to save lives in some ways. And the short end of the answer, which I've taken way too long to respond to, by the way, is that it truly radicalized me. It made me become very political. And since then, I've been working to make sure that some of these really terrible and scary uses of AI are never allowed to come to light. And that's how I got involved with the Campaign to Stop Killer Robots at the U.N. level. And I also met Albert Fox Cahn from the Surveillance Technology Oversight Project, where we work on local politics here in New York City and New York State to try to ban facial recognition, among other things. Right. Because that's a very common way that people connect with the scariness of surveillance, but it's hardly the scariest of the proposed technology that's out there. So basically, it's crazy to think back on these last couple of years since all this went down, but I have been working to try to make AI a little bit better, a little bit safer for everybody, and then outlaw the stuff that's really, really bad.

To help frame this conversation, one of the reasons why we wanted to chat with you is because you are an expert on surveillance, and we're curious. So we are recording this on September 25th, 2020. As of today, what should people know about surveillance? We've had various people come on the show and talk at different times, but especially in the past, say, six months, there's just been, you know, news article after news article about different companies coming out in terms of taking stances. And our listenership is, you know, sometimes unclear about whether those are real stances or whether it's just, like, people saying what they should say. So right now, as briefly as makes sense, and I'm sure we could talk about this for a long time, what should people know about surveillance right now and surveillance tech?

It's such an important question. It's such an important question right now, especially as we head into this election, as we suffer through all of this national unrest because of the systematic racism that's causing these disproportionate killings of black people, the white supremacy that threatens our entire democracy. These are real threats, right? I mean, there's no sugarcoating that. There are things to be afraid of in our world right now. And the scary part of it is that the government's single solution for these things tends to be more surveillance, right? Like, more monitoring. We see this every time there's a massive crisis. After 9/11, more monitoring, more surveillance. With the boogaloo boys, more surveillance. And then now in the United States, George Floyd's murder and all the protests that happened as a result of that. You know, you have Predator drones flying over the United States, over our sovereign soil, to monitor, I guess they would like to say, violence. But it's not, right?

It's really just the expression of dissent and protest of things that are part of our society that have been this way forever. It's not new, right? But I think what people often miss about surveillance is that, you know, it's very seldom exclusively coming from the government. It's usually a blend of private companies and corporate surveillance that enables this incredible amount of access to information about every single person in this country and abroad, and that is powered by the consumer tech that we love to use every day.

And so that's a big part of why I do what I do, because we shouldn't all have to walk around with our phones in airplane mode. We should be able to have fun, nice things and technology that allows us to connect to each other, for instance, like social media, without fear that it's being scraped and used to compile a database of faces that's then sold to law enforcement. And the laws that we have in the United States are simply insufficient to protect our civil liberties against corporate surveillance and the transmission of that data from, say, Google, for instance, or Microsoft or whoever, into the hands of the FBI or of the local police. Even the Fourth Amendment, which is supposedly a stalwart against this kind of thing. Through this little-known thing called third-party consent, they can get access to it. You give away your right to this kind of privacy whenever you click on a user terms agreement. So what I want to be able to do is to tease those two things apart, to have our private world, our consumer technology that's doing amazing things, keeping families in touch with each other. You know, who doesn't love cat photos? We all love them dearly. But to keep that separate from what is happening from the law enforcement perspective, and to force a division between this, like, civil and military blending that we're seeing increasingly every day. So that's a very long-winded way of saying, like, things are very intense right now and most people don't realize how it touches their lives every moment of every day. Everything you say or type into your phone could be in a database.

One of the, I think, questions behind the question that I just asked is that I think there was a moment of hope maybe in June or July, where there were protests, where there was this public dissent, and then it seemed like there was some set of actions being taken, especially by some of the corporate giants like Microsoft, Amazon, etc. And I guess I'm curious, are things getting better?

Like, was that hope misplaced, or is there something that's actually moving here that might be making things maybe better for the future?

Yeah, I'm going to take a second, because I just want to couch this in a bit of a hedge, just to say, you know, like, I don't want to be a buzzkill, but I do want to talk truthfully about these issues. I mean, yeah, it's great. IBM was the first one to come out and say, like, we won't sell facial recognition to law enforcement. That's awesome. They put pressure on several other companies to do that, too. It was Microsoft, I think, who was the other one? There was another one that agreed, I don't know, some other big company off the top of my head. But, you know, I think it's also telling. You have to look at this in the context of what's going on globally. Where is the facial recognition market? Oh, Amazon was the other one. And, you know, I've been studying this stuff for a couple of years, and I don't know. I mean, IBM's practice is not huge, right? Like, they were never big on selling law enforcement facial recognition. Microsoft, definitely not. I think that was kind of the first time that Microsoft leadership bowed to or gave in to employee activism, namely around Seattle, because that's where a lot of them lived. There was a relationship there between local law enforcement and Microsoft, and they stopped. I mean, that's incredible. We're exerting our power over these huge conglomerates that have all of the money and all the power in the world.

Any victory is a big one, and we should be proud of ourselves for that. But, you know, when I look at Microsoft in particular, Microsoft powers one of the most interconnected surveillance networks in the whole country, called the Domain Awareness System, and the Domain Awareness System covers the lower half of Manhattan. It takes in inputs from all kinds of different sensors, not just cameras. And maybe it's not directly facial recognition, but it certainly allows for the collaboration of facial recognition with the other kinds of sensory input types that Microsoft sort of aggregates in this command center that local police have access to. Interestingly, this is not just a New York thing. Since developing it in concert with the NYPD (they collaborated on this system together), they've also resold the system. NYPD gets a cut of the profit every time they sell it somewhere. And we don't even know where else this is. This is something where it's really hard to use the Freedom of Information Act, which is our main tool for discovering how these tools are used and where and when and how, because we just don't know who to ask for it. Like, it's a Microsoft product. We can't FOIA Microsoft. But there have been two locations that were reported, and that was Brazil and Singapore. Not the greatest human rights track records; Singapore, you know, is pretty authoritarian.

And so it's curious to me, you know, we don't hear about this very often, but to me, a lot of it was just a trade-off of this small concession, you know, small financially, but huge for PR. And that's when you get, you know, like, people in my family, my mom sends me a bunch of texts like, hey, you should check out Brad Smith, he's super ethical, Microsoft is doing all this incredible work, you should be happy. But the reality of it is it wasn't even a dent in the kinds of facial recognition software that's being used out there. I would say the exception is Amazon. Amazon did agree to stop the sale of their facial recognition product to law enforcement, and they are selling a lot of facial recognition to law enforcement. Probably not a huge moneymaker for them, because it's pretty expensive, but, like, those relationships are important for lots of other reasons. They're very much looking at themselves as, like, a pro law enforcement security type company, as we see with the Ring doorbell cameras and these new floating indoor drones they're talking about selling. That's brand new. But as far as the companies that made this decision, they actually do stand to lose something, and they caved to pressure anyway. And that is a win. That is a huge win. Amazon didn't agree to stop selling it forever.

They just put a moratorium on it for a year, which is telling, right? It's informative. But, you know, it was a chink in the armor. It was absolutely a first step towards broader progress.

And I think that's what's so exciting about facial recognition for me, is that it really connects with people in a way that you can visualize. You can see that it's your face, and that you can't change your face, and you can't hide from it without extensive plastic surgery. And so people are very invested in curbing this threat. So that's what's most exciting to me about all of it.

Yeah, it sounds like there's definitely hopefully some hope for the future. But what I'm wondering right now is, taking the lens off these large tech companies and instead shifting it towards the individual. Ever since all of the events that happened in June, in the wake of George Floyd's murder, and all of these companies coming out with these big statements, there's been a lot of fear amongst general society about surveillance technology and surveillance in general. And so I'm wondering what we should actually be worried about, what is fear mongering and what is valid to actually be worried about. Like, for example, should we be worried about our Amazon purchases and how that is perpetuating surveillance? Should we be worried about our phone calls being surveilled, or WhatsApp messages, or Facebook photos? What is it that we should actually fear, and what is OK?

Hmm. Oh, man, I really wish I had a solid answer for you there. The unfortunate truth is that there's just so much that we don't know, and even the things that they disclose could be parallel construction.

Right. Like, so you'll have some insignificant detail in a photo, like a tattoo or an article of clothing. And, yeah, maybe you could spend the time and do good investigative practices to find out that they purchased that face mask, for instance, on Etsy or something like that. And I've seen that happen.

So, yeah, your purchases, your online purchases, whenever you leave a review for something, that could absolutely be evidence against you. If you, you know, as some have in recent months, lobbed a Molotov at a cop car or something like that. Not that I advocate doing that. But even just if you're in the wrong place at the wrong time, right, and you're in a photo next to somebody who's doing those kinds of things. Or you're in the wrong place at the wrong time and you have nothing to do with that, but just your GPS location on your phone puts you in the vicinity of where a crime has happened within a certain range of time, and that's subject to a geolocation reverse warrant. And that's a thing that happens as well, where cops go to Google and they say, tell me everybody who was in this location between 3:00 and 4:00 p.m. on October 34th. That's not a date, you know what I mean. But yeah, these tiny, tiny artifacts absolutely end up being, you know, linchpins in the search for suspects.

And, you know, you may think if I'm not doing anything wrong, then I have nothing to worry about. But there's tons of room for error, because the technology is not perfect, and it's being sold as if it is. And those kinds of advertising claims are very dangerous. But, yeah, absolutely everything from your social media photos, all of that gets scraped from the Internet into a database, unclear by whom, where they use facial recognition on it. So this is just not to freak you all out. I'm sure you all know this is happening. But there is a way out.

There is a very clear way out. And it's just to limit the amount of power that the state has.

The balance of power between us and the state has tipped dramatically in the last few years, just due to all the new technologies that are out there and how quickly they're improving. The laws have not kept up. And that's what we really need to address in the short term: fixing them so that we can have social media, so that we can have our cat photos and share them.

And to do it in a way that we know that we're not putting ourselves at risk, or if we share photos of our friends and family online that we're not putting them at risk either.

You mentioned something that I think is actually a big part of this argument here, and it's the phrase, if I have nothing to hide, I shouldn't worry. There's, you know, different ways to phrase it. But I hear this a lot, especially when it comes to the Fourth Amendment in the United States, talking about unwarranted searches and seizures and people fearing the police coming in without probable cause. But then there's this argument that, well, you know, even if they didn't have probable cause, as long as I'm being a good citizen, it shouldn't matter, right? So the only ones who should be worrying are those who are doing something wrong in the first place. And this is interesting when it comes to surveillance, right? Because then we have to ask, well, OK, so should I only post things on the Internet or search for things on Google that I am OK with the police knowing? Should I always be fearing that I am under surveillance? Or should I actually, you know, treat the digital world as the physical world and trust that I might still have my own sense of privacy and that the government won't be intervening at any given time? What's your take on this?

Yeah, I mean, it's such a hard question. It's such a hard question, because the reality of it is that AI allows you to take an insane amount of information and parse it and sort it in a way that makes it consumable to a human being. So where you might think that, like, the one photo of you at a protest is safe for you to put online, it's not, because they'll be able to search, or they'll be able to use computer vision to filter down, or, let's say, God forbid you use a hashtag, you know, like George Floyd murderers, Breonna Taylor's cops are free. You know, all of these things...

They're searchable. They are absolutely searchable. So, yeah, it just taints the way that you think about using your own social media, right?

Like, my own relationship with social media has changed so dramatically since I've become an activist in the last couple of years. And, you know, I use only encrypted chat. You know, I don't use hashtags. I'm very careful about what I put online. But, you know, the whole I'm-not-doing-anything-wrong thing, it just completely ignores the fact that this technology fails a lot, that it messes up and that it's not perfect. And the unfortunate thing is that the way that it fails is usually disproportionately, you know, harming black people, and not just black people, but people who are visually minorities in any way. Disabled people are another perfect example. There's just not enough research on how this technology works on faces that are, you know, not white, not the norm. So I wish that I could say that this online realm is as safe as you are, you know, walking on the street in obscurity. But it's just simply not.

It's interesting what you just said about how your own use of technology and social media has shifted since you became an activist in the space. And I don't know if you saw the new Netflix documentary, The Social Dilemma, which there's a lot about in the zeitgeist right now. So we're not going to take any sides on that movie.

But it was interesting hearing the different folks who were interviewed for that movie talk about how they don't have a social network platform, how they have distanced themselves from it. I used to work in a hospital, and a lot of the physicians I would talk to, and especially surgeons, were like, I want to be on a do-not-resuscitate order. Like, just don't resuscitate me is basically what they were telling me, because they know from the other side how the sausage gets made, essentially. Right? They know what goes into actually resuscitating someone, the breaking of the ribs and things like that.

And it's interesting, I think, how our relationships change to this. But I think my question actually is more around what you had just finished saying, around the racial element to all of this. And I think we're still hearing it, maybe more now than we were three months ago, that, oh no, surveillance technology actually impacts everyone equally. Right? Like, surveillance technology, even when it's working, is, like, surveilling everyone equally. What would you say to folks who are claiming there's an equality in how surveillance technology is being used?

Yeah, I mean, I would say that it's just not that way in practice. So you're imagining this, like, one hundred percent complete surveillance panopticon where, no matter who you are, there's people on the other side of it with headphones listening in. And that's kind of, like, a visual way for us to imagine that this is the case. But the reality of it is that you, as a law enforcement officer, can pick and choose where to deploy your tools, or where to look, where to point the lens. And that's figuratively and literally. You have cameras that roll around in New York City especially. You know, they're stationary, but you can put them wherever you want. And that's exactly what cops use for facial recognition; they use those photos as scene photos. Interestingly, a lot of them are not high enough quality photography even just to feed effectively through an algorithm, and so you get all these really poor matches, resulting in someone getting accused of a crime. And where do you think those cameras go? You know, the research shows time and time again that the cameras are disproportionately deployed in areas of gentrification, where you have a lot of people of one nationality background moving into the area of another. And these neighborhoods are usually minority, especially in New York City. We have, you know, a rich history of discriminating against black people, and also, especially after 9/11, Muslim people. And so, you know, it's multiple-fold when you have police departments deciding, you know, based off of instinct, where to send patrol cars.

And then that becomes data, arrest records, that then gets trained into an algorithm that says send more cop cars to these areas because you'll find more crime. The reality of it is that there's probably crime everywhere. But because somebody decided we're more likely to get our quota filled here, in a certain area, in certain blocks, you know, then those are where the cops go. And so it's a self-perpetuating system that just guarantees that the most vulnerable people in our culture are the most at risk.

And so for people who say, you know, I'm not at risk if I'm not doing anything wrong: you're probably a white person if you're saying that. Because even if you're not doing anything wrong, as a black person walking past a Ring doorbell camera, that could be used. Maybe the person on the other side of that camera is suspicious of you. Maybe they had a package stolen from in front of their house yesterday or that same day. And then they send that photo to a police officer, which they can do automatically through those relationships that Amazon has with police. And then they use facial recognition on it, which Amazon also supplies to the police. And then that person gets arrested, right? Or somebody else gets arrested who looks like that person, even though they didn't even do anything wrong. So it's a systemic problem that starts with the behaviors of people and the way that we are trained, the way that we were raised, and our own natural human fears and suspicions. But it also is just made infinitely worse and codified into a system that people can point to and say, look, this machine is the one that made this decision, not me as a human. I'm not racist. Machines can't be racist. So there's no race involved here. When we know actually the opposite is true: machines simply reflect the racism that we programmed into them, and in doing so are themselves racist.

When we talk about these systemic issues of oppression, it's really easy to feel hopeless in this space. But one of the things that both Dylan and I love about your work, Liz, is that you are the technology director for the Surveillance Technology Oversight Project, also known as STOP, which seems to be looking to remedy some of these concerns and to really help with these issues. Could you tell us a bit about that project?

Absolutely. So STOP is about a year and a half old. Albert Fox Cahn, the executive director, previously worked at CAIR, the Muslim advocacy group, here in New York and nationally. And, you know, we've sort of formed around this notion of police accountability. And so the NYPD is the largest police department in the country, and it's long been a big black box and a testing ground for what kinds of technology are being used. You know, in the wake of 9/11, there's nowhere else where you're going to find a more supportive public, a public that thinks that, you know, we do need this stuff, right? Because there is a legitimate concern for safety.

But the more information that we collect through the Freedom of Information Act and whistleblowers and things like that, the more we can see that they're buying technology and they're using it in a way that is very experimental, and they almost don't know what they have and what to do with it. A couple of examples. You know, they tried to use real-time facial recognition on cameras on a bridge in New York City, and come to find out that it was effective exactly zero percent of the time in trying to identify people through moving vehicles and, you know, glass windshields.

And so it turns out the technology is not quite there yet, as you might imagine. And what happened to that data?

You know, where did it go? Is it being used? Is it being archived for a future date where it could potentially be searched, you know, against crimes in the past, right? Or is it being sold to private entities that are then maybe even creating open source data sets to share around the world? And I'm saying these things like they're all worst-case scenarios, but these are all things that have happened. I mean, there was a partnership between IBM and New York City where they gave away a bunch of video data for free, and IBM came out incredibly, you know, on top of that. So the first law that we really focused on as an organization was the POST Act, a police transparency law that we finally got passed, you know, this year, after three years of trying to press. And not just myself, but many, many organizations in New York. Rashida Richardson and the ACLU and AI Now and, like, a bunch of other organizations, the Brennan Center, have been working on this forever. And so we finally were able to almost unanimously push it through city council.

But the NYPD hates it, right? They absolutely despise this law. But it's very simple. It's, like, one of the weaker police accountability laws in the country. Just to say, tell us what you're using, you know, let us know, and you have to create a policy for what you're going to do with the data. And that's really all that we've asked for so far. You know, now we kind of want to turn and start banning these things, and there are a couple of things we're working on there. You know, it's a first step, and it wouldn't have been possible without all of the incredible support from people protesting all over the country in support of racial equality. So, you know, there is absolutely hope. And I think the fact that people have become so awakened to this issue and are writing city council and voting and voting with their dollars, all of these things are coming together. It's a real reckoning for these practices. And it's a really exciting moment.

Who's responsible for regulating surveillance technology, which is a very loaded question.

But one of the things that's been really interesting to trace is this dynamic that you've very eloquently described here, between the massive corporations out there with the technology, and then also the government, which, trying not to be partisan here, is making certain decisions.

Right. And empowering certain groups over others, certain technologies over others, giving resources to certain places and not others.

And I think there's just a big question out there about who needs to take responsibility for this technology and the development of this technology, and maybe who even has the power to say no. Because as much as I'd like to believe that any of us can use the Freedom of Information Act to get information or to combat some of the surveillance tech, I think there are some very intentional systemic things going on here that disempower the everyday person. And so I'm just wondering, like, who has the power to change things, and then how do, like, normal everyday people...

How do we use the little power that we have to change some of these systems?

I love it. I love this question so much, because it really could go a million different directions, and there's so many different ways that we can stop this. We absolutely can. We're seeing so much progress right now. I'll probably start with the most exciting piece, which I think is local politics, which is generally, you know, never going to result in, like, a huge revolutionary nationwide change.

Actually, let me take that back.

It can result in a national change, because you have cities like San Francisco outright banning facial recognition, and even more recently, you know, Portland, with one of the strongest bans in the country, not just for law enforcement but for private use as well. So restaurants, retail, they can't do it either. And so, you know, these are places where your voice matters more than anywhere else. You know, the first day I became an activist, honestly, when I joined Albert's org, we went to my city council person, who hadn't signed on to the POST Act, simply by virtue of calling them. And I asked them, you know, what do you guys think about the POST Act? And they said, what's the POST Act?

And I knew that we were onto something, you know? And so I scheduled a meeting and we all went.

And by the end of the day, they had signed on as a co-sponsor. But the first part, getting that meeting, was even just me calling up and talking to a staffer. It wasn't on their radar; they have a lot going on. But because I cared, because I live in his neighborhood, like, we'd met at community events and things, he was accountable to me and to his neighborhood. And so we had that power, and we exercised that power, and it worked. And so it's a lot easier to kind of get through to your politicians on a city level, on a town level, on a state level even. And then these laws become the template that spreads across the country. And sometimes that's a good thing and sometimes it's a bad thing. And I'll say, you know, in, for instance, California, the CCPA was modeled very much after the EU law.

And so you can see, like, this is a template. This is a privacy law in the European Union, the GDPR, that, you know, was very similar to what came out of California. And that's not an accident. It's just that these things work. They've gone through committee.

They've been tested by the public reactions that have come of them. And as you can see, good things happen. So that's a good thing. The GDPR is very protective. And then the alternative, though, is where you see lobbying efforts come into this. And again, this is where I see private entities partnering with law enforcement that have a financial gain or opportunity here. And in Washington State, before San Francisco ever banned facial recognition, they were discussing it on the state level.

And there were some really good proposals, backed by the ACLU, backed by academics at the University of Washington. And there was a very favorable legislative body that had a Democratic supermajority, and they had really great public support. But the lawmakers and Microsoft and Amazon lobbyists disappeared into a room and came out, you know, a year later, with a bill regulating facial recognition that Microsoft's policy team actually wrote. And it was weak and it had holes. And it's now spreading to other states as well. And so this is, you know, the way that we see things starting small and then growing. You know, we as a public need to be aware. We need to be involved in city council meetings, and just talking to the people that represent us on the local level to help us push that along. There's so much more I could say about this; there's so many other ways that we can all have an impact here. I mean, the other thing that we can do is just, like, hold big tech companies to account. It's not the wonky dorks like me who are out here fighting that are changing things. Like, you know, I'm just one person. But it's when we all agree that this can't stand, and we go to social media and we complain on Facebook and on Twitter, and we have the hashtags that we come together on and say, defund the police, or, you know, stop selling facial recognition. When they're afraid that they're going to lose their consumer user base?

That's when change happens. And so we all have power here.

It's just a question of feeling actualized enough and feeling empowered enough to do it.

And that's why I always try to focus on the hope rather than just the dire situation, although certainly both are the case.

Earlier on in this interview, you mentioned a point at which you became an activist and you became radicalized in some way. And this is the Radical AI podcast, so you knew this question was coming, probably.

But we're curious what radical means to you in this AI technology space, and whether, and I guess to what degree, you situate your own story and your own work within that definition?

Ooh, good question. I love this, because, you know, the first time I ever really identified with that word or connected with that word was through the Campaign to Stop Killer Robots. It was, shockingly, like, immediately after my story became public knowledge, and I reached out to the campaign and we started working together. You know, they invited me immediately to come to the U.N. with them in Geneva. There was, like, a meeting of the Convention on Conventional Weapons that they wanted to bring me into. And, you know, it doesn't feel radical to me to say, hey, maybe let's not give robots guns. Like, guys, maybe we just shouldn't do that. Seems like a pretty decent, pragmatic plan to me. But, you know, according to the other people in the campaign, these are people who have been doing diplomacy forever and working in NGOs and doing pacifism, and they are the first people to tell me that, like, actually supporting a ban on this, like, weapon that may or may not exist to varying degrees right now is considered radical.

So, OK, I guess I'm radical there. That's one thing. But I also take issue with, like, this notion that caring about vulnerable people is radical. That we should put the poor people and the people who are disadvantaged, for whatever reason, at the forefront of our policy thinking, that shouldn't be radical. It absolutely shouldn't be. That's just common human decency. And so I guess I am radical, but to me it just means, like, you know, kind of an alternative viewpoint. To think outside the box and say, this is the reality that we've been presented, but it doesn't have to be our reality. We do have the power to change that and shape it however we see fit. And for me, that's a reality of caring about people and not taking ridiculous risks out of competitiveness, out of sheer greed. You know, I think that there's a lot of room for us to grow as a society, and to have innovation and to have economic growth, but to do it in a way that doesn't put our entire species at risk.

I guess if that's radical, then, yeah, I guess I'm radical.

For those of our listeners, and just those in the general community, the layman, the technologists and everyone in between: if they are looking to be kind, to make change, and to help with everything that you have mentioned today, what's something that they can do today to start on that path?

Another fantastic question. I would say the first thing to do is just to call your city council member. Letters work, emails work. If you're not comfortable on the phone and have anxiety about phone calls, I'm with you. I get it. It's a real thing.

But with phone calls, you can gum up the works with one phone call, asking for one meeting, and getting even just, like, two or three friends to cooperate and to launch a little mini campaign with your city council. Tell them that you don't want facial recognition. You know, it doesn't go nowhere. It goes into a big database that people are looking at when they're considering, like, how do I get re-elected in my neighborhood? Their jobs are accountable to us. So call city council. Start small. Just write them an email, send them a chat, send them a tweet. Tweets work. It's crazy. It's so low effort, but tweets work. Yeah. And so just, like, find out who that person is. It's not that hard to write to them and tell them that you have strong feelings about facial recognition or other kinds of surveillance. I promise you, if you live anywhere in this country, your police departments have probably at least had that conversation. They've probably at least explored some of the options around it. So if you think, oh, well, I don't live in New York, this is not a big deal, I just live in Wisconsin, or wherever, you know, it's affecting all of us. It's not just a big cities thing. It's a rural towns thing, it's a medium-sized towns thing as well. So tell them no. Tell them you don't want to do that. And they will listen. Believe me, I've been there. They do.

Normally, I'd probably just end by asking you if you have any final thoughts, but I actually have a question that I struggle with in terms of surveillance, especially having lived in New York for ten years and seeing surveillance evolve during that time in terms of technology, and in terms of having a lot of friends, especially black friends, who were arrested or fined massive amounts of money for turnstile jumping and things like that, stop and frisk, et cetera, et cetera. It's: can there be an ethical use of surveillance technology? Like, just the lob to end the interview,

I guess. But really, through this conversation, I've been struggling with, like, how I would answer that question. I don't know if I have an answer for that. So I'm wondering if you do.

Yeah. I mean, this is a really, really hard question for me. Where I really had the reckoning about this, and continued down my path to becoming, like, a very political person, was when COVID broke out and everybody turned immediately to tech and said, can we use our smartphones to solve this crisis? Can we have some sort of a marker that will tell you if you've been within a certain range of somebody who's had it? And, like, there's no use case that's more pertinent, that you can't help but connect with on a personal level. Like, us not knowing where the infections are is what's keeping us indoors right now. It's painful, right? It sucks. And so the debate, you know, started out with me thinking, like, there's got to be a way, there's got to be a way. But the more you learn about it, the less effective you see that it is. These Bluetooth-powered APIs have a lot of limitations, whether by the location, the amount of buildings that are crammed in. You think about how it works in the suburbs, because everybody's got a separate house and there's plenty of distance. But in apartment buildings, low-income housing, where people tend to be closer together, what kinds of signals, what kinds of accuracy will we even see through these walls? Can the signal permeate the walls? And then, even more than that, you know, this is supposed to be a public-private partnership. So this API or whatever that Google and Apple put out gets wrapped into an application that's probably not owned by the state. It's probably some company that's partnering with the state, and they have their own terms. Even if it is just opt-in through Google and Apple, they can use that data and sell it for advertising.

And if a police officer wanted access to that information for any reason, if they have probable cause, they can request it. And so what we're talking about is just creating a new database that contains the location of every human being in the country every moment of the day. That's very dangerous, right? And so it's not a question of, can this exist and can it be used on a one-off ethical basis. Like, sure, we want to be able to prevent crimes from happening. That's common sense. The trouble is that in order to frame it that way, you're ignoring the fact that these systems have to be complete in order to get that data. They have to be very comprehensive, where you have, like, a ton of information collected that's owned by a private company, that's accessible by the state, by law enforcement. And that's just a ton of power, where there's not enough accountability over how it's being used, when it's being used, and who gets to own it and use it for what. So I have had to become, you know, this voice on, like, banning things that seem like they have some degree of utility, but to me, the trade-offs are just too dire. And, you know, maybe it's possible we could have some very, very strong laws. A very good example of that, as Albert from STOP loves to say, is census data. So census data constitutionally is completely illegal to be used for anything else. It lives separately, and the penalties are criminal and very, very large.

That modality, maybe. If we were to say that these kinds of data are completely protected, that social media data is protected, it's a hugely radical ask, like, all of our social media data should just not be accessible by police departments. But I mean, that would be one way for me to feel comfortable about surveillance, you know, where there's a very clear delineation between what you're doing that's public versus what you're doing that's, like, with your friends and in your private life. So I would say, if it exists, if it's possible, I haven't seen it yet in practice, and I look forward to that. I would love to see a paradigm that is ethically acceptable. But the things that we have out there in the world right now are certainly not that. And so we're going to keep trying to outlaw them until they're gone.

Liz, thank you so much for sharing your expertise and your experience and thoughts on all this. If any of our listeners want to either get in touch with you or take a look at some of your work, where's the best place for them to go?

Probably Twitter. I do a lot of yelling at people on Twitter, at @lizjosullivan, or you can find us at @STOPSpyingNY.

Great. Thank you so much for coming on the show, Liz. It's been a pleasure. Oh, thanks for having me. It was great fun.

We want to thank Liz O'Sullivan again for joining us today for this wonderful conversation and this check-in on the state of surveillance in our world today. So, Dylan, I'm going to throw the softball over to you. What is your first reaction coming out of this interview?

Oh, I don't know if it's much of a softball. Surveillance is, God, it's such a tough topic. And I just have so much respect for what Liz is doing and what the STOP organization is doing in terms of explaining and combating what to me still feels like a really amorphous enemy. I say that because I came out of the conversation really appreciative of how much hope Liz was putting into it.

But also, for me, with a real feeling of dread, because I do not trust the current systems that we're in to self-regulate surveillance. And again, just speaking for myself, not speaking for Radical AI or anyone else, but especially after hearing some of the examples that Liz gave and some of the updates about how this technology is being used, I don't feel like I have a lot of individual agency in this conversation. And I don't necessarily trust the local or federal government to protect me from increasing surveillance or increasing surveillance technology. But one of the things that I really appreciated about this conversation was actually at the end, when we started talking about, well, could there be any sort of good surveillance technology? Can it be useful? Is it possible? And for me, it seems like the answer could be yes, if we can trust our systems.

But right now, these systems are not giving me a lot to trust. Just saying. Because I feel like, after talking with Liz, I actually had definitely a little bit more fear and anxiety about being surveilled by the government in general, but also a little bit more hope and agency over the fact that we can try to get the government to do less surveillance, and we can actually act as citizens, at least in the United States, to really try to shift the power there. And I think what made me a little bit more fearful over surveillance, not in terms of the government, was actually talking about industry surveillance. And so this is something that has been really interesting to me ever since I took that one computer ethics class that I infamously always quote from my undergrad degree, and it's something that I think about every single time I'm using one of my devices. If I'm Googling something, I am wondering, is Google going to be using this search result for something? Could this be used against me? Even if I'm Googling, like, a flight. Like, it just follows me on every single website I go to. Or if I'm downloading an app, or if I'm messaging somebody, it's something that's kind of in the back of my head. And maybe that's because I'm in this tech ethics and responsible tech space, and I know there's a lot of people who don't think about that stuff, because they don't really know the extent of how much we can be surveilled, whether it's through, you know, government surveillance or just surveillance capitalism, as was, you know, recently talked about a lot in The Social Dilemma documentary. And honestly, when it comes to surveillance capitalism, if a listener doesn't know what that means, it's kind of just the idea that all of the things that we do online are being tracked and analyzed in order to sell things to us and promote things to us and grab our attention for longer, for monetary gain.

And that makes me very annoyed. It makes me feel like I need to act differently on the Internet. It makes me feel like I am not using the Internet as a tool for myself, but that it's actually being used as a way to exploit my actions for the monetary gain of large organizations that maybe I don't even want to support in the first place. And it makes me feel powerless, like it's not my fault, but there's nothing I can really do about it. And so if you're also feeling this way, listener, you're not alone, because I think a lot of people, especially right now, are really having a reckoning about surveillance in our everyday lives and with our everyday technologies. Which is also why I really appreciated that Liz had a little bit of a call, actually quite a bit of a call, to action for all of us: that it doesn't have to be this way. Not even just in terms of the government, and calling the people who are part of your town hall and your local representatives, but also in terms of large tech companies. Like, there are ways to hold these organizations accountable, and maybe even more creative ways than there are with government, because we can deal with less bureaucratic red tape. And so that gives me a little bit of hope, even in this feeling of annoyance that I'm sitting in.

Yeah, I don't want to be a downer, nor do I want to be...

You're not a downer. Don't fear. You're a pragmatist.

Nor do I want to be fear-mongering, because one of the things we talk about a lot on the show is that we don't want to just break it down into this utopia versus dystopia worldview or anything like that.

And when it comes to surveillance, I just go back to being an undergrad my freshman year and, you know, reading Foucault and reading about the panopticon, and this, like, future of cameras everywhere, or this idea that the government is going to be watching constantly, or that, you know, technology is going to be used for these nefarious purposes. You know, 1984, that kind of mentality.

And obviously, Orwell and co. were doing very different things, but I think it's becoming real. And I see this stuff and I'm like, wait a minute, wait a minute. Sci-fi wasn't all that off in what it was guessing here.

And just, I know you do a lot of work with, like, speculative ethics and things like that, and also bringing science fiction into this. Like, does it not feel to you like some of these science fiction fears of even, like, the 1940s or 1930s are becoming real to a certain degree right now?

Oh, totally, yeah. I mean, like, Big Brother is Mark Zuckerberg, or, like, Jeff Bezos, right? And maybe it's almost scarier, because with Big Brother, at least it was clear that it was something that was, like, taking over the entirety of the world, and it was something that impacted everyone's everyday lives and functioning. It was very clear that there was, like, a huge drastic shift in society, based off the way that Orwell wrote 1984. And maybe the subtle changes that we see in our everyday lives, in the way that we're surveilled and manipulated online, are just so almost, like, secretive that it's a little bit scarier, because it makes us feel like we don't even know that it's happening sometimes. And yeah, I think that to me is almost more dystopic and sci-fi. But ironically, it's, like, real. Like, that's what's happening in our everyday lives.

Yeah. I have a question for you before we bring this episode to a close. One thing I thought a lot about in reflecting on this interview was Liz's term of being an activist. And I'm curious, do you consider yourself an activist or do you consider us activists?

Oh, what a good question. You know, if you'd asked me this a few months ago, I probably would have said no, because I think we didn't even really know what we were getting ourselves into. More recently, I think, yeah, I think so, because I think this platform is a form of activism in itself, maybe in a new and different way than people typically think of activism and radicalism. But the name in itself kind of explains it, right? We're Radical AI and we're here to... activize? Is that a word? Activism, maybe.

Yeah, sure. But I like activism. Do activism. No, I think the jury's out on that.

I think we're definitely making comments on, like, the political structure of the world. And that's something that I just kept coming back to as well: is surveillance always political? And I think it is, if we define political as, like, power in action in the world between groups, and between groups and individuals.

And so I think in that way we are activists, because we do have a set of values, and we're saying, no, actually, government surveillance without any accountability, without any explainability, that's not good, that's bad, and it is going to cause harm. And so I think insofar as we're making comments on that, then I think we are activists too, to a certain degree, maybe to a large degree. But we would love to hear from you as well, dear listener. Do you consider yourself an activist, and how do you define that?

For more information on today's show, please visit the episode page at radicalai.org. And if you enjoyed this episode, we invite you to subscribe, rate, and review the show on iTunes or your favorite podcatcher. Join our conversation on Twitter at @radicalaipod. And as always, stay radical.
