Robot Regulation: What Is It and Why Does It Matter? with Ryan Calo



What is robot regulation and why does it matter?

To answer this question we welcome to the show Ryan Calo.

Ryan is a professor at the University of Washington School of Law. He is a faculty co-director of the University of Washington Tech Policy Lab, a unique, interdisciplinary research unit that spans the School of Law, Information School, and Paul G. Allen School of Computer Science and Engineering. Ryan’s research broadly encompasses law and emerging technology.

Follow Ryan Calo on Twitter @rcalo

If you enjoy this episode, please make sure to subscribe, submit a rating and review, and connect with us on Twitter at @radicalaipod.



Transcript


Welcome to Radical A.I., a podcast about technology, power, society, and what it means to be human in the age of information. We are your hosts, Dylan and Jess.

And in this episode, we explore what is robot regulation and why does it matter?

Today we interview Ryan Calo, a professor at the University of Washington School of Law. He is a faculty co-director of the University of Washington's Tech Policy Lab, which is a unique interdisciplinary research unit that spans the School of Law, Information School, and Paul G. Allen School of Computer Science and Engineering. Ryan's research broadly encompasses law and emerging technology.

Ryan is one of the world's foremost experts on robot regulation, and we've even been told that he is one of the people who has been credited with making robot regulation and robot law a field in the first place.

It is our pleasure to share this interview with Ryan Calo with all of you.

We are on the line with Ryan and today, as you know, we are talking about robot regulation and why it matters. So my first question to you is, what is robot regulation and why does it matter?

Well, let me take that in reverse order. So, first of all, robots are today, and have been for some time, a very important, I would say transformative, technology, right? I mean, we have remade the military using robotics, we have remade manufacturing. And when I say we, I mean the United States; other industrialized nations have made enormous use of robotics as well. And the thought is that robots are leaving the theater of war and the factory and the space context and other places where they've been very prevalent. They're on our streets now, and they're in the skies, and so on. So it's just a really important set of technologies. And if robotics is as transformative as both its proponents and detractors say, then one of the things that it's going to transform, one of the things that's going to change, is going to be law and legal institutions. Otherwise, it's not transformative. I mean, if something has almost no effect on laws or legal institutions, I would argue that it is not a transformative force.

So what is that effect that it's having on legal institutions? Why is robotics so difficult to govern?

Well, let me just make the usual caveat that robotics is not a single thing, right? I mean, I often define artificial intelligence as a set of techniques aimed at approximating some aspect of human or animal cognition using machines. And that is a pretty high level of generality that includes the sort of 1950s ideas about symbolic logic all the way to contemporary classifiers and detectors that leverage deep learning. Similarly with robotics: I tend to think of robotic technology as technology that can sense the world, process what it senses, and then act upon the world. And I mostly use that definition, which is largely out of computer science; depending on your discipline, people define robots differently, and we can get into that if you want to. But I use the computer science definition, which is itself based on the sense-think-act paradigm, an old psychological action theory. And I use it largely to distinguish robots from a lot of other technology. So, for example, we're talking right now on laptops. Laptops can sense; that's why we can hear one another. They have processing power. But laptops generally don't act upon the world, especially not in a physical way. Right. Conversely, my kids have remote control cars that have cameras on them. They have actuators. They act upon the world, they drive around the world, they navigate the world, they sense. But they have no real processing power; they're not robots in the sense that they don't process. And so, though each of these things can be on a continuum (a Mars rover maybe has less onboard processing power than another robot, but it has some), I think all three things need to be there. And so when you have a technology that can sense and think and act, that provides a certain set of challenges for the law that I believe are recurrent. But in addition, there are specific instantiations of robots, drones, for example, land-based delivery robots, driverless cars, surgical robots, that merit context-dependent legal interventions.
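To make the sense-process-act definition above concrete, here is a minimal sketch of the loop it describes. Every name in it (Robot, read_sensors, plan, actuate) is an illustrative placeholder, not a real robotics API or anything from the interview.

```python
# A minimal sketch of the sense-process-act loop described above.
# All names and values here are illustrative placeholders.

class Robot:
    def read_sensors(self):
        """Sense: gather raw observations of the world (camera, lidar, ...)."""
        return {"obstacle_distance_m": 1.2}

    def plan(self, observation):
        """Process: turn observations into a decision using onboard computation."""
        return "stop" if observation["obstacle_distance_m"] < 2.0 else "forward"

    def actuate(self, command):
        """Act: change the physical world (motors, wheels, grippers, ...)."""
        print(f"executing: {command}")

# A laptop senses and processes but does not act; a camera-equipped RC car
# senses and acts but does not process. A robot, on this definition, closes
# the full loop:
robot = Robot()
for _ in range(3):  # a real robot would loop continuously
    robot.actuate(robot.plan(robot.read_sensors()))
```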

Yes, and as I think about this, I almost wonder what the options are for regulation, and how I would regulate a drone differently than I would regulate a Roomba, et cetera. But I'm wondering, before we get into how we maybe should think about some of these things, or some of the options, can we talk about the history of robot regulation?

Has there been like a history at either like a local, national or international level in the law of regulating robots?

I mean, yes, and it sort of depends on how far back you want to go in terms of history.

So very quick primer, and feel free to give me the hook on this if I go too far. When we talk about regulation becoming law, there are multiple sources of law. There's international law, which is largely a function of treaties and other well-established, sort of accepted human rights. There is domestic law, which can be legislative in nature or common law; it can be regulatory in nature, coming from a federal agency; and it can also develop at the state level and the local level. Right. So there are all kinds of possibilities for the law to get involved. And that's just hard law. There's also a whole world of so-called soft law. I think of the leaders in studying soft law as the folks at ASU; they're really into soft law. And that's the idea of things like standards, ethical bodies, anything that's not the kind of command and control or the equivalent. It's not hard law in the sense that it doesn't set legally firm boundaries. And so when you talk about the history of robots, there are a bunch of robot-specific laws out there. I mean, I could rattle a bunch off. For example, there are laws that say you can't use a drone to interfere with hunting in some southern states. There is a law in Nevada that is specifically about driverless cars and what kinds of things you have to do in order to test driverless cars in that state. Right. There are laws governing, you know, the approval of this or that robot for a particular purpose. All these laws out there have a history. What I've found so fascinating is the perennial sense that robots are always in the future. Robots have been in the future for hundreds of years; it's a very funny, unique thing about them.

Like, you know, they always are heralds of the future, and they always have been. And so when I went back and looked at whether or not there was common law, judge-made law, having to do with robots, I found some absolutely fascinating examples dating back at least to the 1950s, examples where courts struggled with how to characterize a robot, how to categorize a robot, basically, and how to fit it into pre-existing legal categories. For example, there's a case from the 1950s about trying to figure out what tariff should attach to robot toys that are being imported from Japan. OK, everyone's seen those robots, all these cool old robots, you know what I mean, with the little lights in them. They're getting imported from Japan, and it's unclear how to tax them. It's unclear how to tax them because, at the time, the tariff schedule makes a distinction between dolls and other kinds of toys. Why? Because of historic trade-related reasons with Europe. OK, so how do we know whether something is a doll versus another kind of toy? Well, a doll represents something animate; that was the test at the time. OK. And so the regulators are like, OK, well, we're going to tax this particular way because it's mechanical, it doesn't represent something animate. Right. And the importers are like, no, no, no, no, we want to be taxed this other way, because of course it does. And so there's this fight, and the court has to come up with a decision about, like, does a robot represent something animate?

And the court decides, interestingly, that robots do represent something animate because they're mechanical people, but a toy robot doesn't because a toy robot represents a robot.

OK, so these are the kinds of struggles that courts have. This idea of whether or not a robot was a doll came up so often that eventually they had to amend the tariff schedule and add robot toys as a category, like it was in parentheses, "robots," to clarify this. Another example: following this case about the straightforward robot, there was another case where people were importing a famous robot that looked like an astronaut, and it had an astronaut's face inside of this thing, but everything else about it was a robot, you know, and it had this internal panel that would open up, and lights would go off and stuff like that. And the people that were importing it were like, look at this: there's a person in here, it's an astronaut, you can't possibly think it's anything other than a doll. And the court was like, no, no, no. Although it has human-like features, it's still a robot. After all, people don't have guns coming out of their chests. And I can just tell you, there is case after case after case involving a difficulty the court had trying to characterize these machines. So robots are not new to the law. Even though they always herald the future, and we can always ask these questions about them, courts and legislatures and so on have encountered them before. I mean, when JFK was first elected, there was a huge movement to try to get him to do a big official workshop around the displacement of labor from automation, which he ultimately didn't do. But there were these op-eds about robots taking your jobs where you could just change the graphic and transpose them to today, and it would be the same thing we saw under the Obama administration, you know, however many years ago.

Yeah, it's interesting, because I feel like all these examples, and different examples that I see in the news, show that robot regulation comes after the robot. So usually it goes robot and then regulation. But it's weird, because, like you are saying, we are constantly thinking about robots being in our future. So do you see any cases where the law and the regulation come before the robot? Or is it always the case that the law is trying to play catch-up with the robotics?

I think it just looks like that. I don't think it's really like that, right? And so I think that if we dig deeper on some of these examples, it just looks that way. It has always looked that way to me, but nowadays I'm thinking it's not. So, for example, take drones. We're told that the FAA is so behind, and Congress had to nudge them several times in appropriations acts to get the Federal Aviation Administration to let drones be integrated into domestic skies. And everyone's like, oh, they're not contemplating automation; how are we going to do delivery robots? And then the companies are like, we're going to go to other jurisdictions where the laws are more open, and we're going to take up drone delivery there.

It has been like ten years, and we don't have drone delivery. It's not because of the law. I mean, it's because drone delivery is super hard; robots are super hard, you know what I mean? It's really, really difficult to make drone delivery efficient, because you have to deal with the energy ratio.

The more weight you want to put on that drone, the more it adds to the necessity of having a battery or another form of energy, and that's hard to do. It also has to do with the difficulty of navigating autonomously, because if you're really going to take a drone and have somebody manually fly it around, you might as well put somebody on a bicycle to deliver that same package, if every single one has to have an operator, and so on. And the jurisdictions that have been experimental, the places where we've relaxed requirements, both in the United States as an experiment and in other jurisdictions, they don't have it either. Same with driverless cars. How long have we had driverless car legislation, for example, in Nevada? It's been a long time, right? And they said, you can test it here, here are the parameters, and that kind of thing. Other places have even fewer restrictions, and so on. We don't have widespread driverless cars. Why? Because it's super hard. Because robots are hard. And so I just don't believe this narrative that the law is always catching up to robots. I think the law is like, OK, go ahead. And the robots are like, yeah, well, thank you very much, and they just can't do it, because it's super hard.
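As a rough back-of-the-envelope illustration of the energy ratio mentioned above, consider how payload eats into a battery-powered drone's endurance. The constants in this sketch are made-up but plausible assumptions; none of these numbers come from the episode.

```python
# Back-of-the-envelope sketch of why payload is hard for delivery drones.
# All constants are rough illustrative assumptions, not data from the episode.

BATTERY_WH_PER_KG = 200.0   # assumed specific energy of a lithium battery pack
HOVER_W_PER_KG = 150.0      # assumed electrical power needed per kg of all-up weight

def flight_minutes(airframe_kg, battery_kg, payload_kg):
    """Estimate hover endurance: battery energy divided by hover power."""
    total_kg = airframe_kg + battery_kg + payload_kg
    energy_wh = battery_kg * BATTERY_WH_PER_KG
    power_w = total_kg * HOVER_W_PER_KG
    return 60.0 * energy_wh / power_w

# Adding a 2 kg package cuts endurance sharply, because the battery must
# now lift the payload *and* itself for the whole flight.
print(flight_minutes(airframe_kg=3.0, battery_kg=2.0, payload_kg=0.0))  # ~32 min
print(flight_minutes(airframe_kg=3.0, battery_kg=2.0, payload_kg=2.0))  # ~23 min
```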

And so I think that one of the dangers is to act like whatever robot happens to have gotten there first is the robot. OK, can I swear on this podcast? You can leave it out if it's ever going to be on NPR. So I think that one danger is that whenever a company gets its shit together enough to come up with a particular instantiation of a robot, they go lobby somewhere, usually a state legislature, and tell them, you know, this is how robots are going to work in this context. I think that is a big source of a problem, and it's a big source of a problem because the law is reacting too fast. It's reacting too fast. And I'll give you an example, which is with land-based delivery. You have Starship saying, we have this robot, it's a certain size, it weighs a certain amount, it goes on wheels, it can do this, it can do that, and we want a law that says this is OK. And then, trying to look futuristic, trying to look innovative, the legislature passes these laws saying you can deliver things with robots as long as they have these characteristics.

Right. And then you're Boston Dynamics, and you've built Spot, which doesn't have wheels but can walk up stairs, and it weighs a little more than what you're supposed to have.

You know what I mean? It does something a little bit differently, and all of a sudden you can't even use it in that state unless you go back to the legislature. There was just an example with the FAA recently. Police are buying these flying systems that are tethered to the ground, but they don't fit within the definitions, either state or federal, of unmanned aerial systems, and so they avoid the regulation that would attach to drones, because they're not technically unmanned aerial systems, because they're tethered, is the argument. Again, what matters is the new affordance. What matters is the ability to do something you couldn't do before, or to do something differently than you did before. It's the capacity of robots, and not the contingent configuration of those robots, that matters. And so I think if anything is moving too fast, it's the law, and if there are problems, they're problems with robots doing what they're supposed to do.

I've done a little bit of research in the field of human-robot interaction, and one of the narratives that surprised me in some of that research involves this word that you've mentioned, innovation, being positioned on the other side of regulation, so that regulation is getting in the way of innovation. I'm curious, from your perspective, whether that is a true statement as we've seen it on the ground, or how do you put those two terms in conversation with each other?

OK, there's a lot to say about that. I'm sorry, I'm rambling a little bit, but OK. So first of all, of course, if you want the freedom to just do what you want to do, you're going to say, this is innovation, and any kind of limit on me is going to chill innovation. I mean, of course you're going to say that; who wouldn't, right? And yet the actual picture is much more complicated. So first of all, robotics and artificial intelligence would not be anywhere near where they are (although they're not as far along as some people believe) if not for initial government investment. This innovation, you know where it came from? It came from collecting taxes from people and spending them in a particular way on basic research before there was a commercial incentive to do so. Silicon Valley is built on the military-industrial complex that came before it; there's no question about that. My colleague Margaret O'Mara has a great book about this and the origins, and I'm just looking to see if I can spot the title. But you can also see it in Markoff's Machines of Loving Grace and other places. Right. So the truth is that there's no serious robotics or artificial intelligence application that did not have an origin in NSF or DARPA or something like that. Second of all, there are very well understood ways in which regulation makes innovation possible, and patent is an obvious example. I mean, what is patent? It's a regulation that says you can't copy stuff that other people invented for a certain period of time, right? I'm not an IP lawyer, but basically everybody knows patent is supposed to incentivize innovation; it's regulation that does that. But in addition, when you can't do something a particular way because there's a patent in your way,

the literature suggests that that's often when you get innovation, because you have to accomplish the same thing in a different way. You see what I mean? Because you can't do it the patented way, what other way can you do it?

And then you go, oh my goodness, this way is actually better, cheaper, faster, whatever it happens to be. OK? The other thing, too, is that people are not going to adopt things, especially in this more skeptical environment we are in now, if they don't believe there are basic safeguards around them. People are not going to take vaccines unless they are convinced that there's a regulatory process that makes them safe. People initially did not want to do banking on the Internet because they were deeply afraid, appropriately so, about the prospect of theft and identity theft, and there had to be adequate safeguards put into place. Those turned out to be security standards in that instance, although eventually also security breach notification laws, and that made people feel comfortable. So, yes, there are narrow examples where regulation might be in the way of some innovation in some cases. Right. But it is a very complicated picture. And innovation is deployed rhetorically to get regulation out of the way of companies that really are seeking to do regulatory arbitrage, because what they say is innovation, innovation, innovation, then they wait till everybody has taken it up, and then they use their access to everybody, because they're on their apps, to tell them they should sign a petition

making Uber retroactively lawful. And so it's complicated, and it's related, I think, to your point, Jess, about keeping up with technology, right? That's part of that story. So if you think about David Collingridge, the Collingridge dilemma is the idea that the problem with technology is that you don't know what its societal effects will be until it's already entrenched. And so it's difficult to figure out when to regulate technology, because if you regulate it too early, you're going to chill innovation, and if you regulate it too late, it will be so entrenched that you will have much less wiggle room. And so you have to find the perfect time and place to regulate. People like Gaia Bernstein have written about when the perfect time to intervene is, and so on. But that is only true if putting safeguards in place in advance were neither feasible nor desirable. My own view is that having institutional structures, having safeguards in advance that are written at a sufficient level of generality so that they're not picking winners and losers in the system, can make everything go apace, and that it's a false dichotomy.

Can we actually unpack this Collingridge dilemma? I've never heard of this before, and I think it's really spot on here, because I've heard about a lot of robot regulation now from you, but I've also heard of the other side, where there's a lot of robot harm, and we can speak broadly about A.I. harm in general, due to lack of regulation. So with this dilemma and not knowing when to regulate, when it's too entrenched or not entrenched enough, when do we do regulation? How do we know when it's the right time?

So that is a great question. The Collingridge dilemma comes from a book by David Collingridge called The Social Control of Technology. And it has been, I think, misconstrued a bit, I don't know if willfully or not, but it's been picked up by the sort of techno-libertarians, who have reframed it as a pacing problem and a reason that we need permissionless innovation, right? So let me start with Collingridge's original point, which was that he was yet another anti-determinist. He was a person who was reacting to technological determinism, which says essentially that somehow technology follows a particular path and has determined effects on the world, when, in fact, and this is one of the central insights of science and technology studies, technology has a complex, iterative, dynamic, mutual relationship with other aspects of society. And if that's true, then you don't know what effects the technology will have in advance. And so if you were to intercede too soon, that would be hubris: you would be trying to guess what technology was going to do in the world, and there's no way to do that. However, if you wait too long, then at that point technology will become intermeshed with the world in such a way that your choices are constrained in how you regulate it; people are used to it, they rely upon it, people have invested in it. And so, you know, different scholars have looked at how you optimize when to do it, and they've thought about breaking technology into phases, this phase, that phase, and saying you should do it during phase two or whatever.

But then they get resistance from the anti-determinists, who say, well, that's deterministic; technology doesn't always go through phases like you're saying. You know, the truth is that if you go too strong in either direction, you're left in a bad place. I don't mean literally, if you intervene too early or too late, you're in a bad place; I mean intellectually. Intellectually, if you let yourself think we can't know anything about technology's impacts, that technology is not even a different kind of thing, that it's just like other kinds of social facts, if you find yourself in that heavily anti-determinist, sort of social constructivist idea, then you're going to be paralyzed. You're never going to be able to channel technology, because you can never know anything about it without just observing it. So you wind up with these beautiful sort of case studies. Langdon Winner wrote about this, about opening the black box and finding it empty. You wind up with these beautiful case studies that tell you the intricacies of how technology actually played out in practice, but they're not operationalizable beyond that context, and so you can't use them to channel technology one way or the other. Right. Whereas if intellectually you say to yourself, we need to embrace technology and let it play out, and only in retrospect can we try to minimize the harms that we see later on,

you end up in this techno-libertarian world where, you know, on the Internet hate speech is rampant, misinformation is rampant,

and especially the marginalized are being negatively affected, but we're paralyzed by different kinds of interests who want to preserve immunity for platforms, or their ability to do electronic surveillance, or whatever it happens to be. And we're stuck. All of a sudden, the combination of federal law, the First Amendment, and so on leaves us no room to maneuver. So I think both extremes are wrong. Now, I don't know exactly when the right time to intervene is, but I do know that not all interventions are the same. What you shouldn't do is come in and reify or enshrine or codify the first instantiation of the technology. Rather, you should ask yourself: what sort of affordances does this technology create? I mean, what does this technology permit us to do differently, or that we couldn't do before? In the case of drones, it has to do with the perspective that we're able to take on things. We can see things from above in a way that we couldn't before. Maybe we can even manipulate things that we couldn't reach before. Robotics generally lets us operate in dangerous environments where humans couldn't operate; there are cases involving undersea exploration, for example, and there will be cases involving mining of asteroids and things like that. We can solve problems in ways that would never have occurred to a human, so all of a sudden we're playing Go or chess differently because we've brought these technologies to bear. What are the different affordances, what gaps do they create, and what opportunities do they create? I like to talk about gaps and levers: what gaps do they create in the law, but also what additional levers do they give us to accomplish our goals?

Robotics is interesting to me, especially in reading op-eds, because there's a particular emotional valence of either robots taking our jobs in the economic sense or just robots replacing humanity in some way. Like, I wasn't around when they invented fire, but I don't know if people would have been writing op-eds about how fire was replacing our jobs, even if it was adding a tool. So for you, I guess, coming from a perspective in law, but maybe just in general, as someone who writes and thinks a lot about these things: is robotics different? And if so, why would we or should we regulate robotics in a different way than the Internet or fire or another technology?

Yeah, you know, it's a funny thing about technology: it's whatever's new. I can't remember his name right now, and I wish I could attribute this, I don't like to say things without attribution, but another scholar I read talked about how, notice how we talk about our technology sector, but it does not include cars, you know what I mean, or refrigerators or anything else that is certainly technology. Right. Let alone fire or cooking; chefs are not part of the technology industry. It's because we associate technology with novelty, you know, and in that way, robots are quintessential technology. I mean, they are literally meant to herald novelty. That's pretty much their purpose, and it has been for many, many centuries. Since ancient Arabic times there have been these hydraulic machines in catalogues that are supposed to be, you know, wondrous things, and they predate the birth of Christ in ancient Greece and so on. But robots are different in other ways, too, and those have to do with the fact that they have a social meaning that other technologies don't have.

We seem to be a bit hardwired to react to anthropomorphic technologies like robots as though they were really people. And that is special and different and merits particular consideration, but it's also been the source of interpretive difficulty for the courts, which have struggled with putting robots into a particular place because they exist in some liminal world. I can give you countless examples, but I'll just tell you one of my favorite stories, if I may. This is one of my favorite cases of all time; I love it so much. OK, so in the 1990s in Maryland, there was this Chuck E. Cheese. You guys know Chuck E. Cheese, I imagine. Yeah, but for any listener who doesn't know, Chuck E. Cheese is like a kids' pizza restaurant, and one of its attractions is that it used to have these animatronics, these giant caricature characters: a mouse, this vaguely xenophobic Italian figure or whatever, a bunch of characters. I also want listeners to know that both Jess and Dylan are laughing at my really good jokes, and you just can't hear them because they're on mute. But I can see them, and they're laughing.

Right. It's only because I spent at least five of my birthday parties growing up in the 90s going to Chuck E. Cheese. So I know exactly what you're talking about.

Let the record reflect that this is funny. So anyway, you have these animatronic things. And these enterprising tax authorities went into Chuck E. Cheese, looking for ways to raise money, and they said, you know what, I think we're going to need to charge a performance tax on the food you're serving here, because there's obviously a performance; every 15 or 20 minutes, these animatronic things come alive and they do a whole show. And so a court had to decide whether or not robots could be said to perform, and at issue was a considerable tax on all the food sold within every Chuck E. Cheese in the state. The court sort of hemmed and hawed and looked up definitions and whatever, and decided ultimately that robots could not be said to perform, because performance requires spontaneity, and these robots are incapable of spontaneity. Now, that may have been true of Chuck E. Cheese robots from the 1990s, right, but it certainly is not an adequate description of all robots all the time, and there are many examples I can cite where, at least in appearance, you don't know in advance what they're going to do. But the fight was about that difficulty: are they performing like people, or are they just playing something, like a jukebox? You don't have a situation where people say there's a performance just because music is coming in over a speaker; that did not raise the same question. It was the category problem. And psychologists talk about this. My colleague Peter Kahn, who's a psychologist and human-robot interaction person, and others have written about the potential need for a new ontological category for robots, because they just don't seem to fit existing categories like thing or person; they're just in between.

It's interesting. I imagine part of the difficulty, for example, and let's keep using this example of regulating Chuck E. Cheese robots, is that we don't actually know how to do it because it is so novel. And so I'm wondering, who is making these decisions? Is it the people who specialize in Chuck E. Cheese robots? Is it a lawyer who's particularly fascinated by Chuck E. Cheese? Who is actually making these decisions, and what makes them capable of making these decisions? Or maybe, what makes it difficult for them to make these decisions?

Well, OK, so that's a great question. In the case of the common law, or in the case of statutory interpretation, which is what both of those cases I gave you, about tariffs and about the performance tax, were about, it's really a court that has to make the decision, and courts are pretty adept at that. I mean, there's a whole structure in place that allows for bringing in external expertise at the appellate level. You can have amicus briefs from people that work on the issue. And so there are mechanisms to get judges the expertise that they might need in order to resolve a particular issue. It's not perfect; judges certainly make decisions I wouldn't make, because they have, in my view, an outdated mental model. Indeed, while it may be true that the robots in the Chuck E. Cheese example were not spontaneous, I don't think it's true as a category that robots can't be spontaneous. I think the trouble comes in when you have lawmakers who don't have adequate expertise, or you have regulatory agencies that don't have adequate expertise. They have a lot of expertise in what they do: the FAA has a lot of expertise in aviation, the Food and Drug Administration in food and drugs, and so on. They have expertise in certain things, but they don't have expertise in the way that robots work.

And I'll give you an example. To me, a rather dramatic example of how agencies have struggled with cyber-physical systems, and in fact how they've kind of grown up to some extent, is the Toyota sudden acceleration problem. You remember that? So there was this issue years ago, though not that many years ago, where people who drove a certain kind of Toyota were reporting that all of a sudden the car would accelerate, and some accidents were blamed on this. And Toyota said, well, no, this is human error, or we did something with the pedal that sticks, or we put the floor mat in there in such a way that it's causing this, so we've got to fix this, but it's mechanical in nature. But people told Congress that, no, actually, this is maybe a software bug, and that's a big deal. If there's a software bug that's making Toyotas accelerate, that's a huge deal, because there are millions of them. So Congress goes to the Department of Transportation, understandably, and says, hey,

you know, we need to know whether or not the sudden acceleration is coming from the code.

And the Department of Transportation is like, how are we going to figure this out? So what they end up doing is they end up going to NASA.

OK, they go to NASA, and they're like, you folks work on this, you know what I mean?

And so imagine that for a moment. I mean, imagine you go to NASA and you're like, could you take a break from putting robots on Mars for a moment and look at this Toyota for us? And so NASA did. NASA looked at the Toyota, looked at the code, looked at everything else, because they build these systems. And after months, they ended up largely clearing Toyota of there being a software bug. But I mean, that's not a sustainable model. You can't have NASA look at everything. And I think similarly with the problems with the Boeing jet: it was just so difficult for them to figure out where the problem was, because that expertise is lacking. But I think it's getting better. I mean, they're having to hire more and more people to do this. And so when the autonomous Uber killed that woman in Arizona,

Right.

the National Highway Traffic Safety Administration at the Department of Transportation was actually able to do a pretty thorough assessment, but they did need input from Uber itself; there was a lot of cooperation from Uber itself. Anyway, my basic point is just that I think there's a dearth of expertise of a particular kind. And I have, in fact, argued that we ought to have an agency whose whole purpose is to help other policymakers make wiser decisions about robots and artificial intelligence. I've argued for that in a Brookings piece from years ago called The Case for a Federal Robotics Commission, and I gave it as a talk at the Aspen Ideas Festival years ago, and so on. So these are ideas I have about having adequate expertise, because it's needed.

As we move towards wrapping up the interview, I'm going to ask you a question that's connected to what you just said, but is also a blatantly unfair question.

What does ideal robot regulation look like?

It doesn't exist, I guess. Yeah, no, so I really think it's context-specific. OK, so often you hear about artificial intelligence that you can't regulate it because it's not a thing, you know what I mean? You think of it as like a genie in a bottle: you open that bottle up and you just apply it to different things. I'm not sure you've heard it on your podcast, because the people that you interview don't think that way, but I've heard other people talk that way about A.I., as if you just apply it to everything and it just makes it better. So there's this mental model that it's like a magic genie in the bottle, and it wouldn't make sense or be desirable to regulate such a thing. Right. But there are a bunch of ways in which the existence of A.I. exposes breakdowns in regulation or law that we can and should fix. And one example that I think is not obvious is the interaction with anti-hacking law. OK, so, for example:

The way that we find out whether artificial intelligence that's been deployed in the real world is harming people, making them unsafe because the threshold for object detection has been set too low so it hits somebody crossing the street, or, conceivably, because it doesn't work for people of color, it disproportionately recommends longer sentences, or it causes the police to believe that there's going to be an outbreak of violence in a Black community when there isn't, the way we find these things out is that researchers kick the tires on these systems. You know what I mean? Journalists, sometimes lawyers, academic researchers look at these systems and see how they work and what their impacts are. Yet there is always the concern, the sword of Damocles hanging over these researchers, that the company is going to get mad and threaten to sue them. And if you think that sounds ridiculous, just think about the other day: Facebook sent a cease-and-desist letter to NYU over its political ad scraping tool, basically saying, stop scraping our system, even though the way the researchers had architected it is that they created a browser tool that people voluntarily opt into, which then shows the researchers what ads the users are seeing. The users themselves are giving a view into the system. Nevertheless, Facebook sent a cease-and-desist letter, presumably threatening them under the Computer Fraud and Abuse Act.

So I believe that the Computer Fraud and Abuse Act, and other laws like it, ought to have exceptions for research, because holding these systems more accountable might be better for the whole ecosystem. Right. Another quick example on the same topic: there is something called adversarial machine learning. You may know this; I assume you do. But adversarial machine learning, for anyone listening who doesn't know, is the idea that you would take a trained system that's supposed to make decisions or classify things in the world, and purposely perturb images or change the world so that you trick that system, so that you make the system see something that isn't there, for example. My colleagues showed that you could slightly perturb a stop sign in such a way that none of us would notice a big difference, but you put these stickers on the stop sign and all of a sudden a driverless car perceives it as a speed limit sign. That's an example that comes out of the University of Washington. So if you can do that, that is scary, because deployed systems can then potentially be tricked. Right. But it's not hacking under the traditional understanding of hacking, because you're not bypassing a security protocol. You're not breaking into the system by exceeding your authorization, or without authority, and changing things. Rather, you're just giving the system a stimulus that makes it behave in the way that you want. Said another way: as systems become smarter and smarter, they can be tricked even if they can't be hacked.

And yet our definitions of anti-hacking, the ways that we try to prevent malicious hacking, define hacking as bypassing a security protocol, getting into the system, and changing things or damaging things, whereas this is just a way of tricking systems by understanding how the models work. So my colleagues and I have argued, in a paper called Is Tricking a Robot Hacking?, that we need to change security standards so that it becomes inadequate security to release an A.I. into the world that's too easy to fool.
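For anyone who wants to see the idea in miniature, here is a toy sketch of an adversarial perturbation: a made-up linear classifier attacked with the fast gradient sign method. The model, weights, and labels are all illustrative assumptions; the actual stop-sign work targeted deep networks with physical stickers.

```python
import numpy as np

# Toy linear classifier: score > 0 means "stop sign", otherwise "speed limit".
# Weights are invented for illustration; real attacks target deep networks.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return "stop sign" if x @ w + b > 0 else "speed limit"

x = np.array([2.0, 0.5, 1.0])      # an input the model classifies correctly
print(predict(x))                   # -> "stop sign"

# Fast gradient sign method (FGSM): nudge each feature by a small epsilon in
# the direction that most decreases the "stop sign" score. For a linear model,
# the gradient of the score with respect to x is just w.
epsilon = 0.9
x_adv = x - epsilon * np.sign(w)

print(np.round(x_adv - x, 2))       # the perturbation is small and structured...
print(predict(x_adv))               # -> "speed limit": same stimulus, new label
```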

These are concrete changes to the law that are absolutely necessitated by artificial intelligence.

You see what I mean? There's no other way to think about them than as that. But they're not regulating A.I. per se; they're just changing laws and legal institutions and precedents to adapt to the prevalence of A.I., which in turn has certain affordances that didn't exist before, certain negative affordances and positive affordances.

Well, clearly, there are so many more questions that we could explore on this topic with you, but unfortunately, we've reached the end of our time. So for those listeners, who might be policymakers, researchers, roboticists, and everyone in between, who are looking to take this conversation a bit further, where is the best place for them to go to see a little bit more of your work or to get in touch with you?

Well, I'm easy to find. I mean, I'm just a law professor at the University of Washington; you can just look me up. I'm on Twitter at @rcalo. But the place I really want to direct people interested in this conversation is the We Robot conference. It is the annual robotics and law conference, the premier conference at least in North America right now, if not more broadly, and it's where technologists and roboticists get together with law people, law professors, policymakers, and talk about these issues. We do it every year. The tenth anniversary is this fall.

Check out We Robot.

We'll be sure to include all those links, as well as many more in the show notes. But for now, Ryan, thank you so much for coming on the show and talking about all of this with us.

Oh, it was fun. Thanks for having me. I really appreciate it.

Jess, the question that started us off today was, what is robot regulation and why does it matter? And I think Ryan gave us a wonderful set of explanations and descriptions about what robot regulation is out in the field.

And now I'm wondering for both of us, what do we think?

Why does it matter? So let's start with you. Why does it matter?

Yeah, really throwing me the softball there, aren't you, with the robot regulation? I'm not going to speak to why it matters for everyone, because I think Ryan just outlined some amazing information for all of us and gave a really good broad description of why this matters in many different domains. But I think one of the biggest things that stood out for me in this conversation was that robot regulation is something that is temporal, that has temporal importance. And when I say that, I mean that the timing of our regulation is really, really important for making sure that we don't cause harm to society. And I just keep coming back to this Collingridge principle that he was talking about: if we start regulating too soon, we don't know whether our technologies are going to be harmful and in what ways, and if we start regulating too late, then it's already super entrenched in society, we trust these systems too much, and it's so embedded that it's just too late to do anything. And I never really thought about that before with tech regulation in general. I guess I just always made the assumption that, you know, the earlier we begin to speculate about the harms of technology, the better. But after this conversation with Ryan, I'm realizing that the "when" matters when we're thinking about regulation. It's kind of meta, but I never really thought about that before, and I thought that was really interesting. What about you, Dylan?

So, as a social scientist, why I think robot regulation matters is because it really feeds into how we think about ourselves as humans and also as a society: questions about what it means to be categorized as a robot versus an android versus just a cell phone versus a human versus a dog, all these different categories that we've constructed in our lives and that we create meaning out of. And that question of regulation persists in all of it. Right. We've created the law in order to put boundaries, especially moral boundaries, around things that are right and wrong, things that can happen and things that can't happen. And so when we talk about robots, and when I think about robots as a category, I can't help but think about humans as a category, and then questions of, well, what is not human but looks like a human or talks like a human? And if it walks and talks like a human, is it human? Those kinds of bigger questions around robotics. And that's not to say that all of these ideas are just high-flying, either. I think the reason why this matters is because there are real, immediate consequences to how we are defining these categories, especially the category of robot.

And I think that plays out economically in terms of the workforce, in terms of how we think about information, how we think about the category of human, how we treat different people. This is so wide-reaching. And that's something I really appreciated about this conversation with Ryan: just how big of an umbrella robot regulation is, for something that you would think is so specific in time and place, something that has just emerged within the last, you know, 70 years or whatever. It really, I think, impacts a lot more than we think it does.

Yeah, totally. And I think you just reminded me of another big takeaway that I was actually thinking about in the background of our entire conversation with Ryan.

And I think it's that in the ethics and responsible tech space, and this is something you and I have kind of been discussing a little bit recently too, there's a lot of lip service that happens. There are a lot of vague statements of action that we tell our community, like, well, there's a problem with A.I., there's a problem with technology, there's a problem with robots, regulate it, or regulate them. And that's kind of just where we leave it; we don't really say, and how? And what does that actually mean, and how do we do that? And I'm in this class right now that's taught by one of my PhD advisors, Casey Fiesler, about technology ethics and policy, and for the first time I'm kind of diving into the weeds of what it actually means to regulate technologies, and realizing that in certain case studies there's always a loophole in every law you could create. There's always some harm that's going to be done to some people based on the principles that we base our norms and our regulations on, and there's never really a perfect answer for anything legally. It's such a granular and complex system. So to hear Ryan actually speaking to what some of these ideas for regulation are, and what some of these laws historically have been and how they've played out over time, was just really nice to finally hear, instead of the vague "let's regulate it": more of the intricate unpacking of the how and the what that actually means. And as we look at these different areas and domains of A.I. and technology, where we look at industry, and now we're looking at law, and we look at, you know, the academy, it's almost disheartening to me to see such similarities in some of the issues that we're looking at, especially in terms of who is making decisions that impact wide swaths of people, or I guess robots, in this case.

But thinking about it systemically, it's still a little scary to me that the patterns are actually the same.

And this is something that Dr. Timnit Gebru said when she was on the show: that the academy over here on one side of the fence and industry on the other side of the fence is a false dichotomy, because you have people in the academy who are being employed as consultants to industry to then make decisions. And then those people are also going to go be the experts on the congressional panels, which will then inform the briefs that create the policy. And it's just systems on systems, to a certain degree.

And there are certain people, and certain, I guess, identities, which are being represented in these decisions to a much larger degree than others.

And that is, you know, we talk about power a lot on the show, and that's kind of, I think, at the heart of it. And law is all about power and exercising power.

And so I think one thing that I was thinking about during this conversation with Ryan is just how those systems of power are enacted, and also who is there, who is creating these systems of power and recreating these systems of power.

Absolutely. And of course, this was only one example of the systems of power that we've explored on this show, and we will continue to explore how power is distributed, generally unevenly and inequitably, throughout the ways that we create our technologies. But for now, that's it for today's show. For more information, please visit the episode page at radicalai.org.

And if you enjoyed this episode, we invite you to subscribe, rate, and review the show on iTunes or your favorite podcatcher. Catch our new episodes every week on Wednesdays, join our conversation on Twitter at @radicalaipod, and, as always, stay radical.

Yeah, I got some fire in my belly. Got some robot regulation in your belly. In my belly. I want to get that checked out today about cracking the whip up for a vibrator.

I kind of like I.

I think it needs to be regulated, and that's a good.
