Killer Robots and Value Sensitive Design with Steven Umbrello



What is Value Sensitive Design and how can it inform the development and deployment of killer robots and autonomous weapon systems? On this week's episode we welcome Steven Umbrello to the show.

Steven Umbrello currently serves as the Managing Director at the Institute for Ethics and Emerging Technologies. His main area of research revolves around Value Sensitive Design (VSD), its philosophical foundations, and its potential application to emerging technologies such as artificial intelligence and Industry 4.0.

Follow Steven Umbrello on Twitter @StevenUmbro

If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on Twitter at @radicalaipod.



Transcript

Steven Umbrello_mixdown.mp3: this mp3 audio file was automatically transcribed by Sonix with the best speech-to-text algorithms. This transcript may contain errors.

Speaker1:
Welcome to Radical A.I., a podcast about technology, power, society, and what it means to be human in the age of information. We are your hosts, Dylan and Jess.

Speaker2:
And in this episode, we interview Steven Umbrello about value sensitive design and killer robots.

Speaker1:
Steven Umbrello currently serves as the managing director at the Institute for Ethics and Emerging Technologies. His main area of research revolves around value sensitive design, otherwise known as VSD, its philosophical foundations and its potential application to emerging technologies such as artificial intelligence and Industry 4.0.

Speaker2:
This was a really interesting conversation for us about some topics that we haven't covered on the show before, such as autonomous weapons and value sensitive design. For some context, value sensitive design is a theoretical framework that has existed in the field of human computer interaction for quite a while. It was developed by Batya Friedman and Peter Kahn at the University of Washington starting in the late 1980s and was originally created as a way to incorporate human values throughout the entire technology design process.

Speaker1:
Steven's work is groundbreaking in its application of value sensitive design to how we can think about autonomous weapons systems. And for these reasons and many more, we are so excited to share this interview with Steven Umbrello with all of you.

Speaker3:
We are on the line today with Steve Umbrello. Steve, thank you so much for joining us. And today we're talking about a topic that we have not really broached on the show, which is autonomous weapons, or killer robots, which may be the more colloquial term. And let's just start by talking about you and maybe defining some of our terms. So when we talk about killer robots or autonomous weapons, what are we talking about? And then how have you gotten involved in this work?

Speaker4:
So I guess what we're talking about is essentially the cornerstone of the philosophical language I take when we're talking about killer robots, and we can get into that a little bit later. It's the fact that I don't like broad-brushing exactly what a killer robot is. But I think when somebody says killer robot, what usually comes to mind is the same image: a ground-based, humanoid, anthropomorphic shape, a Terminator. So that's kind of the idea of the killer robot. And essentially it's a robotic system that employs artificial intelligence, but that also has lethal capabilities in most senses. But of course, there's many different types of killer robots. There's offensive types, which is the one that usually comes to mind. But there's also defensive types, like sentry guns on top of battleships, for example, that stop incoming missiles. So there's a broad range of what we would call killer robots. But usually we're thinking about the lethal side, the humanoid, ground-based, those kinds of systems. How I got into it, well, I guess you could say my broad field of expertise is engineering ethics. I was trained in analytic philosophy. That's essentially my field of study.

Speaker4:
And more into the applied side would be ethics. Engineering ethics was something that I always had kind of a pull towards, particularly in studying things like responsible innovation and, more specifically, value sensitive design, which is a design approach for how we can begin to think about and actually implement human values in design. So designing for human values rather than those things being relegated to afterthoughts, right, or being designed post hoc, ad hoc, but actually being the cornerstone of design. And one of the most difficult areas, I would think, in approaching technology that way would be within the military domain. That's because, as we can already begin to imagine, the military domain is somewhat closed off. Right. Things like 'stakeholders' is not really language that we use when we talk about the military or military technologies. And we can obviously get into what we would call the unholy alliance of the military-industrial complex, right, which is also a cornerstone of my approach to understanding how we can begin to design autonomous weapons for human values, something that seems a little bit, you know, contradictory to say in a single sentence.

Speaker2:
I think that when some people hear the words killer robots, they disassociate it from things that exist already today; it becomes this dystopian vision of the future. But as we've discussed before with you, Steve, killer robots, it seems like they are actually the same thing as autonomous weapons, at least as we understand it. Are they the same thing or are they different? What is the difference between those two?

Speaker4:
Well, yeah, like I mentioned at the beginning, autonomous weapons is, we would say, the more technical term for killer robots. But killer robots essentially focuses on a certain subtype of autonomous weapons, because a weapon platform being autonomous can be both offensive and defensive, whereas the operative term in killer robots would be the more offensive type. Right, that they are lethal autonomous weapons systems, a subtype of autonomous weapons systems. Right. And lethal weapons platforms come in aerial, ground-based, and naval forms. We can imagine artificial intelligence being used as a means to render any of those three types of weapons platforms autonomous. And depending on the goal of the system, it can be lethal in the sense that it's offensive, or defensive in the sense that it's protective. Right. A system that may be considered an autonomous defensive weapon system would be something like Israel's Iron Dome, which protects from incoming missile strikes.

Speaker3:
So I'm curious because, at least for myself, the first time I was exposed to some of these topics was within the last few years, with more articles that have come out on it. But I get the sense that there's more history here around autonomous weapons than I might be aware of. And so what is the history of autonomous weapons? Have they been around for a long time, or is this really new with the invention of artificial intelligence?

Speaker4:
You could say that it's new because of artificial intelligence, but there's a trend towards greater autonomy, I guess you could say, or independence; they're slightly different in the meaning of those terms. But, of course, one of the biggest trends we're seeing in warfare, really, particularly within the last 20 years since the beginning of, or the ramping up of, the wars in the Middle East, particularly in Afghanistan and Iraq, is this trend towards greater aerial warfare, which is important because aerial-based warfare is a force multiplier. Right. And aerial supremacy, it's very hard to defend against that. So there's a trend towards that. But there's also the secondary effect, which is that that particular type of warfare, particularly when the opponent is not capable of defending against it or having a similar type of air superiority, protects the operator, in this case the pilot. Right. And we're seeing this trend towards hesitation about boots on the ground. Right. We want to protect our troops. It's kind of like this modern military doctrine. We don't want to expose them to unnecessary harm. That's great. Makes sense. If we can do it in a way that extricates them from direct danger, let's do that. And we've seen more than a thousand percent increase from George Bush to Obama in the use of semi-autonomous drones.

Speaker4:
And that completely removes the operator from direct harm. And of course, we know that there's psychological harm still, things like PTSD for drone operators. So it's not like they're extricated entirely from harm. There's still that psychological harm, and that's real harm. But it's, of course, of a different type than the direct harm that a ground soldier would face in an engagement. Right. So we're seeing this ever greater distancing of the operator from the engagement, and we see that with drones. So it's not hard to see why artificial intelligence systems would be employed in what is already a trend towards pulling away from direct engagement of soldiers. And of course, there's a whole host of risks with this, like reducing the threshold for war, because we don't have that potential for direct human casualties. Right. So we're less hesitant to begin an engagement. There's a whole host of issues, of course, with greater and greater autonomy. But we can at least rationalize why there's this trend towards greater autonomy of these types of systems. It's kind of like getting the job done at minimal cost, both economic as well as the human cost in terms of casualties.

Speaker2:
When I hear you explain some of your work on what's called value sensitive design, which we can get into the definition of a little bit, too, for our listeners, I feel like bringing human values into design in my head seems antithetical to autonomous weapons. Like, it just doesn't seem like those two fit together. So I'm curious. First, could you explain to us what value sensitive design is, just kind of the 101, and how the heck we're supposed to put that kind of design into something like an autonomous weapon?

Speaker4:
Sure. So value sensitive design is often described as a principled approach to designing technologies for human values. So that's kind of the most common line you'll see, particularly in academic papers that say, we're going to use value sensitive design, and this is what value sensitive design is. Now, value sensitive design is this design approach that was developed in the early 90s by Batya Friedman, David Hendry, and colleagues at the University of Washington. It developed within the field of human computer interaction, and it addresses this need that technologies are often designed towards the most common values, which happen to be economic values in capitalist Western societies. It's not hard to see why that would be at the forefront of the values that we're designing for. But value sensitive design takes it that there are other values at play and that these values are not mutually exclusive of one another. And in fact, when we come into things that are value tensions (we often hear of privacy versus security, for example), designers who take up value sensitive design argue that that's just bad design when we face these types of moral dilemmas. Right. That we can actually design away moral dilemmas through salient design. And value sensitive design comes equipped with at least 17 different methodologies that are adopted from not only the history of design work, but from the social sciences and humanities, whether they be direct engagement with stakeholders using surveys or multistakeholder interviews, or co-design that comes from participatory design and universal design.

Speaker4:
So it's kind of an amalgamation of the last 30 years of participatory, stakeholder-centric design, towards designing technologies with an explicit orientation towards designing them for human values, rather than human values coming in kind of as an afterthought. Think of your iPhone. Your iPhone was mostly, almost exclusively, designed for people who have the capacity for sight, whereas the accessibility function in it was kind of an update that was added after the fact. It wasn't already designed with that in mind. It was designed ex post facto, after the fact. So that's essentially value sensitive design, and it has been applied over the last 30 years to a host of technologies, whether it be things like advanced nanotechnology, care robotics, right, like assistive robotics, or energy transition technologies. And over the last three decades it's been primarily taken up by the technical universities in the Netherlands. And it's also not hard to see why: the Netherlands is essentially an engineered country, it doesn't exist naturally. Right. So it's highly dependent on its technologies. And we can see more clearly that technologies and society construct one another. So the values that we design a technology for today emerge and manifest themselves into the future.

Speaker4:
So if we want to think long term, if we want to think multigenerationally, the design decisions that we make today permeate into the future. So one of the fundamental premises of value sensitive design is that the designer must take responsibility for the responsibilities of others. The design decisions I make today will support or constrain certain important values that may emerge in the future. Take, for example, nuclear technology. Right. Nuclear energy. The design decision to use that type of technology somewhat condemns many generations into the future to figuring out how to deal with the waste products of our ancestors. So it brings a host of issues with it. And it's about avoiding short-termism in favor of more long-termism. With regards to the military, this is, of course, a difficulty. Like I mentioned at the beginning, the military itself is somewhat opaque, very similar to an artificial intelligence system employing machine learning. Right. Although there is, I guess you could say, more of a mandate by the public for more transparent practices within the military, especially after the Second World War. And because we're seeing this closer and closer connection, particularly in the West, between industry and the military itself, in terms of, whether it would be, blank checks, you know, no-bid contracts by the government to the military-industrial partners.

Speaker4:
We have to begin with the fact that, yes, we are designing technologies for death. That's fundamentally what they are. They're designed to kill. But that doesn't mean that simply because we're doing that, we permit wanton killing in war. Of course, there will always be casualties in war. That kind of comes as a natural consequence. But war is in and of itself regulated in a certain sense, the Geneva Conventions being one example, international humanitarian law another. There are already international binding treaties that determine how new types of weapon platforms can even be researched and then subsequently deployed. Right. So it's one of the reasons why I'm generally against what would be a blanket ban on what people would call human-out-of-the-loop technologies, which means fully autonomous weapon systems, because that's another nuance we have to talk about. When we talk about autonomous weapons systems, there's different levels of autonomy when we're talking about these kinds of systems. So we have to try to avoid these overgeneralizations when it comes to autonomous weapons systems if we really want to get to the heart of these issues and actually have real binding treaties at the international level.

Speaker3:
The word binding that you just used is interesting to me at this massive scale that we're talking about, in terms of the legal, but then also the almost cultural understanding of what these things are, and then also how we regulate them. And I think the best way of asking this question is to say, look, whose values? Like, when we talk about value sensitive design, whose values? Because it sounds like, in this utilitarian model, the US government would say, well, we want fewer of our folks to have their boots on the ground, to be in harm's way. And if we, quote unquote, break a few eggs or whatever along the way and there's casualties and all that, then, well, at least we protected our people, because those were our values. And so how can we, especially, I guess, at the international level, but maybe more specifically, how can we kind of square whose values we're applying in that design?

Speaker4:
Yeah. So generally, when I talk about value sensitive design for artificial intelligence systems, I say it's a markedly different type of value sensitive design than for things that do not employ artificial intelligence. And that's mostly because many of the values that we want it to embody may be disembodied in the future when it's deployed. And that's because of the opaque nature of machine learning, mostly; many of the values we want it to embody may be unforeseen or even unforeseeable. However, aside from AI-specific values, which we can take, for example, from the High-Level Expert Group, which may provide a good starting point for thinking about AI-specific values, we have the AI for Social Good factors, which may be considered norms, right, how designers should actually go about putting this into practice. And they're actually framed like that. They say designers should do X, right? So they're very imperative. Right. They're framed like norms. When it comes to higher-level values, right, and particular technologies that are cross-domain, which means they go outside their localities: a technology like an autonomous weapons system is essentially designed for being outside its locality. We don't really deploy it in our homes. We deploy it internationally. So it's already an international technology. I would argue that the laws of armed conflict and then the subsequent rules of engagement are already sufficient for governing certain types of human-out-of-the-loop weapons systems, meaning fully autonomous weapon systems. So you brought up actually a good point, it's like whose values, in terms of, we protect our troops. That, of course, will always take place.

Speaker4:
Right, particularly in asymmetric warfare. However, that doesn't mean that they are extricated from following the laws of armed conflict. And of course, that doesn't mean you will follow the laws of armed conflict. But if you don't, there's consequences for not following the laws of armed conflict. It's an entirely different discussion; we can probably, just between us three, already bring up certain cases of certain very large countries breaking those laws and not paying any consequences for them, the United States being a nice culprit of that on multiple occasions. But that doesn't mean that the laws themselves are meaningless or pointless or shouldn't be there in the first place. That's more of an issue of actually enforcing existing law. Right. But I say that the law as it is written, so the letter of the law, as well as how the law is interpreted, the spirit of the law, is already sufficient to govern most types of out-of-the-loop weapon systems. And maybe we should actually discuss, for the listeners, what is it? What does that term mean? And that's more of a technical term: in the loop, out of the loop, on the loop. People who are somewhat interested in this discussion, who may be listening to this, may have at least heard some of these terms being thrown around, because they actually do use them in more popular articles on autonomous weapons. I'm just not sure if you want to dive into that.

Speaker2:
Definitely. Yeah. I mean, if you could give a brief explanation of what it actually means to have an A.I. system have a human in the loop or out of the loop or on the loop. I've actually never heard of that before. I don't

Speaker3:
know if you're going to cover that one, but

Speaker4:
There's all three. So there's, I guess you could say, three broad types of systems. One is human in the loop. Another is the one, like I said, on the loop. And then there's out of the loop. Right. And so being in the loop refers to weapons systems that only engage individual targets or specific target groups that have been selected by the human operator. So this refers to semi-autonomous weapon systems, like an armed Reaper drone whose operator, at a distance usually, right, selects a target to engage and then clicks a button and releases a fire-and-forget missile, for example. Right. So that's human in the loop. They're literally, like, in there, the crux of how the system works, right? Then there are human-supervised, or human-on-the-loop, autonomous weapons systems. And this is where operators kind of have the ability to monitor and halt a weapon's target engagement, which means that a system can select and engage a target, but it allows enough time for a human supervising agent, so like an operator, to veto that kind of engagement. So the system will select and engage, but there's, like, a lag or delay period that asks, do you want to veto the strike? Right. So we can see that the level of autonomy here is increased relative to being in the loop, where it's essentially a function of the operator to choose and engage a target.

Speaker4:
And then there's, I would say, the clearer case of out of the loop. Right. Which is exactly what it sounds like: these are the weapon systems that, once activated, can select and engage a target without further intervention by a human operator. And this is really what we're referring to when we're talking about killer robots, the full autonomy. And it's often the kind of autonomous weapon system that advocates for a ban focus on. So what you essentially see between these different loops and their human-process relations are different levels of decision-making abdication with regards to target selection and engagement. Right. In fact, there are more nuanced understandings of technical autonomy than just these three. To date, we can distinguish at least five different kinds of autonomy, and we can go from the lowest level of autonomy to the highest level of autonomy. And that's kind of correlated directly with the least amount of controversy to the most amount of controversy. So do you want me to go through, quickly, what those five levels of autonomy are? These are the five levels of technical autonomy that we're talking about.

Speaker3:
That would be really helpful. And also, if you could talk about autonomy in general in this context, because I feel like sometimes when we say the word autonomy, it means different things. Yeah.

Speaker4:
Yeah. And I think that what you just said there is one of the main issues: the proposal for a ban on autonomous weapons systems rests on a misunderstanding or a conflation of different understandings of what it means to be autonomous. So I'll go through what the five levels of autonomy are, and they're relatively uncontroversial in terms of how we define these five levels; they're pretty much taken as given. So the first and the lowest level of autonomy is where the human agent selects the target and then subsequently engages with it. So that's the human in the loop that we mentioned. And the human here is in possession of, what you would say, full autonomy, and the autonomous weapons system functions as a de facto extension of the operator. Right. Kind of like a prosthesis of the operator. It does nothing really on its own. It can fly on its own, for example, but it doesn't do any of the real function that the system is designed for, which is selecting and engaging targets. Right. So then we can move up another level from the lowest level of autonomy. And this is where, sorry, I just want to get my thoughts clear: the program selects a target, but the human operator chooses which to engage with.

Speaker4:
So the autonomous Reaper drone, from the sky, for example, could select five targets. Right. It can make a recommendation to the operator, and the operator can evaluate that recommendation by the system. It's like, here are these five targets. And this, too, is a kind of human in the loop, given that the human still has full discretion over the target selection, despite the system giving potential alternative targets. Right. So that's the second level. We can still see why that's human in the loop. Right. The third level is where the program selects a target and the human must approve before the attack can take place. It's a little bit different from the on-the-loop one that we originally mentioned, and here we're moving towards the on-the-loop domain, right, since the selection of the target has moved directly within the realm of the program itself and not the human. Right. Then there's the fourth, which is the second highest level of autonomy, where the program selects the target and the human operator has a restricted period of time to veto the engagement. And this is also on the loop. And then, of course, the highest level, the fifth level of autonomy, is where the program selects the target and engages the target without any human involvement. And this is the one we usually talk about when we're thinking about killer robots.
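To make the taxonomy above a little more concrete, here is a minimal illustrative sketch in Python that maps the five levels of technical autonomy onto the in-the-loop, on-the-loop, and out-of-the-loop vocabulary used in the episode. The enum names and the exact placement of level three (which Steven describes as moving towards on-the-loop) are our own shorthand, not terms from Sharkey's framework or from the interview.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Illustrative names for the five levels of technical autonomy described above."""
    HUMAN_SELECTS_AND_ENGAGES = 1       # operator selects and engages; system acts as a prosthesis
    PROGRAM_SUGGESTS_HUMAN_SELECTS = 2  # program proposes targets; operator chooses and engages
    PROGRAM_SELECTS_HUMAN_APPROVES = 3  # program selects; human must approve before the attack
    PROGRAM_SELECTS_HUMAN_CAN_VETO = 4  # program selects and engages unless vetoed within a time window
    PROGRAM_SELECTS_AND_ENGAGES = 5     # no human involvement after activation

def loop_category(level: AutonomyLevel) -> str:
    """Map an autonomy level to the loop vocabulary (level 3 is treated here as on the loop)."""
    if level <= AutonomyLevel.PROGRAM_SUGGESTS_HUMAN_SELECTS:
        return "human in the loop"
    if level <= AutonomyLevel.PROGRAM_SELECTS_HUMAN_CAN_VETO:
        return "human on the loop"
    return "human out of the loop"

for lvl in AutonomyLevel:
    print(lvl.value, lvl.name, "->", loop_category(lvl))
```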

Speaker4:
Right. A fully autonomous weapon goes out onto the battlefield and engages with targets almost as if it were a soldier. Right. So one of the central premises on which the proponents of a ban on autonomous weapons systems base their case relates, I guess you could say, to the concern that certain increased levels of autonomy may result in an accountability gap in the event of recalcitrance, in the event that it does something that we didn't want it to do, right, this disembodiment of a certain value that emerges. Right. So we don't know who's to blame if the system does something that may be unexpected and unwanted, like, for example, kill a civilian when the rules of engagement are very clear that no civilians were supposed to be killed in this engagement. Right. And the mission forbids that. Right. That's where the rules of engagement come from, and aside from that, it goes against general international humanitarian law, right, and the laws of armed conflict. So my research focuses more on this notion of autonomy as being less problematic than how it's often conceived by those who are advocates for a ban. And that's not to say that I don't think there are good reasons for banning autonomous weapons systems. There are, like their contribution to the dehumanization of war.

Speaker4:
Right, its deleterious effects on human dignity, or whether or not there's even a necessity for them to be lethal as the technology gets better. The latter of which is actually a really interesting point that a colleague of mine, Nathan Wood, argues, which is that if the advocates of autonomous weapons systems are correct that the systems are more ethical given their potential speed, precision, and efficacy, then it's that very capacity of being so good at what it does, so efficient, so fast, so precise, that legally obliges these systems to use non-lethal force rather than lethal force under the laws of armed conflict. Or more simply put, all things being equal, if you can arrive at the same consequence with both lethal and non-lethal force, then non-lethal force must be the chosen option. So we kind of face this weird dichotomy between technological advancement and the legality of lethal force right off the bat. Right. And that's actually not the only legal peculiarity when it comes to autonomous weapon systems. The fact is that both the advocates for autonomous weapons and those who advocate for a ban seem to underestimate, like I mentioned before, the force of the laws of armed conflict as they have been and currently are understood. For example, if you want me to go into it, I can give some case studies, like thought experiments, for them.

Speaker3:
Before we do that, which we would love to do, I just had two quick clarifications. Yeah. So the first one is: when you break down these five different categories of autonomy, is this your model or is this what they're talking about in the design discussion?

Speaker4:
There's essentially a general consensus, I guess you could say, on accepting these five levels of autonomy. Noel Sharkey was the one who broke this down, I think back in 2014, in the discussion of autonomous weapon systems. So it's like, here are the five levels of autonomy: we're kind of OK with one to three, we're a little bit iffy on number four, we're definitely against number five. Right. And my central argument was against that, that that type of autonomy is not necessarily problematic. And that, in fact, maybe it's that kind of level of autonomy that may increase our level of meaningful human control rather than decrease it, against our intuitions.

Speaker3:
And then the second clarification, quickly, is: when we talk about fully autonomous weapons, is that the reality that we're living in now? Like, is that still in the future or is that here now?

Speaker4:
There currently does not exist a fully autonomous lethal weapon system. OK, so the killer robot, literally: the defensive kind we kind of have already, right, ones that will automatically take down incoming missiles or aerial craft, but not the offensive kind. No, we do not currently have that yet, nor do we have any international binding treaties or legislation that is directly focused on that type of weapon system. The level five offensive weapon system currently does not exist. That is a hotly debated topic at the international level, and there are currently inter-state discussions on moving towards a certain type of prohibition or legislation at that level.

Speaker2:
Yeah, thank you. And it would be great now if you could explain some of those case studies about the tensions between the legality of autonomous weapons and lethal warfare.

Speaker4:
Yeah, so, like I mentioned, those who are both for and against autonomous weapon systems (the ones that are against are usually pushing towards a ban) underestimate, as I mentioned before, the force of the laws of armed conflict, both the letter of the law and the spirit of the law, so how it's actually interpreted in practice. So, for example, in a recent paper I co-authored, we took a closer look at one of these peculiarities of one of the laws of armed conflict, and of course there are many, and it's called hors de combat. It's French and it means essentially 'out of combat.' Right. It's a status. When is somebody out of combat? Right. And when they're out of combat, they are no longer a legitimate target of attack. Right. So this notion hinges on the legal technicality of when an individual or group is in the power of the opponent; when they are in the power of the opponent, then this status holds. Right. We argue that as autonomous weapons systems become more sophisticated and increasingly more capable than flesh and blood soldiers, it will increasingly also be the case that such soldiers, the opponents, whoever those opponents will be, will be in the power of those autonomous weapons systems which fight against them. And this implies that the soldiers ought to be considered out of combat and not targeted.

Speaker4:
And in arguing for this point, we draw out a broader conclusion regarding combat status, namely that it must be viewed contextually, with close reference to the capabilities of combatants on both sides of any discrete engagement. So I can draw this out using these case studies, or thought experiments. So part of the legal understanding of what it means to be out of combat is that the agent is in the power of the adverse party. What exactly does this understanding of combat status mean for autonomous weapons systems? There are likely to be many implications. But what we contend is that in light of the widely varied and potentially dynamic situations in which autonomous weapons will be deployed (of course, you know, war is a very dynamic, changing scenario, right, context), these systems must be capable of responding to changing and contextualized evaluations of an enemy's status as out of combat or not out of combat. So autonomous weapon systems should be treated individually, given that the contexts of their use, aerial, naval, ground-based, are substantively different. So like I mentioned, no broad brush. And even within these broader categories, different types of autonomous weapons systems will come with their own capabilities and limitations, something that will potentially change when an enemy is deemed out of combat. So, for example, a lightly armored autonomous drone may often encounter enemies who are neither defenseless nor powerless.

Speaker4:
While a heavily armored autonomous assault platform, a ground-based system, will likely encounter individuals fitting both of those descriptions. So this in turn requires that such systems possess a level of technical sophistication high enough to allow for calculations which are sensitive to many contextual factors, which will impact the relative strength and power of all the belligerent groups. So although this is not technically impossible, the viability as well as the necessity of this is beyond technical plausibility right now as we're talking. Right. And still, what this betrays is that out-of-combat status is fundamentally tailored by the entities making these evaluations, given both their capabilities and limitations. So a foot soldier makes different evaluations than a tank commander, who makes different evaluations than an autonomous sentry turret, which is a defensive autonomous weapon, all of which will likely make different evaluations than a Reaper drone. So here are two cases that I think will help tease out why this is extremely important when we're talking about the legality of autonomous weapons systems. So imagine a fully autonomous Reaper drone designed to neutralize an insurgent leader. We call this case 'high value target.' We have a high value target, an insurgent leader, and we're going to use a drone to take him out.

Speaker4:
So the commander of a forward operating base, alongside his tacticians, legal professionals and other experts, determines that the most efficient plan is to neutralize the target via an aerial strike. Right. It's not beyond the bounds of possibility; we're doing that all the time. And such a strike is lawful because he has his legal professionals next to him to determine proportionality. So the commander has a fully autonomous Reaper drone outfitted to undertake the mission. The drone is then tasked with taking off, arriving at the target's location, confirming that the target is present, confirming that the target is not in the vicinity of many noncombatants, which would render the strike disproportionate and therefore not legal, releasing its payload and then flying back to base. OK, however, suppose that while en route to its target, the drone passes over a company of heavily armed enemy combatants who are isolated in the hills. So despite the fact that such a group is heavily armed, their offensive and defensive capacity against the Reaper drone is functionally irrelevant, and in this case the hostile party is out of combat, must be considered out of combat, despite them being heavily armed. They are both offensively and defensively incapable of engaging with this type of system, and therefore they are not a lawful target for the Reaper drone.

Speaker4:
So now let's change up this exact case to play with our intuitions a little bit. OK, so let's imagine that the same base commander, instead of sending a Reaper alone, decides to deploy a team of Navy SEALs to neutralize the target, and that they are to travel using ground vehicles. So in this case, a high value target with SEALs, an autonomous Reaper drone is deployed to provide close air support for the SEALs. It's a common practice, right? But all other factors are the same as in the previous case, with the SEAL team now encountering the same armed company of enemy troops. So in this case, the Reaper should arguably not view the enemy combatants as being out of combat, because those same enemies can now inflict casualties on the SEALs and thus are not powerless, and therefore they are legitimate targets for the Reaper drone. Yet in this case, the Reaper drone plus the SEALs forms an even greater asymmetric advantage over the enemy combatants. But that's not really the relevant factor that determines out-of-combat status. It's not simply whether or not one is able to defend oneself, but rather whether or not one has the power to affect one's enemy. So in this case, despite the advantage held by the SEALs and the Reaper together, the enemy troops are nonetheless able to inflict casualties, whereas in the previous case they are powerless against the Reaper and are therefore arguably to be deemed out of combat.

Speaker4:
So taken together, these points demonstrate that certain classes of people are not to be treated as out of combat a priori. Rather, it's contextual factors that change the status of the same group of individuals, all other things remaining equal, simply by changing the other actors, machine or human, involved in that scenario. So for autonomous weapon systems, this means that it would be nonsensical, if not technically unfeasible, to create a blanket method for such systems to determine whether enemies should be classed as out of combat. And moreover, this would even be the case for specific types of autonomous weapon systems, because any given combat scenario is marked by its dynamism, something which must be reflected in the ways that autonomous weapon systems operate in order for them to accurately determine whether or not enemies are out of combat. So given the current technical obstacles to such nuanced programming in autonomous weapons systems, we would contend that commanders should hold the final say regarding the rules of engagement and also the adequate standards of due care in these types of engagements, because we're just not at the technical level right now to be able to program these types of systems to make these types of very discrete contextual determinations.
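To make the contextual criterion in these two thought experiments concrete, here is a toy sketch: it treats a group as hors de combat only if it lacks the power to affect any friendly actor present in the particular engagement. The actor names and the `can_harm` attribute are illustrative simplifications for this page, not anything from the paper Steven describes.

```python
from dataclasses import dataclass

@dataclass
class Actor:
    name: str
    # Toy attribute: which opposing actors this one can plausibly harm with the
    # weapons it has (e.g. small arms cannot reach a high-altitude drone).
    can_harm: set

def hors_de_combat(enemy: Actor, friendly_force: list) -> bool:
    """Toy contextual test: the enemy counts as hors de combat only if it cannot
    affect any friendly actor present in this particular engagement."""
    return not any(f.name in enemy.can_harm for f in friendly_force)

# Case 1: the Reaper operates alone; the armed company in the hills cannot reach it.
armed_company = Actor("armed company", can_harm={"SEAL team"})
reaper = Actor("Reaper drone", can_harm={"armed company", "insurgent leader"})
print(hors_de_combat(armed_company, [reaper]))          # True: not a lawful target

# Case 2: the same company encountered by the Reaper plus a SEAL team on the ground.
seals = Actor("SEAL team", can_harm={"armed company"})
print(hors_de_combat(armed_company, [reaper, seals]))   # False: now a lawful target
```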

Speaker2:
Well, let's talk about a system that maybe is at that technical level, so a system that is getting closer and closer to this full autonomy that we keep talking about. And we've been talking about the difficulty and the complexity of regulating warfare in general, with or without autonomous weapons involved. And now let's talk about the complexity of just regulating AI, because this is something that we cannot ignore in this complex structure that we are explaining here. And so I'm curious, because some of the difficulties of regulating AI and tech historically have been because the tech industry at large is hard to regulate. This is due to a lot of things, you know, the inability to understand why AI works or how it works, the inability to audit, the inability to place responsibility on a machine or to understand which person that responsibility should fall under. And so I'm wondering, how does the tech industry fall into this giant puzzle, and do they play a role, do they have a role, in regulating or a lack of regulating these kinds of systems? And things like Google's Project Maven, which have received a lot of hugely negative press about technology companies existing in the military infrastructure and pipeline, are those a part of this conversation as well? What should we know about all of this?

Speaker4:
Well, OK, so, super complex. I'm pessimistic. So that would be a good way to frame it. You kind of touched on it but didn't say it: they are also one of the biggest lobbying groups right now, very similar to Big Pharma and to the military-industrial complex, like Big Tobacco was. So the social structure itself of these types of conglomerates, these massive corporations, in and of itself resists regulation, whether that be self-regulation or external regulation, and we can see that type of corruption at all levels. OK, and that's because all technologies are fundamentally sociotechnical. So they're not just discrete artifacts, but they're influenced by and influence the social factors around them. In this case, the social factors are pushing against any of that type of regulation in favor of a certain type of value, which is their continued, I guess you could say, continuation into the future. So long-term sustainability, not sustainability in the virtuous sense of sustainability, but sustainability of economic profits over the long term. So that is the value that is essentially pushing them. And we've seen that by essentially their words and actions. So it's not unclear and it's not unobvious. It's very obvious essentially what they're doing, and I'm thinking Google and Amazon and so forth. Where do they come into play? Your question, I'm assuming, was far more general right now: where do they essentially come into play, not just with autonomous weapons but within AI, because, of course, AI has to do with autonomous weapons more generally.

Speaker4:
I would say that almost all these companies start off already on the wrong foot when it comes to the design of these systems. Someone who is a proponent of value sensitive design already begins with the values of stakeholders in mind. We are designing for indirect and direct stakeholders, not only the direct users, and not only the economic values of the people designing them, which would be the designers, the management, the executives. Those are important values. We can't pretend like they're not important values. We want to actually have those values in the design if we want the long-term sustainability of these structures that produce these technologies, but not at the cost of the other stakeholders, who are no less important. Right. And other stakeholders could also include the environment, for example. That's a stakeholder, of course, right, that has impacts on and is impacted by everybody. So value sensitive design begins with this. When it comes to AI, I would argue that one of the necessary steps in the salient design of AI for human values is that we incorporate from the get-go a design process that mandates full lifecycle design. Like you mentioned, AI is often, but not necessarily, opaque. OK, these are design decisions that permit the continued use of opaque systems. And of course, there are certain organizations looking at transparent machine learning. But transparency in and of itself is not a good in and of itself.

Speaker4:
But it can be a means towards an end, like something like explicability, something a little bit more broad that actually has meaning behind it. So there's already a push towards that kind of understanding of artificial intelligence from the get-go. So we want transparent and therefore explicable systems. But I would argue that full lifecycle design, lifecycle monitoring, should be a fundamental part of these types of systems. Like I mentioned, taking responsibility for the responsibilities of others. That doesn't mean simply creating a product, introducing it, and fire and forget, we deployed it and it doesn't matter what types of values it embodies or disembodies into the future. These types of systems learn based on their environment, so they're highly contextual, which makes prediction of what they will do, and verification, extremely difficult, which in and of itself kind of mandates that. And it may be costly, but it is what it is. We need to monitor how these systems embody and disembody values over their entire lifecycle, at which point, when it triggers some sort of disembodiment of a value, or embodiment of a disvalue that we don't like, that was unforeseen or unforeseeable, that triggers a state of redesign. So this is the corporations or the groups of individuals who create these types of systems taking long-term, multigenerational responsibility for these technologies, given the potential impacts they can have on society.

Speaker4:
If only we had thought about this when we were thinking about employing nuclear reactors. Right. Because, like I mentioned from the beginning, the very fact of their creation condemns generations into the future, who did not have a choice in the employment of these types of systems 50 years before, to multiple generations of caretaking, right, of shepherding this extremely hazardous byproduct which we still don't really understand how to maintain. Artificial intelligence is not that much different in terms of the catastrophic effects that it can have as it becomes more pervasive in society, which only reinforces the need for full lifecycle monitoring, right, and that being designed into the system from the get-go, being a goal from the beginning: that we need to be able to monitor these systems across their entire lifecycle to make sure they continually embody the values that we want, and the change in values over time, and to minimize their embodiment of disvalues that we don't want, right, as they emerge. So there has to be this kind of not only reflexivity and interactiveness, but a kind of modularity in all these systems that even allows them to be redesigned once they become pervasive in society. But as you can see, this is not really something that there's a concentration on. And it's not hard to see why, because there's really only one value that's being centralized right now and being designed for, and that's profit.
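As a very rough sketch of what this "full lifecycle monitoring" idea could look like in practice, the snippet below watches a deployed system for designed-for values drifting below an acceptable level and flags a redesign when that happens. Every name, metric, and threshold here is hypothetical; it is only meant to make the monitor-then-redesign loop Steven describes concrete.

```python
# Hypothetical sketch of lifecycle monitoring: audit a deployed system against the
# values it was designed to embody and emit a redesign trigger when one drifts.

def monitor_lifecycle(system, intended_values, measure, threshold=0.8):
    """Yield a redesign trigger for every monitored value whose observed score
    falls below the acceptable threshold."""
    for value in intended_values:
        score = measure(system, value)  # e.g. an audit metric between 0 and 1
        if score < threshold:
            yield {"value": value, "score": score, "action": "trigger redesign"}

# Hypothetical usage:
# for event in monitor_lifecycle(deployed_model,
#                                intended_values=["fairness", "explicability", "privacy"],
#                                measure=run_value_audit):
#     open_redesign_ticket(event)
```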

Speaker3:
I'm really glad you brought up the comparison to nuclear weapons as well, because you see, right after World War Two and from then on, you have a lot of movements around the world saying, no, we need complete nuclear disarmament. Let's get rid of this. Let's take down nuclear proliferation. Let's not go back to a Cold War arms race between massive nations. And so with autonomous weapons, I think part of the frame of the curiosity that I have in this conversation is that a lot of your work seems to center around: OK, we have these things, they're going to exist now, what do we do with them and how do we design for them? Is there any room in this to be able to now walk anything backwards, to say, oh, actually, we are not going to give money to this, or we're not going to continue to grow this industry? Or is the cat already out of the bag, and we just need to figure out what we're going to do with it now that we have it?

Speaker4:
So one of the dangers I mentioned with regards to the proponents of a ban is that it's overly generalized. Right. So I kind of already made the case that these types of systems themselves have to be able to make discrete, contextualized decisions and determinations in order to remain legal. Right. But also because there are so many types of these systems that could be employed, right, a blanket ban would do more harm than good. And why? Because there's already a few states that have an explicit orientation towards the design of these systems. China, Russia, the United States are three good examples of countries that have made it clear that they have intentions to design, or are already designing, these types of systems. Now, a blanket ban will result in something very similar to what happened with the ban on land mines. The United States simply said, we're not signing, and that's the end of the story. What is the consequence? They continue to use them, and they will continue to use them. OK, so that, I would say, would be the worst of all possible worlds. And I lay that at the feet of a blanket ban, an overly restrictive ban, something that stifles innovation.

Speaker4:
Right. I prefer a more middle road. I say, if we can determine which types of systems should be banned, ones that cannot come under meaningful human control, ones that cannot make these types of determinations, regardless of the level of autonomy. The level of autonomy is the wrong place to focus on; it's how that autonomy is actually brought into play. If that's going to be the definition on which a ban is built, and that is one of the arguments put forward, the main argument essentially, that it is the level of autonomy that must be banned, then let's take a step back, because simply putting a ban into place that focuses on that will result in the exact opposite of what we want. So I'm not saying the one side, just develop whatever you want, is right. Wrong, OK? We don't want them to be recalcitrant against the laws of armed conflict and international humanitarian law. No, of course not. But if we ban everything, right, then the countries who are developing these technologies simply just won't sign it and therefore will use them anyway and will not be governed by that prohibition. So what we want is to find the middle road that allows for certain types of these systems and therefore appeases the countries that are already investing a lot of their finances and labor into these types of systems.

Speaker4:
Right, to continue to explore how they can be used. Right. So I say we leave a door open. Right. And we argue that, yes, they can be developed towards these types of values, being international humanitarian law and the laws of armed conflict, which already give us a nice, long-standing body of values that must be followed in armed conflict. Right. So we kind of already have the bricks that we can build the house with. Right. But if we're too quick to say that, no, all these types of systems, the furniture in the house, have to be banned, then the house is going to be built anyway, and we won't be able to determine how the inside looks at all. We want to be able to make some rules for how the inside looks. Right. So we have to be careful of stifling innovation while at the same time making sure that whatever innovation does take place adheres to and is aligned with the existing international regulations that have been agreed upon, being the laws of armed conflict and international humanitarian law.

Speaker3:
Steve, obviously we could talk about this for much longer, and there's so much richness and texture in your analysis and your scholarship. In the minute remaining, for folks who want to follow your work or want to get in touch with you, what is the best way that they can do that?

Speaker4:
Aside from my institutional affiliations, where you can find all my contact info, you can just follow me on Twitter at @StevenUmbro, where I post announcements, papers, anything like that that may be of interest on this topic.

Speaker3:
Wonderful, and we'll make sure to put that and also links to any of your papers that we brought up during this conversation into the show notes. But for now, Steve, thank you so much for joining us today.

Speaker4:
I appreciate the invite.

Speaker1:
We again want to thank Steven for joining us today. Jess, what do you think?

Speaker2:
Dylan, I have a lot of thoughts coming out of this interview, and I think I want to start us off by saying first that as a podcast and as an organization, and also just as individuals, we do not condone or advocate for war in general. And so this conversation was a little bit tough for us, and I think I can probably speak for both of us when I say we had to sit in a little bit of discomfort listening to the use of some of these technologies like killer robots and autonomous weapons systems. And so that was just the first thing I wanted to mention, that a lot of the stuff that we talked about today made me pretty uncomfortable. But I think it was a healthy discomfort, because I was very interested in what Steven had to say about how values can play a role in all kinds of technologies, even the kinds of technologies that we typically think from the start have no place in value sensitivity and are maybe inherently evil or inherently bad. So that was something that challenged me quite a bit in this interview, and I think it was a good thing, I'll say that. How do you feel about it, Dylan?

Speaker1:
Yeah. I mean, you know, when I used to work at the United Nations, I used to work a lot with the Quakers, and their whole thing was, you know, nuclear disarmament and moving away from weapons, and weapons of mass destruction specifically. And I think what we're seeing with AI and technology, although technology has always been part of warfare, broadly defined, is, for the reasons that Steven was mentioning, that it's continuing to be a hot topic and a hot button of how we're increasing the efficiency and effectiveness of how we kill, again, broadly defined as well. But coming from the United States context, the billions or trillions of dollars that come out of the U.S. budget every year just to do this development is staggering. And so I greatly appreciate Steven's perspective on this, that, well, this is happening, so, you know, we have to do something with it, so it might as well be more value sensitive, as long as we're intentional about it. But, you know, for me, there definitely is the question: is this a place where we should draw a line in the sand and say this technology should not be used to make our weapons, and our capacity, let's call it what it is, our capacity to kill, better and more efficient? And so that's kind of where I went on this. And I think, again, as an organization, as Radical AI, I think that we have to take a stance saying that we do not condone war in any form, especially how technology is being utilized to forward a war-first perspective, because, as Steven mentioned as well, you know, we are continuing to be in some level of an arms race on this, and I don't think there's an easy way out of it. So I do think it's really important to talk about these things. I really appreciate Steven's work. I also think it can be a really slippery slope, but maybe that's better than ignoring what's happening completely.

Speaker2:
Yeah, it's this weird tension, because I feel like on the one hand, it sounds like you and I are both in agreement that if it was up to us, we would rather this technology just not be made in general, and it would be nice to not have to worry about killer robots and autonomous weapons. But we're now in a situation where it seems like it is being created regardless in the world by different countries, including our own, and so we need to talk about how we do that in the best way. And I'm latching on to one of the words he used, which was efficiency, because I feel like value sensitive design is a really interesting take on how we can optimize our technologies, and how we can design our technologies in a way that maybe challenges the efficiency and accuracy narrative that we tend to think about when we design AI and machine learning technologies. And so I really thought Steven's breakdown of the five different levels of autonomy was interesting, because if it were up to me, I would say, OK, we'll give everything to the hands of the humans and let them make the decisions, because they're going to be the ones who know what it's like to kill another human, and they're going to be the ones who can empathize and sympathize and be compassionate towards other humans, and I guess comply with the rules of warfare, which was also something new to me.

Speaker2:
I had no idea that warfare had all these rules and regulations. But that being said, I also see why there are issues with having humans make all of the decisions as well, because humans make a lot of mistakes, and so it makes sense to try to automate away some of the human decisions that have historically been made. But then again, computers make mistakes too, and algorithms make mistakes too. So I'm stuck in this space of, I guess, confusion and worry and a little bit of fear about what the right course of action is here. Because if we're going to play devil's advocate in this episode and take the stance that this technology is going to be created regardless, and people are going to be killed in warfare regardless, then what is the best way to do that? It's very hard for me to sit here and say, here is the most value sensitive way to kill people. It feels wrong to say. And yet that's the position we're in. So what do we do with it?

Speaker1:
I did appreciate the breakdown of the different levels of autonomy. And I think this is another area, which we see in a lot of different domains of AI that we've talked about before, where the public understanding and the public narratives, the mythos around this, this being autonomous robots or killer robots or autonomous weapons, the image of, like, the Terminator out in the world, is different from what's actually happening on the ground. And what's actually happening in our world is just as terrifying, I think, partially because there is a human in the loop. Right? These are decisions being made at a very high level by people, and I don't believe that all of these people, like when we talk about Big Tech, are necessarily evil, even when the stakes of these decisions are so high. Right? Like human life and the taking of human life. But I don't think the people are necessarily evil. So the question for me, as a moral philosophy question, is about the premise. If the premise is that we're going to use technology in order to kill better, then for me the reason it feels wrong is that I think that is a faulty premise. If our premise instead is, well, how are we going to make the world better, or something like that, and then perhaps warfare might be a part of that, and I don't necessarily believe that, but if that's the premise we're starting with, then I think we reach a different technological conclusion in terms of the development of our tech.

Speaker2:
Well, that's interesting, because I wonder if, for the purpose of this episode and also in line with some of Steven's work, we should try to shift that premise a little bit. Maybe the premise is actually: in the event that humans are going to be killed by one another in this world, because warfare is something that humans do as of the present day, what can we do as humans and as technology creators to make sure that we do this in the most ethical and moral way? Maybe it's an awful premise, honestly, and I hate saying it out loud, but maybe the premise has to take into account the fact that we can't really avoid that people are going to be killed, because warfare is something that still exists. Value sensitive design for autonomous weapons is not a technology that is trying to stop warfare. That is not the duty. That is not the task at hand right now. The task at hand is to figure out ways to do this without harming innocent civilians and while complying with the laws and regulations and rules of warfare that already exist, which I guess is an extensive list. And so I feel like that reframing helps me a bit, even though it still makes me feel super uncomfortable and uneasy, just based on my own personality.

Speaker1:
And there is no easy solution. These are systems of economics and military growth that have been evolving for a long, long time, and they are part of traditions that I don't think we necessarily agree with, but they are part of the world that we've inherited, and there are a lot of resources going to these things. So, in the name of education, I think episodes like this are really important, and in the name of also trying to make changes, I really want to give a thank you to Steven and his team and the people he worked with for pushing this forward, because I do believe it's better than the alternative. I guess I just still wish that we could center this conversation in more of a radically new way of thinking about this stuff, as opposed to starting from, well, war is going to happen because, you know, evolutionarily people need to fight or whatever, so we need to protect our own and kill the other people, because otherwise our own are going to lose their lives. There's some truth in that. And also, if that's our premise, which is a more conservative, us-versus-them premise, then this is where we're going to end up: continuing to be in this Cold War, part three or whatever, situation of continuing to build up arms. So I wish there were an alternative, and maybe right now there isn't, but I like to hope that there might be one in the future.

Speaker2:
Yeah, and speaking of radical reframing and trying to dismantle some of these systems, we both spoke after this interview about one of the topics that we were hoping to talk about a lot but just didn't quite have time for in this episode: dismantling the techno-industrial military complexes that exist, how the technology industry fuels a lot of the military's technology, some of the ethical concerns and issues with that, and how we can possibly dismantle those systems. Unfortunately, we ran out of time to talk about that in this episode, and we're not going to get into it right now. But for anybody listening, if you do any work in that realm, or if you are interested in hearing us do an episode or an interview on that topic specifically to take this a step further, please let us know, because it is something that we care deeply about and we don't want to ignore while we're talking about this.

Speaker1:
For more information on today's show, please visit the episode page at radicalai.org.

Speaker2:
If you enjoyed this episode, we invite you to subscribe, rate, and review the show on iTunes or your favorite podcatcher. Catch our new episodes every other week on Wednesdays. You can join our conversation on Twitter at @radicalaipod. And as always, stay radical.
