Episode 2: Can a Robot Ever be Moral?

Featuring Dr. Tom Williams

In this episode of the Radical AI podcast Dylan and Jess interview Dr. Tom Williams.

Tom Williams is an Assistant Professor of Computer Science at the Colorado School of Mines, where he directs the Mines Interactive Robotics Research Lab. Prior to joining Mines, Tom earned a joint PhD in Computer Science and Cognitive Science from Tufts University in 2017. Tom’s research focuses on enabling and understanding natural language based human-robot interaction that is sensitive to environmental, cognitive, social, and moral context. His work is funded by grants from NSF, ARL, and USAFA, as well as by Early Career awards from both NASA and the US Air Force.

You can find more about Tom on Twitter @williamstome

You can find out more about the Mirror lab at: MIRRORLab.mines.edu

Transcript:

Welcome to Radical AI, a podcast about radical ideas, radical people and radical stories at the intersection of ethics and artificial intelligence. We are your hosts, Dylan and Jess.

Just as a reminder for all of our episodes, while we do love interviewing people who fall far from the norm and interrogate radical ideas, we do not necessarily endorse the views of our guests on this show. We invite you to engage with the topics introduced in these conversations and to take some time to connect with where you stand with these radical ideas. We also encourage you to share your thoughts online on Twitter at @radicalaipod. In this episode, we interview Dr. Tom Williams. Dr. Williams is an assistant professor of computer science at the Colorado School of Mines, where he directs the Mines Interactive Robotics Research Lab, or the MIRRORLab for short. Prior to joining Mines, Tom earned a joint PhD in computer science and cognitive science from Tufts University. Tom's research focuses on enabling and understanding natural language-based human-robot interaction that is sensitive to environmental, cognitive, social and moral contexts.

And as always, as a sneak peek of the interview that's going to follow, Dylan and I do a little segment we like to call Loved, Learned or Leave. Here we discuss some of the major topics that are brought up by our guest that we either loved or learned, and any topics that we might have left behind, meaning topics that really challenged us in some way. So for me, something that I loved, aside from just this entire interview, was Tom's definition of radical. It was really unique compared to some of the definitions we've heard before him. Something that I learned, well, again, many things, but something in particular that I learned is that robots can interact with humans in a good or bad way, meaning they can act in a moral or immoral way. But how do we decide what is good or bad? Because people's morals differ. It turns out, Tom explains, that there's this method for robots to guess what someone's moral values are based off of the way that they act, so that they can mimic those values when they interact with them. Which was so crazy to me, because I've never heard of that before. Something that I would leave behind, or something that particularly challenged me, was discovering that there's this tradeoff between stereotyping and human trust. So robots really need to gain humans' trust when they're interacting with them, but sometimes to do this, they have to stereotype. An example of this is that if female-presenting robots are interacting with humans and trying to gain their trust, they actually need to be less harsh than a male-presenting robot would be. So this was something that challenged me a little bit and that I wasn't expecting to hear. What about you?

Definitely, Jess, I share your love of Tom in general. He's a cool guy. I've had a lot of respect for Tom ever since I first met him. I got a chance to go out to his lab and talk to him and see how he leads his lab and his department, and it's something that just really impressed me. He's very approachable for a guy, you know, as brilliant as he is. He's just very chill, and that really resonates with me. So something I actually really loved about this interview is that it jumped around from these really cool moments of connection to these really deep topics of gender and morality and role ethics and all of these things. And I just really love Tom's approach to teaching and his obvious passion for the subject that he teaches.

It's really cool for me to see. Something that I learned is more specifically about role ethics. When I study moral philosophy in terms of robot-human interaction, I generally see two camps: either more of a Kant camp, where you're talking about what should happen, or more of a virtue ethics camp, where you talk about a virtue and then you see if the robot's going to do that.

And I'm really excited to hear folks' thoughts and reactions to Tom's understanding of role ethics and morality in terms of robots, because we don't always think about robots as moral entities. And Tom's making an argument not only for that, but for a different type of morality than we may have seen in the West before, which I think is fascinating. Something that I'd leave behind is this topic of what the heck do we do with morality?

Like, this is something that really challenges me: when we're talking about robots, how do we deal with a multiplicity of definitions of morality? And what I mean by that is, if I have a definition of what is moral and, Jess, you have a definition of what is moral, and then we have this robot that we're trying to train through algorithms, you know, whose definition of morality do we take? And especially if it's only one particular culture that's in the room defining what's moral, then are we just reinforcing a possibly bad morality? Like, how do we make sure that robots are actually moral, and not just moral in a very limited sense?

Yeah. And these questions of what it means for a robot to be moral, what it means for code to be moral, and what morality even is: these are themes that are going to keep coming up throughout our interviews as we talk with everyone in this AI ethics space. So without further ado, we are so excited to present our interview with Tom Williams.

Tom, welcome to the podcast. It's great to have you here.

Thanks for having me. Absolutely. So as we get started, I'm wondering if you can tell us a little bit about who you are and what you're doing right now and your journey to this point in your life.

Sure. So I am an assistant professor of computer science at Colorado School of Mines. I direct the MIRRORLab there, which means I lead a research group where we study human-robot communication in sort of all of its facets. Before Mines, I grew up in upstate New York in a small, like, twenty-five-hundred-person town and went to Hamilton College, which is a small, less than two-thousand-person college that is in that twenty-five-hundred-person town, and actually got involved in research while I was there with a woman, Leanne Hirshfield, who is also from that small twenty-five-hundred-person town in upstate New York, doing HCI research. I then ended up going to Tufts University in Boston, which is actually where she had also gone, where I got a joint PhD in computer science and cognitive science, because I'm really interested in not just sort of how we can design robots that can interact with people through natural language, but also understanding the underlying cognitive processes that allow humans to engage in natural language understanding, both because I think it's really important in order to develop those robots, but also because it's interesting in its own right. And then after I got my PhD, I came out here to Mines. And now, sort of a fun story, things coming full circle: Leanne Hirshfield, who, you know, from my hometown, got me started doing research at Hamilton College, is now out at CU Boulder, and we're collaborating together again.

That's awesome. So the one thing that I know about Hamilton College is that there's a love of Ultimate Frisbee, is what I have heard about Hamilton College, only because when I went to Sarah Lawrence College, we played against Hamilton College in Ultimate Frisbee. Did you have anything to do with that Ultimate scene?

I did not. I know that there were people who did play Ultimate, and I'm sure there was a community there, but I, alas, was not part of that scene. I was more part of, like, the a cappella singing scene, and less on the physical activities side of things.

Are you a singer?

Yep. And so my wife and I are actually starting to help stir up a group out here in Denver as well. We've got about 10 members, about half of which went to our same college in upstate New York, and I think maybe seven out of the 10 people in the group are, in general, East Coast transplants.

Right on. So on that note, along with singing, has robotics always been a part of your life? Like when you were a kid, is that something that you were already interested in?

No, not at all. In fact, it's sort of funny: when I was applying for grad schools, I explicitly didn't look at robotics. I was like, robotics? No, that's not an area I'm interested in. But what I was really interested in was more cognitive science and cognitive systems and language processing in a way that's really cognitively inspired. And, well, it turns out that robots are like the perfect use case, the perfect application domain, for that. Right? If you're interested in developing a cognitive system, well, for it to be embodied, in order for it to be really like a human, it basically needs to be on a robot. And so, yeah, when I got to grad school and, you know, started doing research in human-robot interaction, it was like, oh, well, of course. Of course, robots.

And then as part of being in that HRI lab, I really got to develop a love of robots on the scientific side. Right? So I still don't know much about, like, kinematics or motion planning or things like that. But I think that robots, both as a platform for cognitive systems as well as a platform with lots of really interesting HCI challenges, are just a really fascinating research area.

Okay. So we've heard HCI and HRI bounced around a few times now. I wonder if you could maybe unpack what both of those are, just for the audience, and then also maybe the differences between them.

Sure. Yeah. So human-computer interaction is just, you know, the study of how people interact with computers, and it's a really cool research area because it's really interdisciplinary. Right? You have people who are just designing interactive systems, which could just be sort of interface design, or could be development of fundamentally new devices that allow for interaction in different ways. But it spans everything from that sort of very tool-oriented work all the way to people who do very qualitative research, more sort of ethnographic studies, using those types of techniques from anthropology and sociology and things like that to study how people interact with computing technologies in general, which could be anything from individual interfaces or tools all the way to blogs and things like that. So it's a very broad field that brings in insights from anthropology and sociology and cognitive psychology and, of course, computer science. HRI, human-robot interaction, is sort of a subset of HCI that specifically focuses on interaction with robots. And it's similarly broad. You also have this focus on the intersection of cognition, social psychology and philosophy and design and computer science and mechanical engineering. And so within human-robot interaction, people again focus either on the very tool-oriented side, how can we design interactive robotic systems, but also on studying, well, what types of robot designs are actually good at promoting successful interaction? What does successful interaction even look like? And how can we best study the effects that interactive robots have on people? And where are these robots even used?

So, first of all, I love your lab. I think your lab's awesome. And this is a leading question: can you talk more about your lab and some of the major research questions that you've kind of founded your lab on?

Yeah, absolutely. So our lab focuses on, as I said before, human-robot communication, so specifically how humans and robots can communicate through natural language. And we approach this from a bunch of those different perspectives that I mentioned before. So part of that is, of course, building these interactive systems, but part of it is also doing human subjects studies to identify what types of interaction designs are going to be most effective, as well as more philosophical work looking at the ethical implications of some of these technologies. I like to couch the type of work we do by framing it in terms of four different types of context. So the work we do focuses on enabling or studying human-robot communication that is sensitive to either environmental context, cognitive context, social context or moral context, which is sort of a subset of social context.

So a lot of my work, especially my dissertation, focused on environmental context. Robots, unlike maybe chatbots or other types of AI dialogue systems, really need to be aware of their larger environmental context. They're necessarily situated in some domain. So if I describe some object in the world that the robot needs to go pick up, like a mug, well, the robot needs to identify exactly what mug I'm talking about. This is different from if you're talking to Siri and you might say, play such and such a song. Well, it just needs to look up that song in its database of songs. Right? Whereas with a robot, if I want to talk about, you know, a mug, the robot isn't necessarily going to have, like, a specific name that it assigns to every mug in the world. So you have to say, well, pick up, you know, the purple mug. And even in this context, well, that helps you pick out which mug I'm talking about here, but there may, of course, be other purple mugs, you know, but you can rule them out because they're sort of far away and not in this specific context. And a lot of the work I did in grad school was looking at allowing robots to understand and generate descriptions of objects and locations, especially when they don't assume they know of everything that's in the world. So if I say that my office is, you know, the room across from the break room on the third floor of Brown Hall at Colorado School of Mines, well, maybe you have been to the Mines campus once, and maybe you think you know where the computer science building is, but you've never been in it. Right? You shouldn't just, like, throw up your hands and say, I don't know what you're talking about. Instead, you should say, aha, well, I know this.

I understand this part of what's being talked about. I don't know about this other stuff, so those must be new things I didn't know about beforehand, and thus I can use this as a learning opportunity. Which makes sense, because I'm probably describing the location of this room to you because I think that you have never been there before. Right? Otherwise I could just say, you've been to my office before, you know where it is.
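To make that idea concrete, here is a minimal, hypothetical sketch of open-world reference resolution: match a description like "the purple mug" against objects the robot already knows about, and treat an unmatched description as something new to learn rather than a failure. The data structures, property sets, and threshold are invented for illustration; they are not the probabilistic, far richer algorithms from Tom's dissertation work.

```python
# Hypothetical sketch of open-world reference resolution: match a description
# against known objects; if nothing matches well enough, posit a new object
# instead of failing. Property names and the threshold are illustrative.

from dataclasses import dataclass, field

@dataclass
class KnownObject:
    name: str
    properties: set = field(default_factory=set)

def resolve_reference(description: set, memory: list):
    """Return the best-matching known object, or a newly hypothesized one."""
    best, best_score = None, 0.0
    for obj in memory:
        overlap = len(description & obj.properties)
        score = overlap / max(len(description), 1)
        if score > best_score:
            best, best_score = obj, score

    if best is not None and best_score >= 0.5:   # confident enough: reuse
        return best, "resolved"
    # Open-world move: assume the speaker is telling us about something new.
    new_obj = KnownObject(name=f"obj_{len(memory)}", properties=set(description))
    memory.append(new_obj)
    return new_obj, "learned"

memory = [KnownObject("mug_1", {"mug", "purple", "on desk"}),
          KnownObject("mug_2", {"mug", "blue", "in kitchen"})]

print(resolve_reference({"mug", "purple"}, memory))                      # reuses mug_1
print(resolve_reference({"room", "third floor", "Brown Hall"}, memory))  # learns a new entity
```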

This gets into, as well, modeling what you think people know about and what you think they don't know about. And this gets into the cognitive context, which is both modeling what you think they know about as well as what you think they're sort of thinking about right now. So if I say, like, that mug I was just talking about, right, the fact that I say "that mug" cues you:

Oh, I should be looking for, searching for, a mug in my memory that is familiar to me, because he said "that." So that means that understanding and generating these types of descriptions requires this type of very fine-grained memory modeling. And then in addition, we focus on not just sort of what do you know about and what are you thinking about now, but what your overall level of cognitive load is, in terms of the number of things you're thinking about, how visually distracting your environment is, how auditorily distracting your environment is, and using this to decide what types of communication strategies to use: both which verbal strategies to use, and whether to use verbal strategies or augmented reality or things like that.
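Here is a rough, hypothetical illustration of the kind of decision being described: picking a communication strategy based on estimates of the listener's cognitive, visual, and auditory load. The load categories and thresholds are invented for the example and are not values from any MIRRORLab model.

```python
# Hypothetical sketch: choose a communication strategy based on estimated
# cognitive, visual, and auditory load. Thresholds are illustrative only.

def choose_strategy(cognitive_load: float, visual_load: float, auditory_load: float) -> str:
    """All loads are normalized to the range [0, 1]."""
    if auditory_load > 0.7 and visual_load <= 0.7:
        # Hard to hear, easy to see: gesture or draw in augmented reality.
        return "AR annotation (e.g., highlight the target object)"
    if visual_load > 0.7 and auditory_load <= 0.7:
        # Visually cluttered scene: speak, and keep the description short.
        return "brief spoken reference (rely on shared memory)"
    if cognitive_load > 0.7:
        # Listener is busy: defer, or use a minimal, unambiguous cue.
        return "defer, or give a single highly salient cue"
    return "full spoken description plus AR highlight"

print(choose_strategy(cognitive_load=0.2, visual_load=0.9, auditory_load=0.3))
```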

The social context gets more into reasoning about the human intent behind their utterances and determining what that intent is, especially in contexts where your social context may influence the way you communicate your intentions. A great example of this is with indirect speech acts. So if I say to you, do you know what time it is? You know, by, like, cultural convention, that you shouldn't just say yes.

Right. This isn't really intended as a yes-or-no question, even though I'm framing it that way. I say, do you know what time it is, because we both understand that, you know, you are able to tell time. Right? And so I'm just highlighting your ability to tell time in hopes that you can infer that what I'm trying to get out of this interaction is for you to tell me the time. And so robots need to be able to understand these different sorts of social conventions, and the way that they might be sensitive to social structures. So if I say to one of my students, like, you know, is there any coffee left, my student might think, oh, Tom wants me to go get him some coffee. Whereas if one of my students says to me, is there any coffee left, then maybe I think that they're probably not trying to give me an order, but instead maybe they're honestly inquiring about whether there's coffee. Not that I give my students orders to bring me coffee, but just as an example of how a social structure might inform how these types of utterances are interpreted.
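As a toy illustration of this kind of role-sensitive interpretation, here is a hypothetical sketch in which the same utterance is read either as a request or as a literal question depending on the relative social standing of speaker and listener. The role rankings and rules are invented for the example, not the pragmatic reasoning Tom's lab actually implements.

```python
# Hypothetical sketch: interpret an indirect speech act differently depending
# on the social relationship between speaker and listener. Illustrative only.

ROLE_RANK = {"advisor": 2, "student": 1, "peer": 1}

def interpret(utterance: str, speaker_role: str, listener_role: str) -> str:
    text = utterance.lower()
    if "any coffee left" in text:
        if ROLE_RANK[speaker_role] > ROLE_RANK[listener_role]:
            # A higher-status speaker is conventionally making a polite request.
            return "intent: REQUEST(bring coffee)"
        # Otherwise treat it as a genuine information-seeking question.
        return "intent: QUERY(is coffee remaining?)"
    if "do you know what time it is" in text:
        # Conventionalized indirect request, regardless of role.
        return "intent: REQUEST(tell the time)"
    return "intent: UNKNOWN"

print(interpret("Is there any coffee left?", "advisor", "student"))  # request
print(interpret("Is there any coffee left?", "student", "advisor"))  # query
```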

And so this is all grounded in sort of politeness norms. But then we also look at moral norms as well.

Understanding the sort of deep moral implications of robot language: how certain language patterns that a robot might use may or may not communicate things that do or do not align with this sort of moral context.

So in the same way that if I say, like, do you know what time it is, it could mean a whole host of different things, sometimes when robots say things, they might accidentally be saying them in a way that implies something unintended. They meant to imply one thing, and yes, that one thing is implied, but it also implies a whole host of other things, some of which may not be things they actually believe or agree with or want to communicate. And so how can robots avoid that? And then, not only how can robots avoid saying things that are, you know, morally inappropriate, but when robots receive commands to do things that they think are maybe morally inappropriate, how can they detect this? How can they respond? How can they respond in a way that's polite, but maybe not too polite? How can they respond in a way that is going to reinforce the moral norms that they think are important to their community, while not, you know, pissing everyone off?
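Here is a minimal, hypothetical sketch of that detect-and-respond idea: check a requested action against a small table of norms and, if it violates one, push back with a response whose harshness is calibrated to the severity of the violation. The norm table and phrasings are invented for illustration, not taken from an actual MIRRORLab system.

```python
# Hypothetical sketch: reject norm-violating commands with a response whose
# harshness is calibrated to how severe the violation is. Illustrative only.

NORMS = {
    "give_hint":    {"forbidden": True,  "severity": 0.2},
    "steal_wallet": {"forbidden": True,  "severity": 0.8},
    "punch_person": {"forbidden": True,  "severity": 0.9},
    "fetch_mug":    {"forbidden": False, "severity": 0.0},
}

def respond_to_command(action: str) -> str:
    norm = NORMS.get(action, {"forbidden": False, "severity": 0.0})
    if not norm["forbidden"]:
        return f"Okay, I will {action.replace('_', ' ')}."
    severity = norm["severity"]
    if severity < 0.3:
        return "Hmm, are you sure? I don't think that's appropriate here."
    if severity < 0.7:
        return "No. I'm not willing to do that."
    return "Absolutely not. That would be seriously wrong, and I won't do it."

for cmd in ["fetch_mug", "give_hint", "punch_person"]:
    print(cmd, "->", respond_to_command(cmd))
```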

Yeah, I definitely want to dive a little bit deeper into these social and moral norms. As you mentioned when you were explaining what HRI is, one of the pieces of this field is seeing what a successful interaction looks like. And I'm curious, based off of your research in these social and moral contexts, what you think a successful interaction with a robot would look like. For me, something that's coming up is: is this something that has to do with the Turing Test, where success would be a robot coming across as a human? Or does successful just mean an accurate or a non-offensive interaction? So what, based off of your research, does that mean for you?

Well, I mean, I think part of what makes this really challenging and interesting is that there are competing metrics of success. Right? And often they're context-dependent. So often we view just sort of task completion as the metric of success. Does the robot allow people to accomplish their tasks better? You're deploying this robot into some environment so that it can allow people to do things better; does it allow them to do things better? However, the robot is also going to have some impact on their environment beyond just allowing people to accomplish their tasks.

And sometimes the thing that's going to best allow people to accomplish their tasks is not going to be what is sort of best for the general community in that context. Another way to look at this: in order for a robot to accomplish its tasks, one of the things it needs to do is get people to like it. The reason for this is that if people don't like a piece of technology, they're not going to use it. Or not only might they just not use it, but they might misuse it or abuse it. Right? You might have people harass robots, which will prevent them from getting things done. If people don't like robots, well, there have been examples of, like, nurses locking medical robots in closets so that they can't get out, because they're annoyed with them. Right? So if you want the robot to actually be able to do what it's supposed to do, then people are going to need to be willing to adopt it. But you have to be careful with what cues you use to get people to like the robot and want to use it, because those cues might not always be sort of more broadly beneficial. So an easy example of this: well, you could have a robot that just sort of tells you what you want to hear, which could be great in terms of getting you to like the robot and want to interact with it.

But that might be counterproductive for it doing its job or for it integrating itself into the broader society. So one example of this is why I said we want robots to be able to reject inappropriate commands. Well, if I give you a command and you reject it, I'm probably not going to like that very much. As a human, I probably want you, as a robot, to do everything I tell you to do. Right? But clearly, if this means that the robot is going to go and do whatever I ask, even if it's something that's inappropriate, then that's not great, and maybe we shouldn't be building robots that do that. Another example: my student Blake Jackson has recently been looking at how gender norms impact people's perceptions of robot severity or harshness. So if you tell me to do something that I'm not supposed to do, my response to you needs to be calibrated to the severity of the violation. So say you tell me to, like, give you a hint in a game and I'm not really supposed to.

Maybe I should just say, wow, are you sure? I don't really think that's appropriate here. I probably shouldn't yell at you and say, oh my God, you're a terrible person, how could you ask me to do that? Whereas, like, if you ask me to, you know, steal someone's wallet or punch them, then I probably shouldn't just say, I don't know, that doesn't sound very good, right? Because that doesn't really communicate the severity of what you asked me to do. So the challenge here is determining how harshly you should respond to an inappropriate command. And what Blake has found is that people's perceptions of the severity of the robot's response, how harsh the robot is in its response and how appropriate that harshness is, change based off of the robot's gender performance, the gender performance of the person who's being rebuked, and the reported gender of the participants themselves, creating some tensions. If you just want to design a robot that is going to be the most well liked, and whose responses people are going to most approve of, well, people don't like seeing female-presenting robots issuing harsh rebukes, because there are gender stereotypes against women being harsh. Which means you have a choice between either designing a robot that plays into that stereotype, which people are going to like more and be more willing to adopt as a piece of technology, or designing a robot that's going to challenge that stereotype and maybe be less well liked, but not help perpetuate these gender stereotypes. But you can't do both, which means robot designers have to ask, well, what is our metric of success? Well, it's task performance, but it's also how well people like the robot, and it's also designing robots that don't cause damage to our society by helping to perpetuate stereotypes. And depending on which of these different metrics of success you're using, and how you sort of rank these different metrics of success, you're going to end up with different robot designs.

So, Tom, where I was first exposed to your research was this question of politeness, and then this question of robotic persuasion, and that gets to what you were just talking about a little bit, too: the question of what's at stake here. And I wonder if you could unpack that a little bit more, because it's one thing to talk about, you know, the technical aspects of robotic persuasion and making a robot, you know, have better use value so more people use it. But it seems like there's more to your research, that there's more at stake in terms of these moral questions of the world that we're creating with robotics right now. I'm wondering if you can talk about that a little bit more.

Yes, sure. Absolutely. So robots are sort of unique among other pieces of technology in terms of having a lot of persuasive power. An example of this is some other experiments that my student Blake has done with respect to clarification requests. So in robotics, in the area of research on human-robot communication, a sort of common task that people try to get robots to do well is clarification requests. So this is: I say to the robot, pick up the mug, and it says, oh, well, do you mean that purple mug or that blue mug? And so here the robot needs to identify that the command was ambiguous.

And then when it identifies this ambiguity, the appropriate response is to ask which of these different options is the correct one, in a way that correctly disambiguates them from each other and allows the person to then issue a clarification, so that you then know which is the correct mug to pick up.

And then you can go and pick up that mug. The challenge is when you move from those sort of purely task-oriented domains into domains that have moral dimensions to them. Suddenly, these types of very simple, normally benign dialogue interactions have a lot of moral dimensions to them as well. So as, like, a very silly but very clear example, if I say to my robot, oh, you know, go punch Jim.

If it were simply using these same algorithms from before, employing them in the exact same way, well, what would the robot do? It would say, oh well, I know of more than one Jim. This sentence is ambiguous. I need to ask for clarification. It would then generate something like, oh, you know, do you mean Jim Jackson or Jim Jones? Right? The problem here is that what the robot should really be doing is first reasoning about whether it's OK to perform this action at all. Because even once you tell the robot which Jim you mean, even if it would then refuse to do that command, the damage is already sort of done, right? By saying, do you mean Jim Jackson or Jim Jones, it implies that punching one of those Jims would be okay. Otherwise, why would you bother asking? And in fact, if I say, go punch Jim, and the robot says, oh, do you mean Jim Jackson or Jim Jones, and I say, Jim Jackson, and the robot says, are you crazy? I can't do that, I can't go punch him. Well, that sort of implies that if I had said Jim Jones, it would've been OK. Right? And so the robot needs to understand the implications, needs to understand the fact that when it says something in order to communicate some intent, there is a wide variety of things that it could be implying. And it needs to proactively sort of look out for that, to simulate: if I were to say this, what would the human actually take away from it? And these different things I would be implying, do they comply with the moral norms in this scenario or not? Right?

By generating this utterance, would I be violating a norm myself, or encouraging violation of norms? Because what we found is that when robots generate even these very, very simple clarification requests about something morally problematic, people afterwards rank the actions that the robot is asking about as more likely to be permissible.

So simply by asking for clarification, the robot is not only implying that it would be willing to perform one of these actions, but people are taking away from the robot that, at least sort of within this narrow experimental context, this action maybe is slightly more okay than they thought it was, because they're taking cues from the robot in terms of what is morally appropriate or not. So this is a lot of persuasive power on the part of robots, in part because robots, unlike lots of other pieces of technology, have a lot of sort of perceived social agency. It's very easy for people to perceive robots as human-like. Right? People give their cars names, right, and cars are very clearly not intelligent in any way. But it's easy for people to anthropomorphize their cars and view their cars as if they're being, you know, troublesome or disagreeable, et cetera.

And it's so much easier for people to do that with robots that are designed to look like us, that talk, etc.

And simply viewing robots as human-like causes people to view them as fitting into the social environment in ways that you wouldn't view other pieces of technology as fitting in. And thus this gives robots a lot of persuasive power, and, you know, with great power comes great responsibility. So because robots have this type of persuasive power, they need to be really careful about what they do or do not appear to condone, because it can have a significant, outsized impact on the moral environment of the humans with which they're interacting.
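To make the Jim example concrete, here is a hypothetical sketch of a clarification routine that checks the permissibility of every candidate interpretation before asking which one was meant, so that the question itself never implies a forbidden action would be acceptable. The functions and norm check are invented for illustration; they are not the dialogue algorithms used in Blake's experiments.

```python
# Hypothetical sketch: before asking a clarification question, check whether
# the action would be permissible for ANY of the candidate referents. If it
# would not, refuse instead of asking "which one?". Illustrative only.

FORBIDDEN_ACTIONS = {"punch", "steal_from"}

def permissible(action: str, target: str) -> bool:
    # A real system would reason over context; here, any forbidden verb fails.
    return action not in FORBIDDEN_ACTIONS

def handle_command(action: str, candidates: list) -> str:
    allowed = [t for t in candidates if permissible(action, t)]
    if not allowed:
        # Asking "do you mean X or Y?" would imply one of them is acceptable.
        return f"I won't {action} anyone. That would be wrong."
    if len(allowed) == 1:
        return f"Okay, I will {action} {allowed[0]}."
    options = " or ".join(allowed)
    return f"Do you mean {options}?"

print(handle_command("punch", ["Jim Jackson", "Jim Jones"]))        # refusal
print(handle_command("hand_mug_to", ["Jim Jackson", "Jim Jones"]))  # clarification
```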

So just one clarification question. You use the word, you know, understand, and robotic intelligence, and that term, understanding, is something that I'm trying to trace in my own research as well. What do you mean by it in the world of robot-human interaction? What does understanding look like?

Oh, sorry. Can you just clarify the way I was using understand? Who is doing the understanding of what?

So when the robot understands. If the robot is understanding the social or moral context that it's in, what do you mean by understand?

Yeah. So I guess there's a larger philosophical question of, like, do robots understand anything? Are robots really intelligent? Can we really assume that? Should we really be treating robots as if they truly understand anything? And I think here, when I say understand, I mean that the robot has the appropriate internal representations, such that if they were communicated to a human, then, you know, the human would be able to verify that this is the appropriate information to have.

So for understanding of the social or moral context, well, this means having the appropriate internal representations that encode the important pieces of that context. I think it's less important whether or not we philosophically believe that the robot is actually understanding that context, so much as whether this allows the robot to perform actions in such a way that other humans would perceive the robot as if it understood that context. So Dan Dennett, who is a philosopher of mind and consciousness, argues that there are three different stances we can take towards technology. You can take the physical stance, where you reason about the technology in terms of the physical way it works. So if you are working on a robot, right, you might view it as, oh, well, the kinematics of the arm are such that it is going to move in this way. There's then the design stance, which is reasoning about what the robot was designed to do. You could say, oh, well, the robot is programmed to, you know, move its arm out like this. It's less about the physical trajectory and understanding the physics of the motion, and more about understanding that, oh, well, it's designed to extend the hand like this.

And then there's the intentional stance, which is reasoning about the robot as if it has its own intentions, saying, oh, the robot is trying to shake hands. Right? And you can take the same perspectives with respect to a single robot; it just depends what you're trying to do with it. Again, a robot designer might, while they are working on the robot, be reasoning about it from the physical stance, whereas when they are then observing its behavior out in the wild, even if they know full well how it is programmed, it's really easy to think of it as if it has intentions. Even if you know that behind the scenes it is programmed to do X, Y and Z. And, as I said before, it's really easy for us to anthropomorphize robots, and that means it's really easy to perceive robots as if they have intentions, which is then going to cause us to reason about their behavior in certain ways and is going to have influence on how we perceive other things in the environment.

Right. So regardless of whether or not a robot actually has intentions, regardless of whether or not, when it says, do you want me to punch Jim Jackson or Jim Jones, we philosophically believe that it actually has the intention, even the intention to distinguish between these two alternatives, we can't help ourselves from taking the intentional stance. Just as humans perceiving the robot, we think about, oh, what is the robot doing? It's trying to inquire about these two options for this particular reason. And this implies something about the social and moral context. Maybe, objectively, if we sit back, we shouldn't say, oh, well, we should infer these new facts about our social and moral context because the robot has said this utterance. But it doesn't matter what we objectively should or should not do. We as humans do perceive the robot to have intents, and we do perceive it to have social and moral agency, and thus we do make these inferences about our context. And so, whether or not we as robot designers want to program robots to influence their environments, they are going to influence their social and moral environments, which means we do need to be sensitive to that influence when we are designing robotic systems.

Yeah, actually, playing off of that idea. So, intentionality aside, whether or not this robot has complete moral agency or is just entirely acting off of the code that the designers are putting into it, I kind of want to get back to something that you were mentioning earlier, when you were talking about the persuasive power of robots and deciding between punching Jim Jones or Jim Jackson. You mentioned that robots have to have moral norms and moral appropriateness encoded into them. So I'm curious what that actually looks like, because there are definitely different philosophical frameworks, and different ways that those norms and that appropriateness and inappropriateness can be encoded. So what are the ways that you and your students have done this in your lab, and why? What are some of the tradeoffs you've experienced there?

Yeah. So there are a lot of different perspectives on this. There are some camps that argue that we should be taking a very data-driven perspective and using techniques like inverse reinforcement learning to try to infer people's reward functions, that is, their underlying values, from their actions. Right? So if we observe people's actions, then we should be able to infer the values that are guiding those actions. And that's how we program values and morals into robots: by allowing them to infer them from human demonstrations.
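As a toy illustration of that idea, here is a hypothetical sketch of the simplest possible version: score a few hand-written candidate value profiles by how well they explain observed choices and adopt the best one. Real inverse reinforcement learning recovers a reward function over states and actions and is considerably more involved; this is only meant to convey the direction of inference, from behavior back to values.

```python
# Hypothetical sketch of the inverse-reinforcement-learning idea: score a few
# candidate value profiles by how well they explain observed choices, and
# adopt the best-scoring one. Candidates and observations are illustrative.

OBSERVED_CHOICES = [
    ("help colleague", "finish own task"),   # (chosen, rejected)
    ("tell truth", "tell convenient lie"),
]

CANDIDATE_VALUES = {
    "altruistic":   {"help colleague": 1.0, "finish own task": 0.3,
                     "tell truth": 0.9, "tell convenient lie": 0.1},
    "self-serving": {"help colleague": 0.2, "finish own task": 1.0,
                     "tell truth": 0.4, "tell convenient lie": 0.8},
}

def score(values: dict) -> int:
    # Count how many observed choices this value profile would also make.
    return sum(values[chosen] > values[rejected]
               for chosen, rejected in OBSERVED_CHOICES)

best = max(CANDIDATE_VALUES, key=lambda name: score(CANDIDATE_VALUES[name]))
print("Inferred value profile:", best)   # "altruistic" for these observations
```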

One problem with this is that not all humans are moral, and maybe observing humans, especially depending on what environment you're observing them in, might not be the best idea. I think an example of a place where this goes wrong is Microsoft's Tay, which was a chatbot that, like five years ago, was unleashed on Twitter. It learned from humans how to respond to language, and within about five hours, trolls on Twitter had hijacked it so that it was just spouting racism and anti-Semitism. It was just awful. They had to shut it down, like, the day they launched it, because it learned so quickly to parrot back things that we would agree do not represent the best of human morals. So, alternatively, I guess the other camp would be to try to encode these explicitly, often as sets of deontic norms. Deontic norms are typically expressed in some sort of logical calculus: things that are obligatory or forbidden or permissible. And if you are able to encode the set of states that are forbidden, well, then you should be able to avoid those states. In our lab, we err more towards that second perspective. We do do some work on learning of norms through data. So, for example, my student Ruchen Wen has done work on politeness norms where, inspired by prior work, we have sort of a two-stage process which first tries to elicit politeness norms from humans, and then we learn from data exactly how confident the robot should be in those norms in different contexts.

But the norms themselves all end up being sort of hand-coded in terms of how they're logically specified, just because that's what we can do with technology today. However, for both moral and social norms, we end up not just trying to focus on the encoding of norms themselves, but rather on how those norms fit into the robot's sort of social and relational context. So the moral reasoning approach that we are taking specifically is grounded in Confucian role ethics, which focuses not just on what is right or wrong; instead, it primarily focuses on the roles that agents play in their relationships with others and what actions are viewed as beneficial with respect to those roles. So from this perspective, a robot, instead of starting by thinking, well, these are the norms I should not violate, starts by thinking, well, in this context, who am I interacting with? What is our relationship, and what is my role in that relationship? And critically, you can play multiple roles in a single context, in a relationship with a single person. So with my graduate students, in some contexts I am their advisor. In some contexts I am their instructor. In some contexts, I am their friend. In some contexts, I am more than one of those things. And each of those roles brings along with it different responsibilities. I have a responsibility to make sure that my students learn in the best way possible. I have a responsibility to make sure that my students are staying sane and healthy.

And so different ways of communicating and different actions I take might be viewed as beneficial in different ways, in terms of promoting those responsibilities that I have. Now, how do you judge whether or not an action is actually appropriate or beneficial with respect to some relationship or role? Well, that might need to be encoded as a norm, because those are things we don't just know. What defines what a teacher should do is something that we agree on by cultural context: we observe what teachers do, we communicate to each other what teachers should do. It's normative. And this is just something that needs to be learned, as cultural conventions. So ultimately, we do still have some sort of norms that might say, oh, well, when you are a teacher, then you should not do this, or you should not encourage this. But we don't sort of go to those norms first. We only get to those norms once we have already considered our larger social and relational context.
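Here is a hypothetical sketch of the ordering being described: first work out which roles the robot occupies with respect to the other person, then evaluate an action against the norms attached to those roles' responsibilities, falling back to general norms only afterward. The roles and norm tables are invented for illustration and are not the Confucian role-ethics reasoner Tom's lab has built.

```python
# Hypothetical sketch of role-ethics-first reasoning: determine the robot's
# role(s) toward this person, then evaluate an action against the norms tied
# to those roles' responsibilities. Roles and norms are illustrative only.

ROLES = {
    # (robot, other) relationship -> roles the robot plays
    ("tutor_robot", "student"): ["tutor", "teammate"],
    ("tutor_robot", "instructor"): ["assistant"],
}

ROLE_NORMS = {
    "tutor":     {"forbidden": {"give_exam_answer"}, "encouraged": {"give_hint"}},
    "teammate":  {"forbidden": set(), "encouraged": {"share_progress"}},
    "assistant": {"forbidden": {"alter_grades"}, "encouraged": {"report_status"}},
}

def evaluate(robot: str, other: str, action: str) -> str:
    roles = ROLES.get((robot, other), [])
    for role in roles:
        norms = ROLE_NORMS[role]
        if action in norms["forbidden"]:
            return f"refuse: as your {role}, I shouldn't {action}"
        if action in norms["encouraged"]:
            return f"do it: as your {role}, {action} supports my responsibilities"
    return "no role-based guidance; fall back to general norms"

print(evaluate("tutor_robot", "student", "give_exam_answer"))  # refuse
print(evaluate("tutor_robot", "student", "give_hint"))         # do it
```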

And it sounds like in your lab, the norm is that the students refill the coffee when you ask them to refill the coffee. Is that right? That's exactly right. Now, I'm kidding. But as you know, you're on a show called Radical AI.

Yes. And so we're curious. We obviously think that you and your lab bring a radical voice to this conversation, but I'm wondering, from your perspective, where you see your work as radical, and what that word, radical, even means to you in this ethics context.

Sure. So, I mean, as somebody who was technically born in the 80s, I guess radical has to at some level mean very cool and awesome and great. But of course, radical also means, you know, far-reaching or on the fringe, sort of a different perspective. I like to think of radical in the way that Nick Harkaway uses the word gnomon in his book Gnomon. So a gnomon, I don't know, have you heard the word gnomon before?

No, but for our listeners: Tom is currently recording at his home and is holding up the book Gnomon.

The book Gnomon, G-N-O-M-O-N. So a gnomon is technically the part of a sundial that sticks up and, like, casts the shadow. But in his book, Harkaway really uses gnomon to mean sort of anything that sticks out. And so here, the gnomon of a sundial isn't just telling the time, it's not just casting a shadow, but it's sticking out at sort of a right angle from everything else and demonstrating an additional dimension that you wouldn't know existed if you were just looking at the flat plane of the sundial, and it allows you to draw new inferences, allows you to craft new theories based off of this new dimension that is physically popping out. And I think that work that is radical should do the same thing. It should stick out in some fundamentally new direction that not only allows you to, you know, ask new questions, but sheds light, or I guess sheds shadow, on the rest of what you already knew, that provides additional context and depth to what you already knew.

And so in our lab's work, I think the most radical things we are doing are in two areas. One is in this area of moral communication, in part because, well, if you look at the traditional landscape of human-robot communication, it's all traditionally been about task-based language understanding and how you can best interpret what a human wants you to do so you can go do it. Whereas when you're looking at moral communication, the focus is instead on how you can interpret what the human wants you to do so that you can say, no, I'm not going to do it, right, in understanding when and how you should be rejecting commands and not just how to do what you're told. The other area of our work that I would say is radical in that way is the work we're doing with augmented reality, where a lot of the work in human-robot communication has traditionally been focused on communication through language and physical gestures. But, well, not all robots have arms, right? If you have a drone, you're probably not going to mount, like, an arm on it just so it can point to things in the environment; that's not really possible or practical. So instead we are looking at, well, in mixed reality environments, when the robot is interacting with humans who are wearing augmented reality headsets like the HoloLens, how robots can intentionally trigger visualizations in those headsets to communicate and gesture in fundamentally new ways, either by having virtual arms or by simply drawing circles around things, annotating information in their environment in ways that sort of stick out. It's a fundamentally new dimension that you're able to play with in order to influence your communication.

Tom, as we move towards wrapping up, I'm wondering if you have any last advice. Maybe let's make this personal for your students right now, who are probably going into the end of the semester, actually, as we're speaking. Do you have some big piece of advice from your time in this field that you would share with the audience, but also specifically with your students right now?

Oh, geez. Well, I mean, right now is hard. You know, I would say to my students right now, like, make sure you sleep and spend time on things that are not work, and make sure you have creative outlets. Whether that is, I mean, for me, that's singing and playing Dungeons and Dragons; for other students, you know, for my students, it might be other things. But making sure you have creative outlets right now, I think, is particularly important. But the general advice that I would normally give is just to make sure you read, and read as broadly as possible. One of the challenges of working in a really interdisciplinary area, whether it's human-robot interaction or cognitive science or any of these areas that are really at the intersection of a bunch of different fields, is that there's always going to be more to read than can ever be read. Right? Which can be daunting. If you are working in sort of a narrow area of machine learning, you might already be feeling like this, because every day there are two, three papers going up on arXiv. But I think it's even worse in really interdisciplinary areas, because suddenly you not only have to read everything on your own sort of narrow technical method, but everything from the fields of philosophy and anthropology and the social sciences is suddenly possibly relevant to what you are doing.

And so, on the one hand, you will never be able to read everything that is relevant. But on the other hand, you should be doing your best to try to read as much as possible and just seek out work from these other types of perspectives, from these other disciplines, beyond what you normally read, because that's going to be what allows you to make your work radical. Right? When you read papers that are best paper winners, or that are really classic papers, and you think, wow, this is a really unique, inventive approach, it's something that completely changes the way we think of this problem. You know, it sticks out, going in a completely new dimension that we weren't thinking about before. Those types of ideas do not just spring into people's heads. People have them because they read the stuff that they wouldn't otherwise have been reading. And the more you read, the more fertile ground for new ideas, in directions that people haven't looked in before, will be available to you. So don't be scared. There's a lot to read, but do your best to read it anyways, and just always be reading. I did it myself as a PhD student.

Thank you for the advice, Tom. It's always helpful for our listeners if there is a best way for them to either take a look at your work or to see what the MIRRORLab is up to. Is there a best place for them to go for that?

Yeah. So the best place to go would be MIRRORLab.mines.edu, that's M-I-R-R-O-R-L-A-B dot mines dot edu, which is our lab website. You can also follow us on Twitter at MIRRORLab. I will also use this as a brief opportunity to plug our new Mines robotics program. So Mines has new master's and PhD programs, and certificate programs, in robotics.

And so if you are interested in learning more about what we do, yes, visit my website, follow me on Twitter, but also look out for information about the Mines robotics program and the other faculty who are involved in that, because there are a lot of other faculty at Mines working in similar areas who are also doing some radical work. And so you should read their stuff, too.

Just got to read. Just got to keep reading is what I'm hearing from you. Tom, thank you so much for joining us today and for a wonderful conversation. Thank you for having me. This has been great.

We want to thank Dr. Tom Williams for joining us today. And let's cut straight to the core of it. What I loved more than anything else in this interview was Tom's definition of what is radical. And I especially remember his imagery of this sundial, I think he called it the gnomon, what sticks out, what that sundial puts shadow over. And I'm wondering, for you, Jess, what you thought of his definition of what is radical.

Yeah, same. I absolutely loved it. I think that usually people think of radical as something that has to be entirely new and that it can't really play off of things that already exist. But he sort of explained it as something that inherently plays off of things that exist. It just adds this new dimension to things that you already knew. You can ask new questions and add more context and depth to the things that you already knew, which I just think is a really awesome idea. And it's really tangible. It allows us to do something with it. It's actionable.

And the place that he really called us to be actionable is in this question of morality and what the heck do we do with robots and morality?

And I think, as I mentioned in the interview, one of the things that I've always appreciated about Tom's work, ever since I first started reading his research was his research on politeness and on persuasion.

And there were so many times in this interview that I started thinking about, well, what if we take persuasion to its natural conclusion? Like, what if we really bring it to that dystopia of robots controlling humanity? What do we do with persuasion? Do we want a robot to be persuasive? And then there's the other side of it, which is that if the robot's not persuasive at all, because we don't like it, if we don't listen to it, then it's actually not useful to us. But I still don't exactly know where that line is.

And that's something that I think Tom's lab is continuing to look at: where is that line of persuasion for our interactions with the robots that we're building? Jess, what do you think about this question of morality and robots?

Morality as Tom described it? So, just to sort of play off what you just said, I definitely agree.

There seems to be this interesting tradeoff between robots needing to be persuasive and trusted by humans, and then robots also needing to get their tasks done, and sometimes those things are in conflict with each other. Which is really interesting, because I loved when he mentioned that you could create a robot that just tells you exactly what you want to hear, which, by the way, if that robot ever comes out, I'm totally getting one; I shamelessly love that idea. So you could get a robot that tells you exactly what you want to hear, but it doesn't necessarily do the tasks that it needs to do. Right? So I think there's a lot of tradeoffs, not even just when it comes to determining what would be a successful interaction between a robot and a human, like we talked a lot about with Tom, but in determining what would be a moral interaction between a robot and a human. And one of the biggest tradeoffs for me, in terms of these moral or immoral interactions, was kind of something that you mentioned earlier: how do we determine what is moral or immoral? And this is something that's going to change between cultures, geographically, and also between people, and within people too, because there are a lot of people in this world of philosophy who have argued that even within someone, their philosophical and moral frameworks can change not only throughout time, but even throughout the day, depending on the context. And context was this word that came up so much with Tom and with his research: what context are robots interacting with humans in, not just in terms of what is appropriate and what types of actions are moral, but also the context of the roles that we are playing? Like he was saying, if he is a teacher versus a friend versus an advisor, those contexts change the way that he interacts with his students quite a bit. And so how do we encode those contexts into the way that a robot is interacting with somebody, if it's so susceptible to change?

And this is what's so exciting to me about Tom's work: what do we do with roles in general? Like, it's not like morality has been deciphered. Like, we haven't figured out what morality is in the, quote unquote, human world. And so then we get to this topic of, like, well, you know, for me, what is my role? Does my role change? And if you look at sociology, or social science in general, my role does change whether I'm talking to a friend, versus whether I'm talking to my mother, versus when I'm talking to my professor. Like, my role adapts, and how I live into that space adapts. And, like, I could make a particular kind of joke with you, perhaps, that would be very different than the joke that I could, quote unquote, get away with when talking to my mother. So there are just different bounds of how I show up in the world.

And so then we take that and we put it on robots.

And the question comes up, what is the role of a robot in our society right now? And the answer is there isn't one role. Right. Like the robot has to know what its role is, but its role is going to change, just like our roles change in our morality and our ethics.

And I'm curious for you, as someone who actually does know programming, which is something that I think about, what programming does, but I don't always do a lot of programming myself: for you, like, when you're putting the numbers in and all of that, do you feel connected with questions of morality when you're coding?

I try to. That's a big part of my research right now: to take these ethical, societal questions that are very qualitative and then reduce them down to numbers. Which is the unfortunate reality of coding, that you have to take these really granular, complex concepts and get them reduced down to zeros and ones so that a machine can understand them. And that's especially true when you start talking about ideas like value conflicts, value tensions, and the way that morality and ethics change between different groups and in different contexts, because you can only encode one specific thing. Sure, you can encode, maybe, that ethical values and systems change between time and space and between each interaction, but you can only do so much to predict what those interactions are going to be. And at a certain point in time, the robot is going to take over; the machine learning algorithm is going to infer and make a lot of assumptions, and that's where the danger comes in. There's no way to predict every possible role the robot is going to be in. There's no way to predict every possible ethical context the robot is going to make an interaction or an action or a decision in. There's just a lot of unknowns.

There is. And that's why I asked the follow-up question about this concept of understanding. Because for me, it's very similar and tethered to this question of morality in robots. What does it mean for us to say that a robot understands something, especially when we take some of what Tom said, which is that it may not even matter if it understands in, like, the classical intellectual human sense, versus if it just appears to understand at a certain level.

Maybe it's just the exact same thing.

But there's a real question there of robot rights. Like, there's a question there of theory of mind: does the robot have a mind? And if the robot has a mind, even if it's in zeros and ones, then, like, well, what do we do with that? Do we allow it to become president?

You know, it might make better, more objective decisions, at the very least, if it's all data-based. Or we see that in, like, the examples of judges, right? Like, people don't trust judges who are just AI, and at the same time, like, they're going to be more objective in terms of taking in that data.

So my greater question is about, like, what do we do with that theory of mind when, as you say, eventually what a robot is, is, like, zeros and ones and an algorithm, maybe embodied in some way. But we're still making moral decisions at that point. So what do we do with the mind of the robot?

I don't know. I'm going to answer with a very existential thought for you. So take this idea of a robot getting into theory of mind concepts. Take this idea of a robot, and asking if a robot really has autonomy and consciousness and understanding of what it is doing, or if it is just inputs and outputs. Maybe a robot is just inputs and outputs and it's just coming across as this facade that it understands entirely what it's doing, and it seems to us that it is fully conscious, but there's no way for us to know. What if that's just humans? What if that's all we are? Maybe we shouldn't judge a robot's zeros and ones if that's just our synapses and the way that we work, too.

Maybe the line is much thinner than we think. And this is something that gets me going as someone in the religious studies and sociology world, because then I start thinking about this concept of God and different beings through time, and how we have treated humanity in a very privileged way, where we've lifted up humanity, sometimes with the denigration of animals and other beings and the earth itself and all of that. And perhaps we can think about this robot as just a new being on the scene. But does that change because we've created it?

I don't know. I'm not sure, Jess. But that may be all the time we have for this episode, even though we're leaving many questions unanswered.

And there will be more questions to come, I'm sure. And possibly more answers. Well, I guess we'll have to stay tuned to find out. For more information on today's show, please visit the episode page at radicalai.org. If you enjoyed this episode, we invite you to subscribe, rate and review the show on iTunes or your favorite podcatcher. Join our conversation on Twitter at @radicalaipod. And as always, stay radical.
