Atlas of AI with Kate Crawford



What is the Atlas of AI? Why is it important? How is AI an industry of extraction? How is AI impacting the planet? What can be done? To answer these questions and more, we welcome to the show Dr. Kate Crawford to discuss her new book Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.

Dr. Kate Crawford is a leading scholar of the social and political implications of artificial intelligence. She is a Research Professor of Communication and STS at USC Annenberg, a Senior Principal Researcher at Microsoft Research in New York City, and the inaugural Visiting Chair for AI and Justice at the École Normale Supérieure in Paris.

Follow Kate Crawford on Twitter @katecrawford

If you enjoy this episode, please make sure to subscribe, submit a rating and review, and connect with us on Twitter at @radicalaipod.



Transcript

Kate Crawford_mixdown.mp3: Audio automatically transcribed by Sonix. This transcript may contain errors.

Speaker2:
Welcome to Radical A.I., a podcast about technology, power, society, and what it means to be human in the age of information. We are your hosts, Dylan and Jess.

Speaker3:
In this episode, we interview Kate Crawford about her new book, Atlas of A.I.: Power, Politics and the Planetary Costs of Artificial Intelligence. Atlas of A.I. was published by Yale University Press on April 6th, 2021. And for those of you listening to this episode on the day it airs, that was indeed yesterday. So if you want to buy your own copy of the book, you can find a link in our show notes, as well as a link to a website that helps you find books at your local indie bookstore.

Speaker2:
Dr. Kate Crawford is a leading scholar of the social and political implications of artificial intelligence. She is a Research Professor of Communication and STS at USC Annenberg, a Senior Principal Researcher at Microsoft Research in New York City, and the inaugural Visiting Chair for AI and Justice at the École Normale Supérieure in Paris.

Speaker3:
When we started this podcast almost a year ago, we came up with a list of people who we were considering dream guests. These were people who we would be unbelievably ecstatic to have on the show if we were ever given the opportunity. And one of the first people to land on this list was Kate Crawford, because she has played such a huge, incredible role in both Dylan's and my work and scholarship and research. Starting from when I first heard a speech of hers in 2017, she was a big reason why I landed in this field in the first place. And so having her on the show now to talk about her new book, which is a culmination of so much of the incredible work that she has done for this field and for this community, feels so full circle for us, and we feel so unbelievably lucky to have been given this opportunity. So we are just so excited to share this amazing conversation with Kate with all of you, and we are so excited to hear what all of you have to say about her book.

Speaker2:
So we're on the line today with the one and only Kate Crawford. Kate, thank you so much for joining us today.

Speaker4:
It's such a pleasure to be here, Dylan and Jess. I've been listening to you all throughout the pandemic. I feel like you've become my audio buddies during this time. So it's lovely to talk with you now.

Speaker2:
Absolutely. And today, it's very exciting because we're talking about your new book, Atlas of A.I. And so the first question we have for you is: what is Atlas of A.I.? And maybe it would be helpful to unpack what you mean by atlas, and perhaps what you mean by A.I.

Speaker4:
Absolutely. Well, the idea behind this book was really to explore how artificial intelligence is made. And I mean that in its widest possible sense, essentially thinking about the economic, political, cultural and historical forces that shape what we call artificial intelligence. So I think when some people hear the term A.I., they often focus on the technical side, which might be algorithms or neural nets, or they might think about specific technical platforms like Amazon's Alexa or Google's cloud. And I always think of Russell and Norvig's classic textbook, and I think they use that description of A.I. as an intelligent agent that takes the best possible action in any situation. And I think if we focus on the way these definitions are doing work in setting the frame, they really point to how A.I. should be understood and measured and valued and governed. And of course, if A.I. is defined by consumer brands or corporate infrastructure, then ultimately marketing and advertising have predetermined the horizon. But if we see these systems as somehow more reliable or rational than any human, which is of course what you would assume if they can take the best possible action, then it also suggests that we should be trusting these systems to make high-stakes decisions in health and education and criminal justice and you name it. So I think in so many ways these perspectives leave out the big stories: the stories of the human labor that's needed to make the systems work, the vast amounts of energy and mineral resources needed to build infrastructures, and of course the histories and structural inequalities in the data that's used as though it is somehow just neutrally presenting ground truth.

Speaker4:
So it's been clear to me, you know, in the five-plus years of writing this book, that I really had to dive into A.I. at a much bigger scope and ultimately explore the materiality of how artificial intelligence is made. And the reason I like this metaphor of an atlas is that I think atlases are very unusual books. They help us look at things at different scales, if you will, so we can look at the scale of a planet or we can zoom in and see a city or a mountain range. And for me, that's the right metaphor for thinking about how A.I. works. The aim here is to leave the abstract nowhere of algorithmic perspectives and ultimately to put A.I. in specific places, the places where people and institutions are making choices, and to look at how power is consolidated and operationalized through these systems. And it's interesting to think about one of my favorite writers about technology, the physicist and technology critic Ursula Franklin. She's a favorite of many of us in this space, I know. But she has this lovely line where she talks about how maps are really designed to bridge the gap between the known and the as yet unknown, and that they are these testaments of collective knowledge and insight. So for me, creating an atlas was also a way of giving a nod to all of the scholars and activists and artists who really shaped the way that I think about artificial intelligence as well.

Speaker3:
And in the field of AI ethics, or I guess responsible AI, which you have been incredibly foundational in, we tend to assume that everyone has a similar definition of AI without really questioning that very often. And I thought it was interesting that in your book you mention that you think artificial intelligence is neither artificial nor intelligent. And so I'm wondering, Kate, what do you think artificial intelligence is?

Speaker4:
Well, it's interesting, because I agree with you that it's such a commonly assumed term, and it really shouldn't be. I think this definitional work is incredibly important. And certainly for many years, when I give talks or teach, I really start by defining artificial intelligence in different ways. First is obviously to think about the technical, and to look at the histories of different technical approaches over the last 60 years. But then I look at the social practices that go into making AI, and that means: who's in the room? Who's making decisions about what the system will do? Who is it optimized for, and who benefits from it? And then also to look at the infrastructural questions: what are the back-end infrastructures that make this possible? What are the economic infrastructures that make this possible? And what are the large ecological processes that are used to make these sorts of systems work? So for me, it's really trying to widen that lens way out in terms of how we understand what artificial intelligence is. And I think when you do that, you find that it's neither artificial nor intelligent. It's both embodied and material: it's made from these natural resources, human labor, logistics, and histories and classifications. And it's not intelligent in the sense of anything like human intelligence. It's not able to really discern anything without enormous data sets or predefined rules and rewards, and it depends entirely on a much wider set of political and social structures.

Speaker2:
That was something that Jess and I picked up on and were pretty fascinated by in your analysis in this book: optimization. And I'm wondering if you could unpack a bit more about who these systems are optimized for, and then perhaps who they are leaving out.

Speaker4:
Well, I mean, the interesting thing is that we have to think about that through the entire supply chain. You need to go all the way back to the miners who are actually extracting the minerals that these systems will be built from. You need to look through the corporate histories in terms of who's been building these systems and who runs them. And then you also need to think about the engineering workforces that are there to essentially construct systems that serve populations that all too often are imagined to look and act just like them. So we have issues throughout the entire pipeline of A.I. production where we need to think about who these systems are for and whose interests they serve. Certainly one of the most dominant trends, and this was something that I did a lot of in the book, is to go back through the histories of who funded artificial intelligence from the beginning. As we know, the Department of Defense in the US and militaries around the world have been very powerful funders of the priorities behind A.I. And I think in some ways that really shaped the way these systems engage with the world, from surveillance priorities through to ideas around targeting and scoring and ranking. These are priorities that have been traditionally associated with militaries and policing. But we also have to think about the influence of capital, and certainly, in the last 15 years, the increasing push of these systems into commercial logics, into being large engines of capital. That really means we have to look at the ways in which they serve advertising priorities, the sorts of profit priorities. It's very difficult these days to point to systems that are really operating outside of those logics.

Speaker3:
Another thing that you mention in this book that seems to be a bit of a recurring theme is that you call A.I., or the industry of A.I., an extractive industry. And so I was wondering if you could speak to what you mean by this, and how it might have shaped the stories that you chose to tell in this book?

Speaker4:
Well, it's interesting, because certainly the creation of contemporary A.I. systems is entirely reliant on exploiting energy and mineral resources from the planet, cheap labor, and data at scale. And this is something we might know conceptually, or we might put it out of our minds. But I really wanted to see that at work and to come to understand how those patterns of extraction function, how they connect to each other, but also how they work in both space and time. And what I mean by that is it actually means visiting those different locations around the planet and seeing what happens there, but also tracing the ideas and the histories and the materials from the past that are used to shape current technical systems. So that meant going to actual mining sites. I'll give you a couple of examples. I traveled to Silver Peak, Nevada, which is the last functioning lithium mine in the United States. And here is where you can really see one of these sites, in this ancient desert landscape, where huge amounts of lithium are being extracted to create lithium-ion batteries. And I know that might not sound very sexy, but lithium-ion batteries are the backbone of how a lot of consumer devices work, from the iPhone all the way through to Tesla electric vehicles. But again, we are dealing with a crisis of availability in terms of how much lithium we imagine it is possible to still extract. In fact, I was just reading a very recent report, which came out a couple of weeks ago, saying that if we manage to move to best practices in terms of recycling lithium, which currently we are not too great at, I have to say, we could be fortunate enough for these reserves to last until just after 2100.

Speaker4:
If we are not so careful and don't recycle these resources, we could be looking at 2040. So that's an extraordinary time horizon to be thinking about for one of these core mineral components of the planetary computation networks that we are so reliant on. And the more work that I did looking into things like rare earth minerals, looking into the energy expenditure costs of things like large language models, which others like Timnit Gebru and Margaret Mitchell have written about, it is really extraordinary to think about how close we are to running out of some really key resources. And indeed, the Biden administration has released a report looking at the security risks of being reliant on particular rare earth minerals that in many cases are extracted in other places around the world, specifically places like China and Australia. So looking at those global extraction dependencies in A.I. was a really profound part of the journey for me in doing this book and doing this research. Obviously, then, too, we have to think about labor, and labor extraction, and again, how that works. We are familiar with things like Amazon Mechanical Turk workers. But I also wanted to see the kind of almost factory infrastructures that A.I. companies rely on. And so one of the sites that I went to was inside an Amazon fulfillment warehouse, to see the experience of work for the people who are really the logistical connective tissue, if you will, between robots and the boxes arriving at your door.

Speaker4:
These are people who are being paid, in some cases, fifteen dollars an hour now, but that is much less than factory work at that level of difficulty used to pay as recently as 20 years ago. And to really go into those places and to see the human costs, to see what it is physically to go through the stress of highly repetitive work, but also the algorithmic management systems, things like the picking rate, that are really there to pressure every worker to try and make sure that they're collecting all of the things off shelves and packing them into containers in the minimum possible time. So for me, going to these sites and spending time in these locations was a really important way of understanding extraction: how maximum value is being extracted from human bodies and labor, how it's being extracted from the earth, and also, of course, how it's being extracted from data, in terms of the large-scale training sets that I've certainly spent many years now studying and looking at. So for me, it really became a unifying metaphor, I'd say. But it's far more real and material than that, in that A.I. is indeed becoming this sort of super extractive industry, in the same way that mining really emerged as the industrial extraction powerhouse of the 19th and 20th centuries.

Speaker2:
So how did we get into this mess, is my question. And I guess by this I mean: I think there is a myth out there, we hear a myth out there, that A.I. is either this post-material space, or a space that doesn't have materiality behind it. And obviously in your work you're claiming otherwise, that it is almost by definition a material space. So how do we square that? And also, in the history, how did we get into this mess?

Speaker4:
Well, I mean, it's interesting to think about the way in which artificial intelligence is a product and a reflection of late capitalism. It's almost impossible these days to talk about A.I. without talking about capitalism. But without diving into the whole history of that, let's just look at the ideologies that I think are operating, that have really brought us to this point where we don't look at those material consequences. And I think there are two really significant ones. Certainly the big one is this idea of Cartesian dualism, that mind and body are separate, and that therefore you can create something like disembodied intelligence that has no relation to material forms. And that is very much a core idea here. Ellen Ullman has this fantastic line where she talks about how this myth, that the mind is like a computer and a computer is like a mind, has infected decades of computer science, and it's ultimately become an original sin for the field: that we really think about these systems as somehow just being disembodied intelligence that we ultimately aren't connecting to the earth or to human bodies at all.

Speaker4:
And I think the other thing that's really problematic here is the way in which we assume that intelligence itself is something that is easily created in these ways without being drawn from existing data. And I know this is something that you've looked at a lot on this podcast, but the way in which I think we have been blind to the sorts of worldviews that get smuggled in by the data sets that we scrape off the Internet has been another, I think, very serious problem for the field, which we're really just seeing the fruits of now: that by using data sets that essentially just represent extremely normative, racialized and gendered logics of the world, we didn't really think hard enough about what that would do to the technical systems that are built on those large-scale datasets. So I think these are some of the slippages that have happened, particularly in the last 15 years of A.I. development, that have brought us to the critical juncture that we're at today.

Speaker3:
Yeah, and you mentioned at some point in the book that computer scientists have fallen into this pattern of thinking that the computer speaks or thinks like a human and the human thinks like a computer. And I'm curious, if we are rejecting that deterministic viewpoint, as this book seems to, what do you think of A.I. and its ability to be intelligent? Do you see A.I. as an intelligent thing?

Speaker4:
You know, I don't think you can really have that conversation without looking at the history of intelligence itself, and how this term has been used as the premise for everything from profound programs of eugenics through to the sorts of IQ tests that were designed to really favor people who came from the most privileged backgrounds and to work to the detriment of those who didn't. So the term intelligence itself is intertwined with ideas of class and race and gender, and it has been used to exclude as much as it's been used to try and lionize and prioritize particular types of social activity. So in that sense, I'm much more interested in thinking about what these large-scale technical systems are good at, what they can do, where these forms of optimization and prediction are going to be helpful in addressing the very real challenges that we face in the 21st century, and where they aren't that useful. And to do that, I think we need to strip away a lot of this mythology around intelligence and to be looking with clear eyes at where these systems work for us and where they fail us.

Speaker2:
One of the things that I really loved about your book was the different levels of scale that you were working at. So maybe more on the local side, but then you also went out to the planetary side. How can we think about scale in terms of building maybe more just systems of A.I.? Or is scale maybe, by definition, a problem in us building those models?

Speaker4:
I love this question for many reasons, but certainly, from the perspective of writing this book and wanting to bring in different scholarly forms of thinking, I was really inspired by, and I'm not sure if you know it, this fantastic film and book that was produced by Ray and Charles Eames called Powers of Ten, which is all about scale and how we might look at the planet. I strongly recommend it for pandemic viewing; you can find it on the Internet very easily. For me it has always offered a way of trying to look at systems from different perspectives. But it's interesting that scale has also been a major fault line, I think, through the way that technical systems are built and how they're imagined to work in the world, which is that so many of the errors and problems that we're finding are because something was designed to scale over a mass population rather than thinking about variations in experience, about difference, about how people actually experience the world differently. So in that sense, the way in which technical systems are used to scale is, I think, one of the core problems that we need to do far more thinking about and more work on. And it's actually something that I've really been enjoying working on with colleagues like Michael Mantello and Esau and Mike Ananny at USC Annenberg, working specifically on this concept of scale and where I think it trips us up in technical system design, from the perspective of social science and studying science and technology. I think it's really important that we look at technical systems at different levels of scale, that we study them, that we go to those places and understand the way that scale levels actually intersect. And I think for me the real watershed moment in my past that really helped me shift the way I was researching these questions was actually when I was doing a project called Anatomy of an A.I. System

Speaker4:
with Vladan Joler. We started that, gosh, I think it was over five years ago now, where we really wanted to look at the life cycle of a single Amazon Echo, to go from birth to life to death: from how the components were mined and smelted and shipped, all the way through to the e-waste tips where these devices are discarded, generally in less than four years, ending up in these toxic waste dumps in places like Ghana and Pakistan. And doing the really extensive research behind that project was, I'll be honest with you, extraordinarily difficult. You're trying to dig into supply chains that are intentionally kept opaque. A lot of mining companies don't want you looking into where these minerals come from or how people are paid or not paid to produce them. The same goes for studying deep learning systems. It's extremely difficult and, again, in many cases proprietary. I mean, good luck getting a look at the sort of large-scale training sets that, say, Facebook or Google use. So in doing that project, it was really extraordinary for me to realize that this was just looking at a single device. What would happen if we changed the scale and looked at the entire industry? And that was really one of the big central motivations behind creating Atlas of A.I. So scale is important, but it's also really difficult, and it's something that I think researchers have to struggle with and work with: the need to look at how technical systems affect us at so many different scales every day.

Speaker3:
Let's dig into that motivation a little bit more, because one of the things that both Dylan and I really loved about this book was that you brought so much of yourself into it at times, and you were sharing stories of things that you went and visited and saw to uncover things like these inequitable supply chains. And so, what does this book mean for you? Why this book, and why now?

Speaker4:
Oh, this is such a good question. And of course, there's always a lot of telescoping of time in asking why now, because books take so long. But, you know, if you'd told me five and a half years ago that when this book came out it would be during a pandemic, and that we'd be where we are with the tech industry really moving ever deeper into the interstices of work and education and health care, I would have been very surprised, I'll tell you that. But there are some things that don't surprise me, that are certainly consistent patterns that have simply become intensified since I first began this project. So to really answer that question means to dig into so many of the concerns that motivate each one of the chapters of this book, and I'll give an example of one of them. When I first started writing about the issue of bias in large-scale technical systems, it was back in 2012, would you believe it? And back then, I remember giving talks and writing papers, and people would say, do you really think this issue of bias is a thing? I mean, the more data we have, the more objective systems become. We know that. So this issue is simply going to go away in a couple of years. Why are you looking at it? I was always kind of really mystified by that response.

Speaker4:
But obviously, in the nine years since, we've seen those problems simply become more and more extreme as these systems have had more and more data. So certainly those sorts of presumptions, I think, have been upended. And it's always been clear to me that this term bias itself is actually unhelpfully narrow, and I think easily captured by industry to say, oh, well, you know, it's bias and we've simply, technically addressed that. We've fixed it all. We've collected more data and it's no longer biased anymore. That is, I think, a fundamental misunderstanding of what's going on. And in many ways it comes from the term bias itself: it means very different things in statistics as it does in law as it does in sociology. So I think in many ways we're speaking at cross purposes when we use a term like that. And it simply doesn't go far enough into the logics of how technical systems are built and constructed. So I gave a talk at NeurIPS back in 2017 called The Trouble with Bias, which for me was really about suggesting that we needed to look at this much bigger question of classification. And for me that's been a big, big motivation in the book over many years: looking at how classifications work, and how classifications have always worked, for centuries, to essentially centralize forms of power, to create social categories in which people can be counted, understood, denominated.

Speaker4:
And then how does that actually translate into technical systems, and what work is being done when classifications are accepted whole cloth? And we see that with racial classifications, with gender binary classifications, which, believe it or not, are still in these technical systems to this day, which is always utterly extraordinary to me. But also, at a much more granular level, how people are being categorized into emotional categories. As we know, Ekman's six emotions are being read by emotion detection systems, despite the fact that we have so many studies saying you simply cannot detect internal emotional states from external facial expressions. We also have systems that are classifying you by character and by personality. This was something that became really clear in working with things like the ImageNet data set. It's extraordinary how many categories are about your moral character, your worth as a person. So for me, that's one of the big shifts that I was really focused on in this book, which is that we have to move this debate on. I think it's stuck in a repeating loop where we're really not addressing these core questions. So that's one example of the motivating forces that went into this book.

Speaker2:
This is not to drive too much more into the philosophical, but I'm going to: what you're saying makes me think of Foucault's work on us being disciplined into categories. And I'm just curious about categories in general, because they're obviously being embedded into our artificial intelligence systems. What can we do about categories? Because they're also a way that we make sense of the world as humans. So are there elements where you think categories can be helpful as we talk about A.I. and justice, or is categorization something that we should possibly distance ourselves from?

Speaker4:
Well, you're absolutely right that this takes us back to Foucault. It takes us to the feminist phenomenologists who've written about this for many years. There's this term that Knorr Cetina uses, and others: this idea of the epistemic machinery, that categories become the epistemic machinery of how technical systems, quote unquote, see the world. So it's not something that we can avoid. It's not as though you're going to create category-less ways of understanding, or if so, we are yet to do it in the way that machine learning works today. But I think certainly how we start to engage with the work of categories is now something that's extremely urgent. So I'm thinking here of the ways in which, again, because I spend so much time looking at training data sets, they often build on each other. They often just import a previous training set and say, oh, we'll just use that as our ground truth. So I'm thinking here of how ImageNet imported WordNet, and of course WordNet itself was also shaped by things like the Brown Corpus, which was produced back in the 1960s. So you have these genealogical layers of categories that build on each other, and these are in many cases, I think, profoundly shifting sands to be building on. I mean, so many of the words, if you go back and look at ImageNet, these sorts of WordNet categories, come from very old and traditional forms of English speech that might have been more common in the 1930s and 40s.

Speaker4:
But they look completely bizarre today. There are words like trollop and slattern and jezebel, which, you know, we use now as kind of a fond affectation, but that certainly was not the way they were used in the 1930s and 40s. And again, language is a product of culture. It's always changing; it is constantly in flux. But these technical systems fix language into place as though that is a constant. We see this problem again now with GPT-3, in the way that if you put in a word like Muslim, one of the most common responses you get is text that's associated with terrorism. And this is something that Su Lin has done work on as well. And for me, these are the sorts of issues that we strike again and again: how are we essentially crystallizing artifacts of the past, language artifacts, image artifacts, and then using them to create the systems of the future? And that way of making knowledge, as epistemic machinery, is what we have to do far more contending with, because if we don't, we're going to repeat the errors of the past. We're going to take in whole cloth those structural biases, which are racialized and gendered and classed, and take them as though they should be perpetuated, when absolutely we should be doing the opposite.

Speaker3:
Now, bridging from the present and the historical pieces of this book, and I guess the earthly pieces of this book, into the future and into space: you actually end your book with your coda chapter on space, and what is happening in A.I. technology and in the A.I. disciplines specifically around space travel and what it means for humanity. Could you speak to that a little bit?

Speaker4:
Absolutely. I mean, you know, this was for me one of the really curious experiences of writing this book. I was looking at the ways in which the capital logics, the enormous amounts of money that have been generated by a very small handful of people, are operating. Where is it going? And so I was looking at the real tech billionaires, the Elon Musks, the Jeff Bezoses. Where are they now investing their money? And it is extraordinary to see how many of them are really investing in outer space, in creating a new, entirely privatized space race. Now, for me, that is problematic on many levels, but certainly it speaks to an underlying ethic, which is that the planet has really reached the end of its use-by date, and that we have to start looking either to forms of mining and extraction on asteroids, which is certainly one of the things that many of the space companies' researchers are now currently working on, or to the sort of, I think, mythic idea of living on Mars that Musk is extremely wedded to, without any of the deep science that has been done around the extraordinary difficulty of space travel, let alone living on different planets. The degree to which you'd have to abrogate responsibility for the problems here, that we wouldn't be spending those many billions of dollars on addressing core issues like climate change, labor justice, and the core food security questions that we now face on this planet.

Speaker4:
To me, it's absolutely mind-blowing. And so I really looked into how some of these billionaires have come to this way of thinking, and for me, Blue Origin was one of the most interesting sites to study. This is, of course, Jeff Bezos's space company. And Jeff Bezos has said in many talks that he was inspired by Gerard O'Neill, and Gerard O'Neill's vision of humans living in these sorts of floating space satellite colonies. And that, to me, ultimately comes back to this idea of growth: can we maintain the sort of growth that we've seen over the last century going forward? Personally, I think that is profoundly unethical. I don't think we can. This idea that we must maintain growth has brought us to the precipice that we are now on. But this is the belief and, I think, the great hope behind so many of these space mining and space colonization initiatives. Whereas in so many ways, we have to really question that idea of constant engines of economic growth, because we know who it serves.

Speaker4:
It's serving, by far and away, the few and not the many on the one planet that we currently have. So, you know, to me, it came back to going to those places again. I went and visited Jeff Bezos's space base out in West Texas, where he has a reusable rocket landing site. And that, to me, was again one of those extraordinary landscapes in which to think about where these ideologies are taking us. Because you have this huge Permian Basin, this massive plain in Texas, and I was standing up on a mountain ridge, looking down at the space base, standing on public land. And it's the most ancient landscape, one that is already suffering from extreme drought and so many of the other sorts of climate pressures we're under. And then when I got back in the car after taking photos of this space base, I had this moment of looking behind me and realizing I was being followed by these black security vehicles. And I had done all the research. I knew this was public land. I knew I was standing where I was allowed to stand. But I was clearly being ushered away, and they were actually tailing me in a really quite threatening way.

Speaker4:
And I pulled over at one point and I thought, what's going to happen here? You know, I'll just sit here in this car and wait and see what happens. And they just sat there right behind me, and nobody got out of the car. I didn't get out of the car. And I was like, OK, I am just going to start driving again. And they escorted me all the way out of this huge desert valley. It was extraordinary. And for me, it was another moment of thinking about this relationship between public and private commons, between who these infrastructures are being built for, who's welcome and who is not welcome there. And I think about that a lot with these visions of space: who is going to most benefit from these visions, and what does that mean for the rest of us? So for me, ending the book there was a really important part of the journey. It was not just about going out to the outer reaches of where capital is reaching, which is, of course, outer space, but about thinking through what this vision of the world is, who it will be responsible to, and ultimately how we reground that responsibility in our communities and on this planet.

Speaker2:
One of the things that struck me in this conversation is that we're talking about a lot of myths or stories or narratives, whether it's the Cartesian myth, or whether it's this concept that capitalism can keep growing unbridled, like GDP, and that's how we know we're going to be, quote unquote, successful, or whether we're talking about, you know, our democracies and how A.I. is impacting those. And I guess my question is: what then do we do with those myths? Do we try to replace them with new myths that are more equitable for everyone? Or do we just continue to challenge them? What can we practically do? What can listeners, I guess, do with those myths that may be actively harmful right now?

Speaker4:
Ultimately, I think it's really about asking: why are we building these systems? Who benefits, and who is harmed? And it's about grounding these questions, empirically and geographically, doing the work of actually understanding whether a system is, in fact, concentrating power into fewer hands. If you are right now building technical systems and you are able to say, no, this tool actually challenges existing concentrated forms of power, then I think you are actively working against those sorts of mythic structures. But for me, I think the most optimistic experience of writing this book and speaking to so many researchers and activists and artists and people who are actually working on these issues has really been to see the ways in which these core political movements, movements for climate justice, for labor rights, for data protection, which have always been separate and have always been very siloed, are in some ways being brought together right now by this conversation around artificial intelligence. Because everybody has a stake in it, because these systems are touching so many people's lives, and people can see the downsides and the ways in which their lives are being negatively impacted.

Speaker4:
So to me, that's actually a story of coalition building: how do we actually create new political coalitions to really address these core underlying logics? And I think in some ways we do have to look at the logics even more than the mythic structures behind these systems, because the logics are actually very plain to see. We can look at how they are funded. We can look at who they are serving. We can look at how the systems work. I think really going to that level with a much clearer eye, and looking at that at different scales, is so much of what is needed right now. But it's a really important question, because again, for me, this is one of the big challenges: we have seen this profound centralization of power through the deployment of A.I. at a planetary scale. So how are we going to work against that and actually preserve the sorts of core values and tenets that we want to see in terms of justice, equality, and democracy more generally?

Speaker3:
So now that Atlas of AI is officially out all over the world, for all of us who are going to be running to our local indie bookstores to try to purchase it as quickly as we can: if you were to speak to all of your future readers at once, what would you hope would be one of their biggest takeaways from reading this book?

Speaker4:
Well, I love that you mentioned independent bookstores; that makes me happy immediately. So I think really my hope is that we can shift this conversation about artificial intelligence away from this narrow technical focus to really look at artificial intelligence as deeply interconnected, at this planetary level, between chains of the environment, labor, and data. And once we do that, once we make that scale shift, I think we can really start to ask this question around: what does justice look like in relation to technical systems? What kinds of expectations can we have about the way that these systems should serve us, and not us serving them? And then finally, how do we think about really addressing the kind of profound inequalities, not just in our own communities and our own countries, but the global inequalities that are really fuelling the way that these technical systems are currently working in the world? So for me, you know, that would be the great hope. And certainly, in my experience, and I've spent so much time in archives as well in the last few years, it's really about trying to think about the ways in which artificial intelligence can be different.

Speaker4:
We can design different sorts of systems. And the ones that we have today are really a reflection, I think, in some ways, of the small groups that have been designing technical systems, coming out of, in many cases, computer science and engineering training that was really separate from these broader social, economic, and ecological concerns. We can shift that. We are at a moment where we have to shift that, and it means really breaking the silos down and creating a much more holistic way of looking at the way that these systems work and affect us. And that is, again, from the very beginning, when we first begin to construct these systems, all the way through to once they're actually deployed in the world, going back and seeing how they're affecting people. Do people want these systems? Are these systems actually serving us? So for me, the question is really what kind of world we want to be living in, rather than simply focusing on what sort of technology we want to build.

Speaker3:
So, Kate, I couldn't have had this conversation without at least making a brief mention of the fact that you have played a huge role in my own work and in the work of many women in the responsible tech community, and also men, and Dylan, and other people that I have come in contact with as well. And so we just wanted to send a sincere and huge appreciation and gratitude your way for doing all the work that you have done for our community for so many years, and for now summarizing a lot of that work in this book. So thank you so much for sharing your wisdom and your thoughts with us today, and thank you so much for coming on this show to talk with us.

Speaker2:
It's absolutely true that this podcast would not exist without your scholarship and the foundation that you've laid. So while we have you on the line: thank you so much.

Speaker4:
Oh, that is just so profoundly lovely. Thank you both, Jess and Dylan. And I want to thank you for the work that you've been doing with this show. You've really brought together a community and made it possible for so many people in this field and beyond to really care about the work that's happening from so many leading researchers. And that's thanks to you. And frankly, I couldn't do the work that I do without the other communities that I'm in. Everything is always a reflection of those wider collective conversations and incredible communities of support. So thank you. It means so much, and keep doing this extraordinary work.

Speaker2:
I think it's pretty obvious from the end of that interview, but Kate is such a huge hero of ours and, like we said, such a reason why this podcast exists in the first place. So we would like to thank her for that wonderful conversation. And one of the things that I was really reflecting on after that conversation, before we recorded this outro, is just how that interview tied everything together, how Kate's book tied this first year of the podcast all together: all these different interviews in which we've looked at so many different topics, and this atlas of A.I. that Kate has brought into the world. It just feels like almost a culmination for us in some ways, which is kind of weird to say, but it's just such a landmark of where this field is. And we've been lucky enough to be part of the community telling the stories of this field as it's changed, even within the last year. And so, as I said in the intro of this episode, it's an honor to be conversing with Kate about all these topics, because all these topics have become part of our lives, much like all of our guests have become part of our lives, in some really amazing and unexpected ways. And to be talking to this dream guest of ours was just incredible: to be in conversation with this brilliant, brilliant scholar.

Speaker3:
And what a symbolic way for us to wrap up this first year of the podcast and of this project together: ending on such a high note with somebody that we never could have dreamt of having on the podcast originally. And now we are not only talking with her about her new book, but we're recognizing that all of the topics that she brings up are things that we have been able to speak about with other amazing scholars in this field on this show. And I love that we got into labor in a new way than what we've already seen with people like Veena Dubal and Mary Allegre in Santa Monica. And we talked about bias, and even shouted out Su Lin Blodgett, who we recently had an episode with. And we talked about some new topics as well, like space, which I think we definitely need to do an episode on after having had this conversation with Kate, because now I have so many thoughts about space.

Speaker2:
It's funny, right? Because I think when we were talking about what some of the topics we wanted to cover were, in one of those brainstorming sessions before we launched, it was like: we want to talk about space, but no one's ever going to want to talk about space with us. So who can we talk to who is an expert on this? And lo and behold, Kate Crawford was the one to bring space into our lives. The environmental impacts of artificial intelligence and the economic systems behind them are a topic that we've touched on, but we haven't gone as in-depth as we did in this interview. And it's so important. It's so key for the future of life on Earth, and especially for us to be in touch with that materiality that Kate brings to the table. And, you know, the example that I used earlier in the conversation: I've heard a lot of people, both in academia and industry, talk about artificial intelligence as if it's this thing that exists in the ether, that is disembodied, or as if it's just this myth or just this engineering problem. And to Kate's point: no, this is actually really based in minerals. It's really based in computing power. It's really based in microchips and where those come from, which is from the earth. And then also the downstream impacts, pun intended, because it's about streams. But this is real, and this is stuff that we need to start paying attention to. And if we don't, there will be real consequences for all of us, and also for humanity going into the future. Jess, what stood out to you about this planetary conversation?

Speaker3:
Yeah, I think when we were having the conversation with Kate about the planetary impact, I was actually taken to a space that I haven't thought about in a while. Back when the cloud was a big buzzword that everybody was talking about but nobody really quite understood, myself included, I was one of those people who thought that the cloud was this ethereal, intangible thing that housed all of our digital infrastructure and data, that just existed in a realm beyond what we could actually touch or feel in our physical world. And I learned quickly that that was not the case, and that the cloud is actually just a bunch of server farms that exist very much in the physical world on Earth and have a lot of very negative impacts on our world, including the cost of computing and the energy expense that comes from having to run so many servers at once for these large corporations. But I was just put in this headspace where I was kind of taken aback by how often I assume that A.I. and computing technology and algorithms are just these non-physical, magical things that exist, and I forget so often that they have real-world impacts on our earth and on our natural resources. And every time that hits me again, it always makes me take a step back and pause and reflect on what that means for our discipline and for our planet.

Speaker2:
And as we heard in this conversation, sometimes, maybe most of the time, that's by design. Right? Like, there are people trailing Kate when she goes to investigate certain people's space operations, their space bases, as I might call them. But it's insidious, right? It's like, Jess, I think we were talking a few months ago about dark patterns in online spaces and in websites or devices. And it almost feels that way here, where the stories and the narratives that we're telling, even the most optimistic ones, like we're going to get to Mars: right underneath, there are these other narratives and stories and theories that are actively feeding into either some sort of utopia or dystopia to get people to do something, which sounds conspiracy-theory-ish. But really, after reading Kate's book, some of that's real, like the information that you just shared about the cloud. And if you look at the history of the branding around the cloud, it's like, oh, this is nice, it's a fluffy cloud, or this dark cloud.

Speaker2:
I can see it out my window. This is the power of language, right? It's why the work of Su Lin Blodgett and Emily Bender and Timnit Gebru and other folks in that space is so important: to ask, what are we signifying here, and what are the stories that are coming out of, especially, industry, who, because of our capitalist system, are largely looking to make a profit. And that's the way you do it, using behavioral psychology, et cetera, et cetera. But that gets into another thing that we talked about in this interview, which was around categories and the power of language. And the whole conversation we had about categories is just right up my philosophical alley. But it really matters. It matters how we even think about A.I. and the stories we tell about A.I., because a lot of A.I. is doing categorization and replicating that social categorization that we already do in some very harmful ways. Jess, any thoughts on categories or anything else that was brought up in this interview?

Speaker3:
Yeah, I mean, I wasn't formally trained as a social scientist like you were, Dylan, but more recently I've been having to take some social-science-leaning classes for the first time ever. And I was recently introduced to some topics of classification and categorization and structures as they exist in society. So I've been thinking about this so much lately. And Kate had a quote during the interview where she said that categorization is the epistemic machinery of how A.I. sees the world. I had to write that down, because I just thought that was such a fitting representation of the problems with categorization and classification in A.I. systems, and it was said so well and so distinctly. Because we often talk about bias, and we did this recently with Su Lin Blodgett: we talk about bias and how it is perpetuated in our data sets, and how it exists in these models and these systems and there's no way to avoid it. But we haven't talked all that much about categorization. And I think that there are some specific topics within categorization and A.I. that are commonly brought up, like categorizing gender or categorizing race or even categorizing emotions, I guess, as was brought up in Kate's book. But just the larger construct of categorization, as a topic and as something that A.I. perpetuates, is problematic in itself, as Kate was saying. And the fact that we are teaching A.I. to, quote, see in the same way that humans see, in our categorizing and biased and super subjectively structural way, is scary. It's spooky to me. We come back to this word a lot on our show. It's a spooky concept to think about A.I. thinking about humans in the way that humans think about humans.

Speaker2:
It is spooky. That's my favorite word. For me, one of the reasons why I respect Kate's scholarship so much is that it draws from so many different traditions. Right, like, as you were just quoting, it brings in feminist theory, it brings in other philosophical traditions that we talked about today. And also, Kate is grounded in computer science and knows that training data, knows how to implement that training data, and she brings all of those different knowledge bases and traditions together so well and so beautifully in this book, and in her other work as well. But in this book, I was just blown away by how many different areas Kate covered so well, and also each chapter was more profound than the last. So part of why we did this episode, right, is to tell you to buy this book. So hopefully we have proven our point, and Kate has also pushed you towards buying this book, because this is something that I think all of us who study this stuff need in our library. Right? We should not be studying this stuff or making an impact on this stuff without at least knowing what this book says.

Speaker3:
And as Kate told us before the interview started, on the Zoom call together, she is planning on leading some classes and workshops where each chapter gets a week's worth of lessons. And we thought that was super fitting for this book, because each chapter honestly has years' worth of lessons for this field as a whole, and especially if you're interested in responsible technology, this is an amazing starting point. So really, if you are considering it, check our show notes. We will have links to all of the indie bookstores that you can imagine for finding your copy of the book. And unfortunately, we wish we had more time to talk about all of this, because, like we said, this could be a year's worth of conversations, but we're trying to fit it into one podcast episode. So we'll leave it at that for today. But for more information on today's show, please visit the episode page at radicalai.org.

Speaker2:
If you enjoyed this episode, as always, we invite you to subscribe, rate, and review the show on iTunes or your favorite podcatcher. Catch our new episodes every other week on Wednesdays. Make sure to buy this book, Atlas of A.I. by Kate Crawford, and join our conversation on Twitter at @radicalaipod.

Speaker3:
And as always, stay radical.
