Jeff and Jason sit down with Emily Bender and Alex Hanna, co-authors of The AI Con, to unpack the myths and real-world harms behind today’s AI hype. They discuss why the term “AI” is so often misused, how Big Tech’s models can fail marginalized communities, and why smaller, community-driven AI projects can better serve local needs. The conversation also explores the pitfalls of generative tools, the challenges of democratizing art, and the urgent need for real accountability in the tech industry.
Support the show on Patreon! http://patreon.com/aiinsideshow
Subscribe to the YouTube channel! http://www.youtube.com/@aiinsideshow
Enjoying the AI Inside podcast? Please rate us ⭐⭐⭐⭐⭐ in your podcatcher of choice!
Emily M. Bender is a Professor of Linguistics at the University of Washington, renowned for her work in computational linguistics, language technology, and as co-author of the influential "Stochastic Parrots" paper. Alex Hanna is a sociologist and Director of Research at the Distributed AI Research Institute (DAIR), whose work focuses on how data in computational technologies shapes racial, gender, and class inequalities. Buy their new book "The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want" at http://thecon.ai
Note: Time codes subject to change depending on dynamic ad insertion by the distributor.
CHAPTERS:
00:00 - Podcast begins
01:44 - Introducing Emily Bender and Alex Hanna
02:23 - Why “AI” Is Misleading: Better Terms for Today’s Tech
04:31 - What Is AI Actually Good For? Real-World Use Cases
06:29 - Hidden Data Opportunities in AI and Language Models
07:20 - Community-Driven AI: Real Benefits Beyond Big Tech
09:45 - The Thai Library Thought Experiment: Why AI Lacks Meaning
14:23 - Inside DAIR: Building Alternative AI Futures
19:13 - AI and Creativity: Remixing, Music, and Fair Compensation
24:00 - Does AI Democratize Creativity or Homogenize Voices?
33:47 - Debunking AI Doomerism and TESCREAL: Media’s Role
40:29 - Mystery AI Hype Theater 3000: Podcast Origins and Mission
42:07 - Book Launch Details: The AI Con Release Events
45:07 - Can We Build Effective AI Guardrails?
50:30 - Synthetic Text Extruding Machine... STEM?
50:56 - Oxford Comma y/n?
51:18 - Thank you to Emily Bender and Alex Hanna for joining the AI Inside podcast
Thank you to Emily Bender and Alex Hanna for joining the AI Inside podcast
Contact us with questions and feedback: contact@aiinside.show
Learn more about your ad choices. Visit megaphone.fm/adchoices
Emily Bender, professor of computational linguistics and co-author of the influential Stochastic Parrots paper, as well as Alex Hanna, who's director of research at DAIR, join Jeff and me to discuss why calling everything AI clouds real understanding, how big tech's use of massive language models impacts marginalized communities, and what meaningful accountability looks like in shaping the future of automated systems. That's coming up right after this. This is AI Inside episode 67, recorded Wednesday, April 28, 2025, The AI Con. This episode of AI Inside is made possible by our wonderful patrons at patreon.com/aiinsideshow. If you like what you hear, head on over and support us directly, and thank you for making independent podcasting possible.
Hello. Welcome to AI Inside, the show where we take a look at the AI that is layered throughout so much of the world of technology. I am one of your hosts, Jason Howell. My co host, Jeff Jarvis, is here as well. He's gonna join in a moment for our amazing interview that's coming up.
But first, real quick before we get started, I just wanna give a big thank you to those of you who support us directly on Patreon. That's patreon.com/aiinsideshow. Marie Teixeira. I hope I'm pronouncing that right. Marie, thank you so much for your support for as long as you have.
Alright. Jeff and I chatted with Emily Bender and Alex Hanna on Monday, April 28, 2025. That's when we recorded this interview. And it was super insightful, as you'll soon see, a solid challenge to AI maximalism. So let's not waste any more time.
Let's dive right into the chat right now. Alright. I wanna welcome to the show two amazing guests first, and actually, guests that we've talked about on the show plenty of times before. First, Emily Bender is a professor of computational linguistics at the University of Washington, coauthor of a paper that we've talked about many times on the show, the influential Stochastic Parrots paper. Emily, it's nice to have you on the podcast today.
I really appreciate being here. Thank you for having this conversation with us. Absolutely. And also welcoming Alex Hanna, sociologist, director of research at the Distributed AI Research Institute. Alex, it's a pleasure to have you here.
Thank you. Thanks for having me. It's a pleasure. Yeah. You both are here for a, you know, particular reason.
Emily and Alex coauthored the new book, The AI Con, which apparently every single one of us has. The AI Con. I believe you. How to Fight Big Tech's Hype and Create the Future We Want. It is out May 13 of this year, 2025.
And, fantastic stuff. Excellent book. I wanna start off with something that you lead with in the book because I think it's a good kind of launchpad. It's something that I've often wondered about, which is the fast and loose usage of the term AI. Right?
It's thrown around all the time, and it's used to describe almost everything in technology right now. And I'm kinda curious, if we eliminated that phrase entirely, what would be left? What would the replacement phrase or terminology be that would actually describe what these systems are capable of in a clearer sense? I'm gonna jump on this question as a linguist because, like, words are kind of my bailiwick. Excellent.
It depends on what you're doing. Right? If you want to talk about the group of technologies as a whole, then I think the more effective term is automation. And that's gonna catch a slightly wider net, but that's okay because the stuff that goes under the AI umbrella wasn't coherent anyway. Or if you're talking about something specific, then you say, well, what is it?
Right? Are we talking about image generation? Are we talking about, some sort of automatic decision system? Are we talking about automatic transcription? Are we talking about, the term I like to use for chatbots is conversation simulators.
Like, what is the specific thing that you're talking about? So either own that it's automation and talk about sort of issues around automation or name the specific thing you're working on. The other word I like that you use in the book is content extruder. Yes. Yes.
Not content, actually. Of course. We say yeah. Text. Yeah.
Synthetic text extruding machine. And I sort of shy away from content because content is, like, what it means. Right? But what's coming out of these machines is just the form, and we're the ones who make meaning of it. Yeah.
So AI is so much shorter. It's so much easier to just say AI. Anyway, sorry, Jeff. It feeds the hype so much better. So let me... I'm gonna extend that question in the next one.
But first, I wanna start. I'm so thrilled for this conversation. Alex, the work you do at DAIR is so important right now. Doctor Bender, I quoted you a number of times in my last book, The Web We Weave, and I have quoted you in podcasts endlessly. So we'll come back to that in a minute.
But you do a great job in the book taking apart the hype. And I think that there's two angles here. There's the hype and there's the misuse of these technologies. So I'd like to get down to a base case, if it exists: if you strip away the hype and you strip away the bad uses, using LLMs on search engines and so on and so on, what are the good uses that we should or could be concentrating on? What's left?
So are you interested in good use cases for language modeling or other things that are called AI? Anything that's called AI. Alright. So I'll start with language modeling, but then pass it over to Doctor Hanna, who I think has other areas that she might be more excited about. But, you know, language models are models of the distributions of word forms in text, and they're an important component of things like automatic transcription and machine translation systems, which need to be used carefully.
Right? Because with automation, it's gonna have a certain error rate, and you're gonna be able to tell what that error rate is, and it's gonna vary across contexts. But I have no problem with that kind of automation in general. I'm a little bit concerned about the ways in which it is being done right now with the so-called large language models because they're sort of bringing in everything. And instead of doing a very clean, okay, input to output, it's like input gets encoded into the vector space, and then we just sort of let the language model go, and it can output lots of stuff that's completely unmotivated in the input.
But in general, there are technologies under the AI umbrella, including ones that use language modeling, that I think are fine, and they're about basically translating from one form of language into another form of language. Let me stay on there one second before we move to the rest of the world with Doctor Hanna. What strikes me about the training sets and the use of these large language models is that it creates a concordance of available text, which it strikes me would be useful. And I'm curious, for you as a linguist, what other connections and data might come out, not to create text, not to extrude it, but instead, for these large datasets that now exist, are there other research opportunities that are being ignored? Ah, I see.
So, there's a lot of really interesting things you can do with linguistic corpora, but in order to do any science with that, you have to know what's in the corpus. Mhmm. Right? And so these absolutely enormous undocumented, you know, sort of obscure datasets are actually not scientifically interesting. Got it.
Alex?
Yeah. I mean, with regards to thinking about the technology, we could have a host of visions of what we could do with language modeling. I mean, backing up, in my background, I did some language modeling in my dissertation. I mean, the whole thing was, let's build a classification system that helps us identify articles that mention protests, which is actually super helpful for social movement scholarship.
And, I mean, I think the main gist of it is thinking about what types of automation are gonna be helpful for particular groups or for particular communities. Right? And so fast forwarding to the end of the book, maybe jumping all the way to chapter seven, we talk about a few different organizations, like Te Hiku Media, which has produced machine translation and automatic speech recognition tools for the te reo Māori language. So these things are trained on language that has community control, with certain access privileges.
So there's particular institutions that know what's going into the data and can control how that's being used. Meanwhile, they compared this to OpenAI's Whisper, their automatic speech recognition tool, and, effectively, that thing was not able to do this kind of code switching between Māori and English very well. And we have companies that wanna come in, like OpenAI or Facebook or whoever, and they typically are going after, like, African language startups, and they say, why are you doing what you're doing? You know, we've solved this. And they haven't solved this. They're doing a really piss poor job at doing this.
I also should ask, can we curse on this podcast?
Sure.
Just don't know if this should be PG to R.
I can bleep the really bad ones.
Okay. I won't say really bad ones. We'll keep it family friendly here.
No. It's YouTube friendly is all we care about. YouTube friendly.
Well, with YouTube, you just can't curse in the first thirty seconds. Right?
And so they're doing this just absolutely terrible job on this, and these things are not under community control. There's not an ethical process of collecting those data. They're scraped without, you know, consent from speakers of those languages, etcetera, etcetera, etcetera. Right?
So there are good uses for these types of automation specifically in language and acoustic models in terms of automatic speech recognition.
Emily, besides Stochastic Parrots and others of your work, I've also quoted your Thai library explanation. And you didn't, I don't think, address this in this book. No. I think that's not included in the book.
There were a bunch of places where I was going into the weeds on the linguistic issues, and Alex is like, this is maybe a little bit too much. And a small proportion of those ended up as meaty end notes, and there's some sort of sociology, political economy, meaty end notes too. But some stuff ended up, you know, on the cutting room floor, and yeah. So the Thai library, the National Library of Thailand, in fact, that thought experiment did not make it in.
So I wonder if you could share it with our audience, because we're geeky enough here. It's fine. And because I think it's the thing that I harp on again and again and again, how these models have no sense of meaning whatsoever. And it's such a useful way to explain that to people.
So I wonder if you could... I can summarize it, but you'll do a better job. Yeah. Happy to. And so, actually, maybe the purpose of it is exactly what you're saying. One thing that we know from linguistics is that languages are systems of signs.
So there's always the form part and the meaning part. And the form might be marks on the page. It might be, you know, sounds from a vocal tract. It might be signs with your hands and your face if it's a signed language. And the meaning part is gonna be, like, dictionary definitions of words, but also sort of the social connotations of words, and also the way those meanings get put together.
And the thing about a language model is that it only has access to the form. And the thing about being a competent speaker of a language is that once we've learned the system that maps form to meaning, it's there immediately, and we can't turn that off. And so it is very hard to imagine that a language model that's exposed to enormous amounts of English text isn't getting the meaning. Because when we see that text, we see the meaning. And so I said, okay.
Here's a thought experiment. This only applies if you are not a speaker and reader of Thai. And so I have an alternate location, which is, I forget what the official title is, but there's a similar library in the country of Georgia. And Georgian also has a writing system that's unfamiliar to many people who are not already speakers and readers of Georgian. So if you already speak Thai, imagine I'm talking about Georgia instead of Thailand.
If you speak both of those, I want to meet you. Congratulations also. Incredible polyglot. You're the person. So the story is, you are in the National Library of Thailand, and I have come through before you got there and removed every single book that has anything other than Thai script in it.
So no mathematical equations, no bilingual dictionaries, no picture books. It's just very large amounts of Thai text. There is somebody who you don't get to talk to, but they arrive three times a day with delicious Thai food. So you have sort of unlimited time. You are, you know, well taken care of.
And the question is, could you learn Thai? And how would you do it? Right? And there's various things that people say. They say, well, I would go through and I would look for common subsequences, and then I would be able to guess that those common subsequences are maybe words like the or is or things like that.
Like, whatever's going on in Thai with those. And it's like, okay. Yes. You could start to get some traction on the grammatical function words that way. It's not gonna help you with the content very much. And, also, that's not something that a language model is doing.
Right? Or someone says, I would go through and I would be able to tell that this book is a translation of the Lord of the Rings. And I know that really well, and so I could, like, use that. Okay. Same thing.
You're bringing in something external that the language model wouldn't have access to. And then my favorite answer is, oh, I'd just stick around and enjoy the Thai food. Does this resonate? I mean, do people start to get it, that there is no such thing as a wrong answer because there's no such thing as a right answer? It's a prediction of words is all it is, and I just don't think people get that through the hype.
That's the problem. Yeah. Yeah. I think that when I have a chance to have this conversation with people, it does help some. And, you know, it's also what we're trying to do in the book.
It's like, we don't do the National Library of Thailand, but we do talk about sort of how these synthetic text extruding machines work and why it is that they are so convincing, because that has to do with how we interpret language. Right? You might think that you interpret language by sort of just unpacking the meaning from the text, but it's not that at all. Right? It's this complicated but reflexive process of imagining what the person who chose those words was trying to convey by choosing them.
And that's how we do it. That's how we always do it. So when we encounter synthetic text, we do the same thing, and we are therefore imagining a mind behind the text that isn't there. That's why your field is just so fascinating right now. I was talking to a brilliant student from Syracuse University the other day, and she was saying, I might go to law school.
I might go into linguistics. I said, go to linguistics. Go to linguistics. Great. Yeah.
Also, linguistics is a fantastic pre-law degree. Just... Oh, okay. Yeah. Well, that's true too. Yes.
Alex, I wonder if you could talk for a minute about DAIR. And I wanna come back to TESCREALity and all that in a second, but I think just to set context of what Timnit Gebru and you and the team at DAIR are doing now and how that fits into this world.
Yeah. Absolutely. So DAIR, or the Distributed AI Research Institute, we're a nonprofit research institute that was founded after Dr. Gebru was fired from Google.
It was founded a year after that. And our tagline is kind of like, AI is not inevitable. So really thinking about community uses of technology. We kind of operate with two pillars. The first one is thinking about what it means to kind of fight the worst elements of AI, and I think we end up doing that a lot.
And the second thing we're really trying to do is think about what it means to have possible tech futures in which we're not focused on building these huge language models or fighting against automated decision making systems or doing x y z. And so for the first thing we do, we've had a few projects thinking about and looking at different kinds of harms of AI in particular, harms of, I don't wanna say AI, because that, of course, doesn't have much specificity. But thinking about automated decision making systems, text and media extrusion machines, the ways that we see those harms actually existing in the world. So for instance, I'm writing a report with a former Amazon driver and a former charter school teacher, Adrienne Williams, and we've been working on writing on all the surveillance that's in the Amazon trucks, through their object and person detection cameras in there, through a system called Netradyne, as well as this program that folks have to have on their phones or the scanners, called Mentor. So we're writing a report on that.
We did an interview with several drivers. Are those the brand names they use? They're so dystopian. Netradyne is very... They're very dystopian. So Netradyne, you know, I mean, you could pull it out of the Terminator.
What? Skynet? I mean, it sounds like Skynet. Right? It does.
Yeah. I mean, it's it's it's This is the room, people. Yeah. I know. Is that?
Yes. Right. Exactly. Well, because you have to well, I mean, it's like Palantir and Andrew. You know, these folks are like they're like, let's use all these, you know, Tolkien terms and not have any kind of reflexivity on them.
And so, yeah. I mean... No. No. No. You could write a whole essay about the dystopianness of the naming of, quote, unquote, AI tools. And so these tools are used for worker surveillance, for doing surveillance of drivers. And this is all over different industries, like app based work, gig work, etcetera. So that's one project. Another project we've been doing, this one long running, is the Data Workers Inquiry, which my colleagues Dr. Milagros Miceli and Dr. Adio-Adet Dinika have been working on as coordinators, in which data workers tell their own stories about what it's like working for a contractor of OpenAI or Meta and having to do this, you know, psychologically trying, poorly paid work of content moderation or red teaming or some kind of work in the AI pipeline.
In terms of the possible futures series, I mean, we did an event... not even last month, it was earlier this month. It was focusing on trying to think about what new technological futures could be. And some of the things that we... I was there. It was very... Oh, yes.
Great. Wonderful. And so one of the things, in one of the sessions... there's a piece that Timnit and Asmelash Teka and I wrote on kind of an Internet for our grandmothers. So kind of thinking about language technology that could be used by, you know, speakers of what our grandmothers spoke, whether that's kind of like country Egyptian Arabic or Amharic or Coptic or something of that nature, which really has no kind of access. You know?
Like, it's very hard to use the Internet in those languages, and the interfaces are all keyboards, and there are no keyboards for them. I mean, Asmelash, for instance, had to develop a Ge'ez language keyboard with the Ge'ez script, which is very different from the English or Latin alphabet. And so, you know, we've been thinking a lot about that. We've been putting energy into different African language startups, trying to think about what it would mean to have an alternative to the kind of big tech ecosystem.
Jason, I've been monopolizing. Before I go down the TESCREAL rat hole, if you wanna ask.
That is a rat hole in and of itself. Yeah. You know, I think in reading through the book, a large focus is this, like, hype cycle that we're locked into right now around, you know, what we're putting in air quotes, the AI tools that exist. And something that's really close to my heart, because as you can see, I've got some guitars hanging up behind me. You know, I'm a creative person.
I create a lot of things, you know, music, video content, you name it, and words on the page. But when it comes to creating something like music, I'm often looking for ways, approaches, tools that help me feel like I'm not quite alone and lost in my own thought process. And so, you know, the collaborative approach is something that I highly value in my creative process. And I think a lot of people take a look at these tools that are coming out leveraging AI, as it's loosely defined, and they see these tools as being a replacement for human creativity and not necessarily as a collaborator or a possible collaborative tool for the artistic process. And I'm really curious to hear your perspectives on how these systems might not necessarily need to be used as a replacement for human creativity, but how they might actually have some potential when used in a way that's a little more collaborative.
So I wanna take issue with the word collaborative, actually, because that's one of these anthropomorphizing words that makes it sound like the thing on the other side is very human.
Yeah. That's true.
The, you know, anthropomorphizing this technology also tends to dehumanize people at the same time. And I think also if you if you think about using some sort of a remixing system as a way of getting ideas, let's say, if you're collaborating with anybody, it's the people whose work was used to create that system in the first place.
And you could imagine a way of doing that that actually honors the humanity and creativity and work and effort of those people. You know, there was... I forget their name. There's a person who had a really funny account that was, like, using language modeling, pre ChatGPT, to do things like come up with messages for Valentine's Day candy hearts. Like, feed in a bunch of them and see what comes out. And, like, that was hilarious, and it was done on a sort of known dataset.
And so you could imagine a more local, community controlled way of saying, yeah, I wanna be part of this artistic collective that is contributing to this tool and that we are all benefiting from, for example. But as soon as it is sort of funneled through big tech, and not just funneled, but basically stolen by big tech, I just can't get on board with it. And I would say that, you know, somebody that's been helpful to follow is, I don't know if you've seen this person, Ed Newton-Rex. He used to work at, I think, Stable Diffusion.
Got really fed up with the ways in which... and Stable Diffusion, I think, it was either Stable Diffusion or Midjourney, I think it was Stable Diffusion, was trying to come up with a way to fairly compensate or credit artists, in a way that would actually respect the ownership and give consent and give sufficient credit. Found that there was no way of doing that, especially under the kind of, you know, valuation regime that Stable Diffusion was trying to chase after. I mean, how are they actually gonna try to profit on this? So he ended up founding an organization called Fairly Trained, which certifies a lot of audio models and supports startups in which either models are certified to have consent, compensation, whatever, or datasets that have done that for artists.
And a lot of them are audio in nature. Right? Mhmm. But, I mean, yeah, just like Emily was saying, the element of control: when big tech gets in the mix, they want to try to train the biggest model on everything, or, quote, unquote, everything, without really any kind of respect for what the process of the human is, what the artist is doing. And so, I mean, the thing that was just really flooring to me, and really, to me, gave the game away, is where Sam Altman was pleading with Congress or the courts or whomever would listen and said, listen.
If you don't let us violate copyright, you know, you're gonna lose the war with China. And I'm like, what... how did we... is the math mathing? Like, how is this actually working? And so, you know, it's really indicative.
So let me stay on this for a second before we go down the rabbit hole. I wrote a syllabus for Stony Brook this fall, an AI and creativity, 100 level course. And the spoiler for the students is that the aim of this is to have them examine their own creativity and their own expression and understand the tools' relationship to that. But I wanna push you on one point in the book, which is that there are many people who are intimidated by writing. I'm intimidated by drawing. I can't do it worth a damn.
And, on one level, I think there's an opportunity for people to be able to use these tools to help them do what they want to do. Now, when I talked about this at the executive program I started at CUNY, two of the students, one who ran a site for imprisoned people, one who ran a site for First Nations in Canada, said, whoa, white men. You should not want to homogenize the distinct voices of people according to what the AI comes out with. Stipulated, your honor. I agree.
But I wonder whether those of us who feel fluent in certain media are exclusionary. And so the one thing from the book is Julie Ann Dawson, who you quote on page 105, said that... people who use AI... the problem with people with... oh, I'm sorry. I'll start over. The problem with AI is the people who use AI. They don't respect the written word.
She told 404 Media. These are people who think their ideas are more important than the actual craft of writing. So as a writer, I get that. I'm a writer. And for me, writing is never easy, but it's what I do.
But for those for whom it is not, who want to express an idea, who want to express their lived experience, who want to express themselves, if they find these tools able to do that with them, under their control, is there so much wrong with that? You know, when photography came along, engravers got pissed off, and when iPhones came along, photographers got pissed off, but we all had the ability to take photos and record things in ways that we couldn't before, because we didn't have the equipment and expertise. And so, sorry, it's a very long winded, Joe Scarborough-like question.
So I think the context of that quote is really important. And, you know, who is Julie Ann Dawson? She's the founder and creator of Bards and Sages, which is a publisher of speculative fiction that had to close because they were getting snowed under by people who were throwing AI slop at them, like synthetic text. So in that context, yeah, I think that's a totally fair position that as a publisher, the publisher can say, I don't want any of that. I want stuff that has been crafted by someone who has taken the time to hone the craft of writing.
I think that... you didn't use the word democratize, but oftentimes people will say this is democratizing art, and I'm glad you didn't. And there's a wonderful counterpoint to that that comes from someone named Bertoni on Bluesky. This was published in Portuguese, and then someone named El translated it to English. To democratize art is not every person having a cute drawing made in seconds. To democratize art is every person having time and health to learn and make art if they choose to, and mainly to have the means to think and relate introspectively with art.
And I think that this also holds when we think about writing in other contexts where it's not necessarily artwork, but you are writing for coursework or you're writing emails at work. Right? This is something that we learn how to do, and we learn how to do it by doing it. And anytime that somebody turns to a chatbot because they don't feel ready, two things are going on. One is they are missing out on a chance to make some progress with their own skills, but also it means that something else meant that they didn't have the time or possibly the health to actually engage in that.
I would also add... and, I mean, just to situate the Julie Ann Dawson case, we also talk about the Clarkesworld case, in which Clarkesworld is this sci fi publisher that was effectively having a DDoS, a distributed denial of service attack, on their submissions. Look at it. Yeah. Yeah. I know.
And because it was... and I think it had been a platform that also paid. Right? And, I mean, so we have this kind of case, and we're seeing this in a lot of cases, where there might be some kind of a small payoff if you are able to get something published. And if you can play the scale game, then that lets you cash out. Right?
And we're seeing this in a few different places. There was this report on the California Community College system, which really upsets me, because the California Community College system is an excellent system. I live down the street from a community college where I live. And, effectively, there had been these kind of massive registrations of bots, and then they would drop halfway through, and they were trying to reap the kind of financial aid that came through. And so that's that one case.
But back to the core of your question and thinking about, does it open up doors? And I'm thinking back to this thing and thinking with Emily. I'm like, what are these things that prevent people from engaging in this? Is it, you know, the time and space?
And, surely, it's the time and space, but it's also thinking about what it means to hone a craft. Mhmm. And I think, you know, there are a lot of people in the space that are hostile to craft. I mean, there's Mira Murati, who said famously, I think it was at Vanderbilt. You know?
Like, some of these creative jobs shouldn't have existed to begin with. You know? Or you have people like, I think it was an exec at one of these audio companies, that was like, well, musicians don't, like, like making music. Like, they just wanna have... which is... Oh, boy. Which is wild.
Like, you know, I also have a guitar in my other room, and I'm not good at it, but, like, I like to just tool around. Right? You know? And I played saxophone for many years. And so, you know, like, people like making music.
You know? But I don't think that's who you're talking about, Jeff. Like, I think who you're talking about are, you know, people that want to engage in craft and thinking about that. And I think, you know, for that, if this thing actually opens the doors and it isn't built on the back of, you know, thousands, if not millions, of people who have their data stolen, I mean, that's probably the difference you have between your iPhones and your... Mhmm. Or your photographs and your engravings.
I'm sure many people were pissed about that. But also, you know, there's elements of, kind of, like, fabrication of cameras and whatnot, which I can't really speak to, but could be a consideration as well. But, I mean, is it worth, you know, writing a paragraph and having to use all that energy and doing it on the backs of people who didn't consent to it, whereas you might find a writing coach or you might find somebody that can guide you as a mentor? You know? And that is going to be a lot more intimate and is going to accelerate your development as an artist in a much deeper way.
We had Lev Manovich in for a conversation, from the City University of New York grad center, and he is using AI famously for his art. And what he said was interesting was that he likes it when it does something that he thinks is wrong, and he wants to then interrogate that and figure that out. And it makes him see things in a different way, which is an interesting, counterintuitive way to see this. Well, I might still try to invite you into the class this fall to tell the students what they should be learning. Let's go down the rat hole.
I would also say that there's interesting ways of doing it that are not so dependent on intense energy usage. And, I mean, people have been doing computational art for... Mhmm. Many, many years. Right? And, you know, I was on a panel with Beth Coleman, who teaches at the University of Toronto, in my old department. And, you know, she has a series where she uses a lot of generative adversarial networks, or GANs.
And these things, you know, are much more about remixing existing photography, kind of adjusting hyperparameters and whatnot. It's not eating up, you know, just, like, data centers full of energy. You could do this, like, locally on your machine. Right? And it can be a conversation between yourself and sort of other references.
And then there's a point that Jenlina makes really well in terms of craft, on our fourth episode of our podcast, but we mention her in the book. The idea that art is a community of practice. Right? You might want to have a reference, but it's not going to be free of any kind of community that you're engaging with. You're going to always have references, and engaging in that community of practice is a critical feature of it.
So now for the rat hole. We've had two conversations with Émile Torres, and they're just great in explaining TESCREAL. Though when I try to mention TESCREAL to people, sometimes their eyes do glaze over because it's such a complicated, long thing.
But I'm really glad you attack this, because in the book you attack hype, and doomerism is a form of hype, as you so well explain; it's an effort to show power and macho and get money. How do we cut through this? I'm very frustrated that in journalistic coverage of OpenAI and other companies, and Elon Musk and Peter Thiel and all the characters we know who follow these, I think, faux philosophies and the dangers that lie within, reporters are just too lazy or too confused to understand what's going on behind it, to understand the context here, that the word safety has been ruined, because they have their definition of safety, way future tense, a million years out, versus present tense, for which, again, Stochastic Parrots is a great, I think, core paper to call on. I mean, you wrote the book and this is really helpful, but what's the strategy? What's the strategy? Yeah. For trying to cut through and undercut TESCREAL?
Yeah. So I think some of it is, first of all, just like follow the money. So you say, you know, there's some reporters who just aren't somehow doing the job of actually challenging this and situating it. There are others who are literally paid by the money. So Vox has this sub thing called Future Perfect, which as a linguist, I'm very upset at the repurposing of that, which is like the name of a tense.
Yes. And that is effective altruism money, and that's the EA and the TESCREAL bundle of ideologies. So some of the journalism is actually coming from inside the house. Right? Like, it is... Yes.
It is right there. I think another thing is the way in which we see so much tech journalism platforming paper-shaped objects as if they were research. Like, I would love to see journalists drawing a really hard distinction between peer reviewed research and what's effectively company blog posts. And so much of this is coming out as... and it just completely falls apart as research. And we have a lot of fun with that on the podcast.
So the other strategy, that I really want Alex to elaborate on, is what we call ridicule as praxis. Yeah. Just make fun of it. I mean, it's bullshit. And I think that happened recently with Roko's basilisk. I was next to my partner in bed last night, and she was reading Malcolm Harris's Palo Alto.
And I turned, and I saw the page, and there was the rant in there from Eliza... is it Eliza or Eliezer? Eliezer Yudkowsky, the one where he brings up the response to Roko's basilisk, which is this thought experiment, which is like, oh, you know, if you think about this, there's going to be some future robot overlord. And if you are not subservient to it, it's going to torture you after the singularity for all eternity unless you declare. And so the fact that you have mentioned this and it is now in some corpus, you have now given Roko... or, I guess, the basilisk... it's like a Frankenstein situation. Right?
I said, so you're now giving the basilisk this idea. And so people online were panning this. They're like, oh, tech bros reinvented Pascal's wager, or, you know, they're sad about some future monster daddy, like, reducing it just to how inane this is. Right? And so I think that ridicule, you know, is very good.
You know, comparing the suffering of the seven or eight billion people now to the 10 to the 58th future people. I looked this up, Emily. You're right. It was 58.
I thought it was the forties. I looked up the number. It's a made up number. Right? Is it ever.
These people, you know, the future people that are somehow living above the stars and doing space colonization, I mean, it is very absurd on the face of it. I should also say that there was some great reporting in Semafor yesterday. I don't know if y'all saw this, but it was about the kind of, like, haunted group chat that Marc Andreessen had started. And it was kind of wild, because it has within the group chat just, like, the worst people you can think of in Silicon Valley. So it has Joe Lonsdale, cofounder of Palantir, as well as Balaji Srinivasan.
Srinivasan, who's the person who's developed this whole ideology of, like, we're gonna have all these network nation states that are just, you know, built on artificial islands. And you're like, oh... not only, like, of course, were these people in some kind of, you know, terrible alt right network, but it's just like, oh, no, they did literally have a group chat in which they're sharing the worst ideas. Right?
And so I think that's helpful. I mean, that was really helpful reporting insofar as you're like, okay, these folks are converging not because they're necessarily all Trumpists, you know, because a lot of them weren't, at least not to begin with, and now they've kind of all fallen in line, because, you know, they need to if they hope to have their industries protected and not be pursued by similar types of antitrust. Although that hasn't really saved Mark Zuckerberg at the moment. But, you know, there is this kind of hegemony in this kind of, like, groupthink, far crypto right, that is... I mean, if it is not straight up TESCREAL, it is very adjacent.
It's knocking on the door. Well, because it's accelerationist rather than doomerist. Yeah. Yes. But it's there together.
Talk about your podcast for a minute. Plug your podcast. Yes, please. So Mystery AI Hype Theater 3000, I like to call it an accidental podcast. It just started off as a Twitch stream, and we were just gonna do it as sort of a one-off takedown of this terrible blog post, and that ended up taking three episodes.
And then at that point, we were kind of on a roll, so we kinda kept going. And then eventually, we decided it should be a podcast. And basically, we look at AI hype across many different domains. Sometimes it's just the two of us. Sometimes we bring in external experts, ones with the particular expertise that we want.
And, you know, we look straight on at some of the worst stuff and laugh at it. Yeah. Really taking our... what is it? What is the story? Meg Mitchell offered this as a model: Mystery Science Theater 3000.
And Emily hadn't seen the show, but I'm a big fan. And I'm like, yes. Absolutely. And the original logo for the show, I think we looked at, it was like a very credulous piece of New York Times reporting, and then the shadows of Joel Hodgson and Servo and... what's the other one named? I don't know.
Crow. Pointing at the screen going, ah, look at that. Ridiculous. But we decided that maybe we couldn't get away with that, and so we commissioned a fresh logo. Yeah. When you're defending copyright to go after copyright, it's a rough... Yeah.
It's a little misleading. Yeah. Yeah. Yeah. Yeah.
I'm not sure that was fair use, so we actually, you know, have a great logo by Hayward Pleasure Park, who's done all the great assets for the show. Yeah. Jason, you wanted to... Love it. I love it.
Well, I wanted to give you an opportunity to talk about your launch event, because I know that's right around the corner. At least as we record this, which is April 28. So we're recording this a little bit in advance of when it's releasing. But tell us a little bit... you know, we've got the book release coming up. Tell us a little bit about the launch event.
Yeah. For sure. So we are doing a launch event, a virtual launch event with DAIR, and it will be at 2PM on May 8. And if you want to see all the details about it, you can go to thecon.ai. We have a wonderful interlocutor in Vauhini Vara, who recently came out with a book, and Emily and I have both participated in events with her. And her book is named Searches.
And then her first book, her debut novel, was a Pulitzer Prize finalist. And so Vauhini is a wonderful interlocutor. Really excited to have that conversation with her. And that will be live on our Twitch channel, but it's helpful if folks preregister for the event. Right? Yeah.
So thecon.ai has actually all of our events, and it's Alex's stroke of brilliance that we got that URL. No kidding. So the book launch event is on May 8. There's a second virtual event on May 12, and it's 2PM... that is, 2PM Pacific time, for those of us who don't have the luxury of living on the West Coast and aren't up early all the time. So 2PM, May 8th.
There's something on May 12. And then we start doing some in person events in a few places, and all the details are on that page, thecon.ai. Excellent.
So at the end of the book, you go through a list of helpful to-dos. Everyday resistance, information libraries, meaningful further regulation and transparency and disclosure.
I'm leafing through what I read last night. Building socially situated technology, strategic refusal, like earlier. The grimy residue of the AI bubble, which I quite like: what's gonna happen to this economy. As we speak, NVIDIA's going down because Huawei is doing a competitive chip. So those are your very helpful suggestions.
I wanna ask you about the one that we keep hearing in this discussion, and full disclosure, I'd just like you to know that I'm dubious about this myself, which is this idea of building in guardrails. The idea that you can make these systems align with human goals. Again, when they have no sense of meaning, how do they have any sense of ethics? Seems to be obvious. And then, I wrote a book called The Gutenberg Parenthesis, because I'll plug mine too, and I think back to the printing press as a general machine.
Anybody could make it do anything. And you couldn't tell Gutenberg, well, you could do this, but just make sure you keep it away from Luther, right? Just in the future. So, there seems to be... on the one hand, you have the doomer side.
You also have a false comfort in the idea that these alleged, air quote, safety people at the various companies can build in guardrails and alignment. And that drives me bananas when I hear it. So I'd love to get your arguments against that, because I know you have them. And I seem to fail at trying to get people to latch on to the futility of that. So I've revealed my views clearly.
Alright. So you'll notice that in our chapter seven, where we have suggestions of what to do, guardrails aren't among them. Right? We do talk about manipulation. Right.
Yeah. And there is this thing, and we get into this earlier in the book, about the notion of a general purpose technology. And I think about the difference from something like the printing press... or some of these people will say electricity is a general purpose technology, but electricity is a natural phenomenon. The technologies of electricity are things like wires and transistors and switches. Right?
And each of those things has a specific function that it does that you can use in many different ways. You can't talk about the specific function of AI because it's not one thing. But if we drill down into the synthetic text extruding machines, the specific function that they do is continually provide a response to, what's a likely next word. Right? So... Mhmm.
What's that good for? And you're absolutely right that the idea that we can somehow align its outputs so that the likely next words are likely next words that are good in some generally accepted sense makes no sense at all. Right? It's basically a category error. I think where we do need guardrails is on the activities of corporations.
Right? That's laws. That's regulation. Mhmm. And anytime someone is talking about alignment or misalignment, they're basically displacing accountability from the companies that are making decisions usually based on the profit motive to this thing that is not the sort of thing that can actually take accountability for anything.
Well, is it also true that... if they say yes... you know, you have Anthropic saying we're the safe one. Sure. And you have, I forgot who it was who left OpenAI and is starting... Ilya's. Yeah.
Right. Safe Superintelligence. Right? Yeah. Which is a whole bunch of... high people.
I would, too, like $2 billion to not build God.
Yeah. Yeah. Well said. But the companies can't... you say we hold them accountable, but we also have to be sensible about what we could hold them accountable to and recognize what they cannot do, because they act as if they are all powerful, and they are not.
So part of the problem... you know, there's so many paradoxes in this world where... So here we've gotta reveal their lack of power. Right. Exactly. So companies can be accountable for data practices. They can be accountable for labor practices, and they could be made accountable for every single output that comes from their machines.
Mhmm. If it extrudes something that someone takes as bad medical advice, well, then they're medically liable for that. Right? If it extrudes something that ends up being libel, well, then, you know, we could get them, like... But I'm saying they couldn't do that. There's no way you could anticipate every possible use.
Right? We could make them accountable for it. So whatever the machine spits out, the company that set up the machine is accountable for it. That's the kind of thing that we could do with the right political will. Mhmm.
And I would also say that, I mean, there's so many problems with alignment. The idea behind alignment, the kind of pipe dream here, is that, well, there's a way in which these things have a certain set of autonomy, and that autonomy in these inevitable machines means that they have to be kind of good independent actors in the world, which... I'm like, well, that's already granting too much credulousness about where these things are going to go. Right? As if these are not programs running on servers. And I think the vision is that someday they're going to be self replicating or x y z. And that's just... no. This is software.
You know, we don't think this about calculators, and we don't think this about image generators or whatnot. I mean, this is software, and people ought to be accountable for when their software goes wrong or does things. And, I mean, I think that was a big part of an effort that DAIR was involved in, in trying to get into the EU AI Act some kind of accountability mechanism for the providers, the people who were creators of LLMs. Basically, like, you cannot say that it's just the facial recognition or the things in hiring which are high risk. These things are not, quote, unquote, foundational, which is terminology that comes directly from Stanford.
I mean, no one was using that term before Stanford HAI was saying it. They are not foundational in any kind of way, that they undergird other technology. These things will produce harmful output, and these companies need to take responsibility for it. Okay. We're taking a lot of time, but I've got one more... two more quick questions.
One: synthetic text extruding machine. Was that your silent little slap at STEM?
No. No. No.
Not at all. Okay. That's actually very funny. I didn't even realize that until you said it. When I heard Emily say it, I thought, oh, it's STEM.
Okay. No. And, you know, linguistics sits sort of uncomfortably at the intersection of humanities, social sciences, and natural sciences. So I'm not gonna try to... You're taking trouble with your cousins. Alright.
Alright. I've got a major national linguist with us right now, so I've gotta ask one last question, Doctor Bender. Oxford comma, yes?
So linguists tend to be descriptive rather than prescriptive. So Oxford comma, yes. There's variation on that point.
Okay. That's great. Love it. I love that.
And we got it just in time. Emily Bender, Alex Hanna, thank you both so much for hanging out with us today and talking about, of course, your new book, The AI Con. Everybody should check it out here in a few weeks when it releases, May 13.
Alex, you gotta get in the habit of having it everywhere with you. Always at your side.
Oh, gosh. My book bag is gonna get that much heavier. Yeah. I am still just, like, utterly besotted with the cover, and it's still such a thrill to, like, have the physical thing in our hands. So... Yeah.
The tangible quality. It feels nice even. Very nice. It's really wonderful. Thank you for the chance to help show it off some more.
Absolutely. My pleasure. Yeah. It's absolutely our pleasure, and we'll have you back again soon. Appreciate your time.
Pleasure.
Thanks again to our guests, Emily Bender and Alex Hanna. Of course, you can visit thecon.ai for more information about their new book and everything related to that. Also, huge thank you to Jeff Jarvis. As always, jeffjarvis.com.
You can find all his books there. If you enjoy interviews like these, we're gonna be doing more of these, and they're separate from the news episodes that happen every Wednesday. So, you know, leave a review. Let us know what you think. Apple Podcasts allows for reviews, or you can just give us a star rating on whatever your pod catcher is.
It really does help. Everything you need to know about the show can be found at aiinside.show. And then finally, as a reminder, we've got our Patreon that people support us, and we really appreciate it. Patreon.com/aiinsideshow. You get ad free episodes, a discord community.
You can get an AI Inside t-shirt by becoming an Executive Producer of which we've got a lot. DrDew, Jeffrey Marraccini, WPVM 103.7 in Asheville, North Carolina, Dante St James, Bono De Rick, Jason Neifer, and Jason Brady. That's seven of you. It's amazing.
Let's try and add an eighth. So thank you for your support. It enables this show to continue. We appreciate all of you though. So thank you, and we'll see you next time on another episode of AI Inside.
Bye, everybody.