This week, Jason Howell and Jeff Jarvis welcome Dr. Nirit Weiss-Blatt, author of "The Techlash and Tech Crisis Communication" and the AI Panic newsletter, to discuss how some AI safety organizations use questionable surveys, messaging, and media influence to spread fears about artificial intelligence risks, backed by hundreds of millions in funding from groups with ties to Effective Altruism.
INTERVIEW
Introduction to guest Dr. Nirit Weiss-Blatt
Research on how media coverage of tech shifted from optimistic to pessimistic
AI doom predictions in media
AI researchers predicting human extinction
Criticism of annual AI Impacts survey
Role of organizations like MIRI, FLI, Open Philanthropy in funding AI safety research
Using fear to justify regulations
Need for balanced AI coverage
Potential for backlash against AI safety groups
With influence comes responsibility and scrutiny
The challenge of responsible AI exploration
Need to take concerns seriously and explore responsibly
NEWS BITES
Meta's AI teams are filled with female leadership
Meta embraces labeling of GenAI content
Google's AI Test Kitchen and image effects generator
This is AI Inside Episode 3, recorded Wednesday, February 7th, 2024. Follow the funding of AI DOOM. This episode of AI Inside is made possible by our wonderful patrons at patreon.com/aiinsideshow. If you like what you hear, hop on over and support us directly. And thank you for making independent podcasting possible.
Hello, everybody, and welcome to Episode 3 of AI Inside, a weekly show where we take a look at some of the cascading things happening in the world of artificial intelligence. It's me, Jason Howell, joined as always, week after week, by Jeff Jarvis. Good to see you, Jeff. Good to see you.
Good to see you too. Episode 3, that must mean that we were like on a roll. That must mean that we have a process down. And we know what we're doing if we're three episodes down.
Jason, you are a Mr. Process, as I've learned. I try. Oh, boy, sometimes.
You know what I realize about myself? I can, I can follow a process, but creating a process from scratch is really challenging. I don't do process at all. OK. Well, I got you covered, then. Real quick, before we get into the meat of this episode, and we have a great episode coming up for you, I just want to say thank you to everybody who has given us reviews. We do have some reviews popping up in Apple Podcasts. It really helps us out.
So, if you haven't done that yet and you're liking the show, please head over to Apple Podcasts and post a review. We would appreciate it. And then, of course, if you want to support us directly, of course, you can. We would not stop you. In fact, we encourage you to do that. If you like, Patreon.com/aiinsideshow is how you can support this show directly, and you get some perks as well when you do that. So, thank you so much for considering.
And with that out of the way, I'm going to throw it over to you, Jeff, because you're going to do the introduction today. So, why don't we bring in Nirit? So, Nirit Weiss-Blatt is a wonderful scholar and researcher who I came across first when she was working on trying to understand when the attitude of media toward the internet changed, because we all knew it changed, and she was just trying to figure out when. And she did some wonderful research and came up with a book called Techlash. If you look at it, the hardcover is very expensive because it's academic, but the paperback and audiobook are more reasonable.
And I recommend it highly. And Nirit found that there was a moment when media coverage shifted from optimistic to pessimistic or utopian to dystopian. And that was, if I may quote, the election of Donald Trump. And so, I've quoted her more than once in books and quote her often and dine out on her findings constantly. And so, she does wonderful work on media and technology. And lately, she's become very interested, and I'm glad for this in the crazy faux philosophies that drive some of the boys of AI.
And so, I thought it'd be great to have her on this show to talk about that. And so, Nirit, why don't you talk about how you shifted your attention, not just to AI, but to the crazy philosophies, if I may be so bold as to call them that, that drive some of the AI boys. What drew your attention to that?
Yes. So, my research is always about the coverage of emerging technologies. So, with the Techlash, it was mainly social media. And the backlash was about the big tech companies. And following the framing of the news about technology, I was amazed by the AI panic that we've been seeing starting in 2023. It came gradually, but then suddenly, AI panic was everywhere.
Right? We had the open letters, and we had Eliezer Yudkowsky saying in Time Magazine that we need to bomb data centers. We had these really, really batshit crazy pieces about human extinction. And looking at how it went from this fringe thing into mainstream media, which is what I analyzed, I was just fascinated: how did we get here? And this is why I dived into the ideologies, the funding, and everything behind it. And so, you found that people were claiming doom and gloom with AI.
And tried to dig into that. What did you find in terms of how that message was spread? Because you're also, I should say, Nirit is a scholar, not only of media, but also particularly of public relations and thus messaging. How did you find that message was spreading?
OK, that was fascinating because I'm looking at the framing and the themes. And people started to talk about AI everywhere as this God-like AI, superintelligent AI, out-of-control AI, Skynet-like AI, all those terms, and all the science fiction stuff. Yeah, and not just AI systems, you know. And then I came across some interesting studies that we can talk about. It's in my AI Panic Campaign two-part exposé, where you have companies, AI safety organizations, and what they do is message testing about what are the best phrases to freak the hell out of you. Like, what can work best to create this fear-mongering around AI in the media and policymaking? And they were really trying to perfect the existential risk messages to target audiences. So, they actually checked it by age group and gender, education, Republicans versus Democrats.
It's not out of the blue. They're actually checking the quality of the messages, and the aim is twofold. One, how concerned you are, whether it succeeds in raising your worries.
And second, how much are you convinced that AI needs strict regulation, that we need to pause AI or stop AI? I suppose we should start there. So, when Jason and I were workshopping this program on the TWiT network, we had on Émile Torres, who's a great scholar of all this stuff.
They and Timnit Gebru have done a lot of work and coined the acronym TESCREAL. And I always forget what it all stands for. Transhumanism, extropianism, something, effective altruism, and longtermism. Yeah, so all of those. Singularitarianism. That's the one I forgot. I always forget about one of the seven dwarfs. Yeah. So, so they've, they've chronicled that.
And what you've found. And so, that's the underpinning of this belief by some that we owe the future more than we owe today; that if we screw up and destroy today, we've only lost what we lost today, but tomorrow is what matters. And that's what concerns them. And they think that they should have the money and the power to determine what that tomorrow looks like, as technologists and as smart people and as rich people. But the doomerism is so confusing to people because oftentimes it's the AI people themselves who are saying, we are so powerful, we could destroy the world.
And you'd wonder why they're doing that. And as you said once, and I quoted this in my book, the fear is the marketing, or something like that, and they're trying to show how powerful they are. And some of them, like Yudkowsky, have switched and believe, oh my God, we'll destroy the world, but they still believe in the technology and want to do it when we're ready. And so it's a variance on this theme. So, you found early on, you dug into something which I loved, which was a survey of AI scientists that, in the headlines, said that they believe it could destroy us imminently.
Why don't you talk about what that said and how you dug into that? Yeah, so it's the AI Impacts survey. It's an annual one.
It became a ritual. Every year, we have the same headline about five percent of scientists believing in human extinction. And last year, Professor Melanie Mitchell, whom I admire, and I wrote a very detailed criticism of this survey because it has a lot of issues.
And I can walk you through those issues. Mainly, it's the question phrasing, and then the survey itself, the organization behind it, the funding behind it, and all of this that's actually missing. Because the headline, as you said, Jeff, is human extinction. But when you look at the actual question, and I can read it for the listeners, it's more vague than that.
So, the question that we're talking about is: what probability do you put on future AI advances causing human extinction or similarly permanent or severe disempowerment of the human species? OK, so first question. What?
Second, what does it even mean, the disempowerment of the human species? And each person that you ask will have a different scenario there for disempowerment of humans. But OK, so the second question adds another layer, which is that you need again to speculate about the probability, but now on human inability to control those AI systems and thus causing human extinction or disempowerment of the human species. Now, in our criticism last year, Melanie and I asked, among other things: when is this futuristic scenario going to happen?
Again, it changes the answer, right? Is it going to be in 50 years, 200 years, 1,000 years, a million years? We want to know. Yeah. Next week? Maybe, if you ask Eliezer Yudkowsky. And they added a third question. Again, same phrasing.
AI systems causing extinction or disempowerment of humans within the next 100 years. So at least, you know, they listened to the criticism and added that. But the same inherent problems are there because the phrasing is still, as I said, very vague. What does it even mean? Can you tell me what it means? Well, yeah, that's kind of what my question is: the disempowerment of humans.
Like, there is so much wiggle room within a statement like that. Disempowerment to what degree, how? What is the gravity of this disempowerment? You know, the direct or indirect harm.
And there's a million different ways you could go about that. And was it their choice to be ambiguous about this, you think? Or are they just trying to, I don't know, capture as much in the net as possible? I don't know what went through their minds, but they roll out the same thing every year, and it's successful.
So, why stop? And it keeps being criticized for this reason. Every year, every year we have the same headline. Yep. Right. Human extinction. You won't see any headline saying disempowerment of the human species.
No, of course not. It’s not sexy enough for a headline. Right. Not when you're competing with extinction. So, the headlines are extinction. Yeah.
While the actual question behind it is this long, vague thing. So, for them, it's very successful. I understand why they keep doing it, and they keep getting funds to do it, because it's successful. So, my larger point here is that when you quote AI Impacts, you need to say who they are. And let me tell you who they are. So Katja Grace, who co-founded this place, came from, Jeff Jarvis, it will be familiar to you, Nick Bostrom's Future of Humanity Institute. And then she moved to MIRI, the Machine Intelligence Research Institute at Berkeley, being next to Eliezer Yudkowsky, whom we mentioned. And she and four other guys are the entire organization of AI Impacts. And they got, from effective altruism organizations, $2.5 million for this annual survey. Now, it's not me saying that; it's on their website. You Google AI Impacts, go to the website, and it actually says AI Impacts is based at the Machine Intelligence Research Institute in Berkeley, California, and currently has three regular staff.
I counted five, but whatever. And their funding is clearly out there. You can see it; it's publicly available data. It's Effective Altruism Funds. It's the Center for Effective Altruism. It's Open Philanthropy. It's Jaan Tallinn's Survival and Flourishing Fund, the Future of Life Institute.
And before its bankruptcy, Sam Bankman-Fried's FTX Future Fund was also really enthusiastic about their survey and wanted to fund them, but, you know, jail (with other people's money) and stuff. Yeah. So, it's all out there. And if you are a tech reporter and you write, these are the results of the survey, you can add a sentence or two.
Who made this survey? Who phrased it so badly, analyzed it, and pushed and created this misrepresentation in the media? There's something behind it. Right? Mention it, please. I mean, I think the readers will appreciate it.
So, let's talk about how it was represented in the media, because this is what was most striking to me, and how you dug into this, you and Melanie both. The headlines said, as I recall, that 50 percent of AI researchers believe there is at least a 10 percent chance that humans will become extinct because of their inability to control AI. Right. That was the top-line headline, if I'm correct, that we saw all over: half of AI researchers say 10 percent chance of doom. But when you dug into the numbers, and I've got them in front of me if you don't have them in your head, how many people did the poll go to? How many answered? Can you break that down for us?
Do you remember the numbers? Last year, it was 150. Out of how many it was sent to? The one that I have, it was sent to 4,000 people. Only 700 responded.
That was like two years ago, or the first one, perhaps. And then, of those who did respond, only a fifth of them answered the extinction question. And then only half of them put the probability of extinction at merely 10 percent. OK, so this year, they added more AI conferences, so they have like 2,000 respondents out of 3,500, I think. Not specifically on this question.
This question was answered by like 600 people. I can look up the numbers. But the point is also that when you say AI scientists, the leading AI researchers, believe X, the people who answer those surveys can be undergrads and grad students who sent their papers to this conference to promote their BA or MA or whatever they're doing, not the leading AI researchers of the world. But again, that's another example of something that is misrepresented. Right. So, there was.
So, to put this together before we go to the next stage here: there are people who have a self-interest. Nick Bostrom is from Oxford. He's the kind of philosopher king of TESCREAL. And these other folks who are very involved in this, they get a lot of money. They do this survey. They want to show how powerful they are and how power should be invested in them because they're the smart ones. So, they want to show that, oh, this could be doom. So, watch out.
If you're not nice to us, we could destroy the earth, but we won't. You know, the funny thing is, Jeff, that even in their own results, they're such a small minority. And I put a screenshot on Twitter. They have this table where they ask, overall, is the future of AI systems going to be good or bad? And the five options are: extremely good, on balance good, neutral, on balance bad, or extremely bad. Now, extremely bad doesn't have to be extinction, right?
But extremely bad, that's five percent. OK, great. And now look at the others. So, on balance good, like 30. OK, and if you add everything up, the positive numbers are much higher than the negative. The headline should be: more than 95 percent are not doomers. Right, right, right. This is my read of this thing, because even the five percent who say extremely bad are not all doomers, right? Not all of them, I think, mean extinction.
Just things that can go terribly bad. And yet, as I said, they're celebrating. It's like, oh, a lot of people are doomers like us. Yeah. Yeah. Well, it fits in with your Techlash research, which shows there is an ongoing desire to present technology as dangerous and bad. And so, taking the smallest slice here and making that the headline fits that narrative in media and fits the PR strategy of the TESCREAL folks. Yeah, it works because people don't remember that it's five percent. They just remember the headlines about how many are worried about doom. At the end of the day, it's "so you're saying there's a chance." You know, I don't know if you remember that reference, but really, at the end of the day, it's so far overblown, expanded into this major kind of statement about the state of AI and the impact it might possibly have.
But the numbers, yeah, that's a really interesting graph to take a look at. And I think the question that comes up for me, maybe it has to do with the state of journalism and where we're at with reporting on these things. Like, I can understand that there are people who, whether they want to admit it or not, fall into the doomer category, the TESCREAL, whatever you want to call it. And then there are the people who are reporting on this. Is it that the people reporting on this are so afraid of the potential bad things that could happen that that's what gets highlighted? Or is there something else? Or is it just that they know that reporting on this gets more people to read? I guess I'm trying to figure it out, because I feel like it's the responsibility of responsible journalism to point these things out. And yet, that doesn't happen.
Why is that? Well, there's a saying in journalism: if it bleeds, it leads. Yeah. So again, you only see headlines about extinction because it's worrisome. Jeff might say clickbait. You know, people want to read those doom things; it's like your instinct. As a person, you want to read about it.
It's interesting. And so, it drives more readership, but uninformed readership, compared to, let's say, a more nuanced or different headline, as we suggested. But OK, let's say we get past the headline. Headlines are a different topic. Inside the articles, in the first paragraph, the second, somewhere, put the background, put the context.
Just mention, when you say, I have three organizations here, AI Impacts, Epoch, and the Forecasting Research Institute, that was Time Magazine, and we asked all three organizations about when AI might outsmart us, just perhaps mention that those three organizations are existential risk organizations funded by Open Philanthropy. It's, again, publicly available data. Say, oh, the trio got 16 million to promote existential risk ideology, and then put the rest of the thousand words about what they have to say. But just give the context. That's the failure here. And call you, or Margaret Mitchell, or Melanie, or Timnit Gebru, or any of these folks to get some perspective on this. Jason, I also, in my next book, The Web We Weave, coming out this fall, in which I quoted Nirit more than once.
I posit, and I'm not a conspiracy theorist, but I do think that news people have a perspective that they also don't disclose. They have come to see the internet as competition. They think that it stole their audience and their attention and their advertisers and their money, none of which I buy, because it's just simply competition in the world.
But they are resentful now. And the story they want to tell is this internet thing is bad. How many times do you see stories in newspapers these days about how you can turn off your iPhone or how the reporters spend a week without an iPhone and how wonderful that is when, in fact, a third of people are giving up news instead. So, there is a conflict of interest here, I think, in the media narrative that is willfully painting the internet as nothing but bad. And there's bad on it. Sure, it's a human enterprise.
There's good and bad. But I think that's where it goes. And now we're seeing this whole thing again. What's fascinating to me is that I watched Nirit report on that when it came to social media, as she said earlier. But the story is repeating itself, amplified, now. Would you say that is the case, Nirit?
Yeah, a lot of people pivoted. Think of Tristan Harris. Yes. So, they pivoted from social media is going to ruin society to AI is going to ruin society. Same thing, same issues, different technology. So, talk about the system behind the scenes now, how it's built. You're a scholar of PR. You understand how PR operates.
They seem to be doing a pretty good job of using this money. What does that behind-the-scenes structure look like in getting this message across? Yes, I recently called it the well-oiled x-risk machine. And what's in that machine is a lot, a lot of money, which I showed in the AI Panic newsletter. So there's this whole ecosystem of what they call themselves, AI safety organizations. They focus on existential risk, human extinction from AI.
And they got, over the years, half a billion dollars. And some of them are focused on AI policy and regulation. Some of them are about the communication side. Some of them do applied research and what they call AI alignment, aligning a future AI system with human interests, whatever those may be.
And it's this whole ecosystem. And interestingly, as I said, they're shaping the messaging to what works best. So, they take, let's say, what Stuart Russell, Eliezer Yudkowsky, and all the others wrote in the media, they analyze it and do surveys and see if it works, as I said, in two ways: making you more concerned and making you more willing to support strict regulation. And it works. I would say it works perfectly because, through the media and through lobbying, a lot of lobbying, it goes into politicians' talking points. So, this is where you see the end result of this machine, because then you can see the EU talking about AI being as risky as pandemics and nuclear weapons, which is an exact copy-paste of the Center for AI Safety's open letter about it.
Exact same words. And the UK, and the AI Act that we saw, and regulation here, even in the US, are increasingly talking about frontier models and existential risk and putting in the very, very strict things that those organizations advocate for. So, you can see in the reports that they send to places like the EU: they need to first create the fear, which then justifies what they're asking for from regulators, from policymakers. Please surveil software, please limit GPUs, please criminalize development of AI systems. So, it doesn't come out of the blue.
It needs this ground of extreme fear of extinction for the policymakers to be receptive to it in the end. So, you can see this chain starting with the messages that we analyze, and then the lobbying and actual meetings with politicians. Tell us about some of these organizations. You've mentioned them before, but one you've focused on, I think, is Open Philanthropy.
What do you know about them? By far, the biggest one. So, they are the biggest funders of this whole ecosystem of AI existential risk.
They call it existential safety, again. And they poured more than three hundred and thirty million dollars into it. And this is where it gets tricky, because when I'm looking at datasets like openbook.fyi, and there's this Vipul Naik donations list website, the tag of AI safety can show you the amount of money those organizations are getting, like the Future of Life Institute, the Center for AI Safety, and all these guys. But some of the donations go to, let's say, the Future of Life Institute with the tagline of longtermism or something else. But it's still money that pours into those organizations. So, if you add it all up, the five hundred million is the modest estimate here.
OK, it's much bigger than that, but it's spread across different taglines and things like that. And someone, I wrote the name down here, yes, Brendan McCord, published a higher estimate of $883 million from Open Philanthropy together with Effective Ventures. There are the effective altruism funds and ventures where they also invest a lot. And you have also, as I said, Jaan Tallinn's Survival and Flourishing Fund. And you have the Long-Term Fund, LTF. So, a lot of funders, and then you have all these groups that they fund. So, what I love is this map that someone made, and I linked to the guy who made it, that maps all the organizations that get money from Open Philanthropy and the others to advance the x-risk ideology. And it has the policy guys, the media guys, the ones who are doing research, whatever, and it looks like islands, you know, things in the sea and a valley. So, it's actually a map, and it's illustrating the islands and the places. And in there, you have the organizations, their logo, a link to the website, and one line of description. And when you see that map, and this is why I put it in my newsletter,
it's like opening your eyes to the magnitude of this thing, which just becomes so much bigger now with all this funding, and it grows every minute. So, one, there's a reporter with the Washington Post named Nitasha Tiku. And she wrote a story I did not like some time ago, when Blake Lemoine, who was still at Google, determined that Google's large language model was sentient. You probably all remember that story, and it spread all around very quickly. And I thought that it just took it on credulously and didn't do the kind of reporting we needed.
And I didn't like that story. However, Nitasha Tiku did another story more recently that I did like, where she actually quoted Timnit Gebru about how Open Philanthropy is funding clubs and classes and fellowships for students at Oxford, Stanford, MIT, Harvard, Georgia Tech, Georgetown, Columbia, NYU, because they're trying to, and I'll use loaded language here,
brainwash young students. And they're using these universities because universities, I know, having been at one, say, you want to give me money? Sure, I'll take your money. I'll do this. It all sounds good. It sounds like it's about AI safety.
It sounds perfect. But what they're really doing is using these institutions, the schools themselves and the institutions within them, classes and clubs and fellowships and so on, to spread these doomer ideas, these x-risk ideas, to the next generation of scientists and leaders. And that sobered me a lot, thinking that they've got a major strategy here. Meanwhile, you did some reporting some time ago on the Existential Risk Observatory, don't you love these names?
Yes. How they were testing the messaging. They're using marketing techniques to test the messaging. Do you remember? Can you talk about that for a second? Yes. So, I came across one of their studies and then I went down the rabbit hole of all the other studies.
It's just because, again, I'm googling things like God-like AI and I'm looking for the phrases. So, they're doing those surveys, as I said, where they test: OK, these are the narratives and headlines that we successfully put in the media, so let's ask people what their thoughts are about them, whether they're convinced. So, they ask them about their worries before and then after every test. And then they check what worked best. And some of the things I found very amusing. This one was not done by the Observatory, but by the Campaign for AI Safety, which I called the Campaign for AI Panic. And in there, there's this guy who uses a market research startup that uses AI to do research against AI, which I find hilarious.
And in there, he's checking: OK, if I say that it's out of control, does that work better than saying it's superintelligence? They really are testing the actual words, and then they analyze, as I said, female versus male, by age, Republicans versus Democrats, etc. So, for me, it was really, as I said, eye-opening to see how hard they work on their messages. And then they share it in places like the Effective Altruism Forum so all the organizations can read it, learn from it, and, you know, implement it in their advocacy later to policymakers.
So that's the important takeaway that I think people should understand is going on. So, I'm looking at my manuscript, and I listed them. You've talked about a lot of them already. But if you just put all these names together, you have the Future of Humanity Institute, which is Bostrom's; the Future of Life Institute, which was the source of the moratorium letter; the Center for AI Safety, which issued the next letter; the Center for the Study of Existential Risk, which must help fund the Global Priorities Institute; the Center for Security and Emerging Technology; the Center for Effective Altruism; and the Forethought Foundation for Global Priorities Research, run by Bostrom's partner at Oxford, William MacAskill. Those names. It's like they hired a naming company. Or they tried ChatGPT. Right, yeah.
If they didn't actually do that, they knew better than to spend the time doing this work themselves: please offer 100 variations of AI safety org names. Boom. Yeah, five seconds later. And what I find funny is sometimes they say, our organization's entire purpose is to prevent extinction from AI and to make sure the human species is flourishing among the stars. And I'm like, really? Yeah.
And then they get $100 million. So, I guess, you know, it works. Yeah, apparently, something's working. So, I have a question around all this that keeps bubbling up for me when we talk about these things, as we have a lot in the last, I'd say, half a year, Jeff Jarvis, at least, in the conversations on the beta of this show and This Week in Google and everything. I understand that the machinery behind a lot of what we're talking about is highly flawed: x-risk, effective altruism, the funding that goes into all these ideologies that have some sort of purpose, some sort of driving point and destination in mind in kind of cultivating the fear around artificial intelligence. But if we were to remove all of that entirely and say those things don't exist, artificial intelligence still does, and I think there would still remain in some people's minds some sort of concern about this technology, because it's the fear of the unknown. It's this thing that they don't know about. It's this thing that's capable of doing something they didn't think was possible before. And if that's the truth, then what does that mean? How can these kinds of concerns be explored in a responsible way and taken seriously, without buying into something bigger?
You know what I mean? Because that fear will exist. It's going to exist. There are too many people on this planet to expect that they're all going to see through these things and immediately understand that there is no need to be fearful of it, or maybe no real risk at the end of the day. But how do we do that?
I don't know if you have the answer. Jeff? No, it’s for you, Nirit, you are the expert, you take it. So, my angle, my small niche, is the tech journalism part of the puzzle. Right?
This is what I've been looking at for the past 20 years. And as you said, responsible journalism, they need to do the work. And I always lecture about it. As you said, if you're going to quote AI Impacts, maybe invite Melanie Mitchell to give her a quote in there.
So, just balance those fears with other rational experts. And just give room to other voices because those doomers became media heroes. They were like the stars everywhere. And they had their time.
I get it. People were afraid, as you said, of the capabilities and freaked out. I mean, yeah, it's a jump in capability. We can see that it's more capable than we thought. And it created this snowball of panic. But then you need to have people who will say, this is what we should expect.
And these are maybe the harms that we need to deal with right now, and just balance it. Am I naive? No, you're hopeful. Hopeful. Are you getting... I've sent reporters to you, and I don't know that they're calling you. And I also tell them to call Margaret Mitchell, and Emily Bender, and Timnit Gebru, and so on. But I rarely see them quoted. I tend to only see quoted in news coverage the A.I.
boys themselves, the same people who are propagating this. Have you seen any progress? Are there any tech reporters you think are doing a good job of trying to balance this, of trying to dig below the obvious surface of this messaging? Oh, of course. There is hope.
There's always hope. OK. So, there's Chris Stokel-Walker, who is the most productive tech reporter on the planet. He's a freelancer, so he publishes every day all over the place.
And he's doing balanced work. There's Sharon Goldman at VentureBeat, who really examines different angles, taking people from the A.I. community and interviewing them in very thoughtful pieces. Will at Wired. I can name more; there are good A.I. reporters out there, definitely. Some of them I interviewed for my next book, by the way.
Tell us about that next book, if you would. Oh, so as Jeff mentioned, the Techlash demonstrated how the reputation of the tech industry can rise and then fall. And my point is that it can also happen to ideologies. And what's interesting to me is the effective altruism and existential risk ideology and how it rose to power.
We can see it, as you said, in academia, media, policymaking; it rose to power. But it can fall, and I think it will fall. And that will be an interesting story to tell.
So again, maybe it's me being optimistic here. I can see a backlash coming because, as Spider-Man fans know, with great power comes great responsibility. And also, with great power and influence comes more scrutiny that you're going to face, right? So, more and more people now realize the magnitude of this movement, the funding, and the influence campaign that's going on there, operation, you might say.
And, you know, sunlight is the best disinfectant. And I think people will gradually understand what we're dealing with. And there's going to be a backlash and hopefully with some reckoning about how we let them be so influential in the first place. We're not there, but it will happen, and a book takes time to write.
So, I'm collecting all the materials about the rise. And hopefully, by the time the book is out, there's also the fall. That's optimistic.
I’m an optimistic girl. Oh, do you, do you? We need to let you go in a minute here, but let me just ask you an unfair question. Do you think that the people at OpenAI and Anthropic, let's say, which are both very much involved in these ways of thinking, are, I don't know how to phrase it, stupidly, good guys or bad guys, or responsible or irresponsible, or headed in the right direction or the wrong direction? Those are two different companies. Anthropic is heavily inside effective altruism and doomerism, all over.
OpenAI is more complex and nuanced. Yeah, it is these days, as we saw from the board firing Sam Altman, with all the drama there. Um, I think, in the end, if you put so much power in their hands, they're going to do good things and crazy things. And we see some of this and some of that. And you need to speak truth to power and hold power to account. And we need to look at them as those powerful players who are just doing whatever they want and talk about doom because this is what they believe. And we just need to highlight that stuff, criticize them, look at everything with a critical eye.
That's the only thing I'm requesting, I guess. You're asking for journalism. Journalism scholar, what can I say?
That's what I do. Whoa, whoa, whoa, whoa. That's crazy talk.
Asking for journalism. Let's tamp it down a little bit. No, I think the work that you're doing is tremendous. And I didn't mean to ask a question
that might have gotten a little uncomfortable there a little bit earlier. But I think it's important, because people want to know that their thoughts and their feelings and, you know, all of that, when it comes to things they don't understand, are being respected. And some people will be afraid of this regardless of how good the journalism is around it.
There's not a whole lot we can do about that. I think, Jason, the word you just used is so important: respect. It's about respecting the public enough to inform them, so they can make their own decisions about how nervous they are, whom they trust or don't trust. But a lot of this coverage is dumbed down to the extreme, not trusting people to think for themselves about this stuff. So, if you can hold up my cover of the Techlash book again. Happily, Nirit. That's the entire thesis, the pendulum swing. So, we only have the extremes and not a lot of middle ground, unfortunately.
Yeah. Well, I second Jason: you're doing just great work here. And I'm ever your fan and follower. What's your... DrTechlash is your Twitter handle?
Yes. Where do you, what, what platform are you using mostly these days for social? Twitter. Still Twitter. Yeah. Okay. Because it's people.
Depressingly. So, follow DrTechlash, retweet her great work, because it is important to get this message out to other technology people and journalists as well. And I'm really glad, that's right, for what you do, Nirit. Thank you so much. I love you guys. Thank you. Thank you.
AIpanic.News is the newsletter as well for people to check out the writing that you're doing around this. Nirit, thank you for spending time with us today. It was great to see you again.
My pleasure. We'll certainly have more opportunities in the future to bring you back. So, we'll see you around the dystopia. That's right.
We'll try not to get too, too deep into the hole of dystopia in the meantime. If we don't go extinct by then, Jeff. Exactly. All right. Take care, Nirit. We'll see you soon. Thank you.