Jason Howell and Jeff Jarvis break down Greg Epstein’s thoughts on tech worship culture in Tech Agnostic, OpenAI’s legal battles with publishers over copyright infringement claims, Amazon Alexa’s generative AI struggles with hallucinations, Google Notebook LM’s friendliness tuning updates, and more!
🔔 Support the show: http://www.patreon.com/aiinsideshow
Note: Time codes subject to change depending on dynamic ad insertion by the distributor.
0:01:48 - Interview with Greg Epstein
0:35:35 - Biden’s administration proposes new rules on exporting AI chips, provoking an industry pushback
0:38:39 - NVIDIA Statement on the Biden Administration’s Misguided ‘AI Diffusion’ Rule
0:40:42 - Biden signs ambitious order to bolster energy resources for AI data centers
0:42:08 - OpenAI presents its preferred version of AI regulation in a new ‘blueprint’
0:43:39 - The New York Times takes OpenAI to court--again
0:46:43 - Meta Secretly Trained Its AI on a Notorious Piracy Database, Newly Unredacted Court Docs Reveal
0:43:21 - Amazon's upcoming Alexa AI brain transplant might make you use it more than just weather and timers
0:57:35 - Adobe’s new AI tool can edit 10,000 images in one click
0:59:33 - Slopaggedon: Behold the AI Slop Dominating Google Image Results for "Does Corn Get Digested"
1:04:53 - Google’s NotebookLM had to teach its AI podcast hosts not to act annoyed at humans
Learn more about your ad choices. Visit megaphone.fm/adchoices
This is AI Inside, episode 51, recorded Wednesday, January 15, 2025: Tech Agnostic. This episode of AI Inside is made possible by our wonderful patrons at patreon.com/aiinsideshow. If you like what you hear, head on over and support us directly, and thank you for making independent podcasting possible. What's happening, everybody? Welcome to another episode of AI Inside, the show where we take a look at the AI that is layered throughout so much of the world of technology.
I'm one of your hosts, Jason Howell, joined on the other side of the country here in the US by Jeff Jarvis. How you doing, Jeff? Hey, boss. Welcome back from CES. Yeah.
CES. And I didn't get sick while I was there, so it was amazing. You know? That's phenomenal. Have your feet recovered?
My feet are fine. Yeah. I was really only there for, like, 48-ish hours, maybe a little bit more than that. So it wasn't enough time to get completely destroyed. But I think I've got ideas on how to do next year differently and to see more, you know, because it was too short.
I need to expand it for next year for sure. But it was fun. It's always fun to kinda travel and see new things and everything. Good to have you here. First, just super quick, wanna thank our patrons, our amazing patrons, patreon.com/aiinsideshow, including Thomas, one of our amazing patrons.
Thank you so much for your support each and every week. We could not do this show without you. And secondly, if you have not subscribed to the podcast, please head over to aiinside.show and do that. Then you won't miss it, and we don't want you to miss it. Because if you miss it, then you might miss shows like today where we have an amazing guest lineup to join us.
And we might as well dive right into it, because often on this show we talk about, well, dare I say, the worship of artificial intelligence. Really, in the world of technology, AI has dominated the last couple of years. And in the same way that, like, crypto bros and whatever, there are these factions of people that are like, this is the second coming of technology, this is the biggest thing to happen in the last, you know, millennia or whatever. And I think today's guest can speak exactly to that because, well, he's written basically a book about that.
Greg Epstein is the humanist chaplain at Harvard and MIT, a leading voice on ethics and technology, author of Tech Agnostic, How Technology Became the World's Most Powerful Religion, and Why It Desperately Needs a Reformation. And, Yeah. Greg, it's a pleasure and an honor to get you here with us today. Thank you for taking time. Thank you so much, Jason and Jeff.
And, Jason, I appreciate all the religion metaphors you slipped in there, the worship and whatnot. Or is it even a metaphor at this point? I mean, I don't know. These are our overlords we're speaking of. We must be more polite to them.
Yeah. I mean, you can tell me. You wrote a book about this stuff. Your book really calls for a critical rethinking of our relationship to technology. And I can only, like, speak from my own experience.
Like, I feel like everyone who is attached to technology on a personal level probably has their own personal experience. You know, for me, dating back to being a kid and being fascinated by technology, this has followed me around throughout my life. It's informed what I do as my job, as my career, and I've never really thought of it in terms of, like, oh my goodness, is technology kind of a form of religion that I've attached myself to? How does this analogy help us to kind of understand our individual relationships to technology? Sure.
I mean, first of all, I'll confess. I mean, I'm in one of these interesting situations having written a book like this where, you know, I've got a hammer. Maybe it's all I've got, and so everything looks like a nail. You know, I admit it. Right?
I work as a chaplain at two interesting institutions, both of which have had some influence on the creation and shaping of our AI world. And, you know, I'm a chaplain, which is typically a religious adviser, but I'm non-religious. You know, I identify as atheist if you want, as agnostic if you wanna get super philosophical about it, more aptly as a humanist, somebody who's trying to be good without God, who's trying to live a good life, build a healthier community and world, in a non-religious way. And, you know, I had been working in that capacity for about 20 years now, a little over 20 years, at Harvard, the last six at MIT as well. And, you know, essentially, what I thought of myself as trying to do was build community.
Right? I mean, that's what I wanted to do. I felt that, and I still feel that, you know, although I'm personally not religious, there's a lot that is valuable to human beings about religion, and probably chief among that is the sense of community that it can bring. You know, our society today is increasingly isolated. I really want people to feel more deeply and meaningfully connected to one another. Right?
So I set out to view technology as I mean, rather, I set out to view religion as what I was taught to see as the world's most powerful social technology. That's the kind of conversation that I would have had at Harvard Divinity School, where I studied, or many other places. But after about 10 years of building this formal congregation in Harvard Square, raising millions of dollars for it, spending millions of dollars on it, I began to get this sense that something wasn't quite right, that it wasn't maybe the work that I felt like I wanted to do for the rest of my life. And I started to realize, like, oh, religion is not necessarily the most powerful social technology in the world anymore. This is around 2017, 2018.
Tech is now the world's most powerful social technology. You know, I thought of myself previously as trying to bring the world closer together, but I didn't make that phrase. That's Mark Zuckerberg's phrase from the rewritten Facebook mission statement of 2017. And it's like, these guys, and they were mostly guys, the leaders of them, set out to do something that was sort of, you know, in Zuckerberg's language, almost ripped from the pages of the textbooks and discussions that we would have at Harvard Divinity School about what it looked like to build community. But does that mean they were doing it well?
Does that mean they were doing it ethically, sustainably? I mean, you know, it's not that it had nothing positive to say for it. But in many cases, it was just not good. Right? And so I began to think like, okay.
Are there other ways in which tech and religion can be compared, you know, usefully? I started making a list, and I've never stopped to this day. This was about six years ago. And, you know, to now give a briefer answer to your pretty brief and, you know, important question. Right?
If billions of people had begun to devote themselves, in the way that we have to tech and AI in the last few years, as fervently, as thoroughly as we have to this new phenomenon of tech, and believing in some of the really strange ideas that tech and AI are associated with, that I think we'll get into a little bit in this conversation. If that happened, you know, and we were genuflecting before, you know, real altars the way we are hundreds of times a day to our stained glass black mirror altars, the bible that we carry around in our pocket with us everywhere we go, we'd know to be more critical of it.
You know, it would be like, woah, bro. You need to start exercising some critical thinking. This is not okay. You know, you are spending hours on this religion. But with tech, it's perceived as okay, totally normal, really a smart thing to do.
And it's, you know, it's scary, because these are becoming, you know, trillions of dollars' worth of companies that are demanding our attention, our devotion, our worship more than ever before, more than anything maybe has ever before. And, you know, we need to demonstrate more critical thinking about it. So, Greg, there's so much I wanna talk about. Oh, wow. And it's rolling.
The whole universe. And so I wrote a book about the early days of print and was fascinated by the relationship of the technology of print to the Reformation. Yeah. It wasn't technological determinism, but it obviously had an impact. It was enabling of Luther.
And then I'm fascinated too about how, when time goes on, the technology tends to fade into the background and become and you write about this somewhat in your book. And so I think that at first it becomes more mysterious, and mystery is about religion. Mhmm. And people do kind of worship it, because they think it is all powerful. And then the question I'm trying to get to here is, what happens as time goes on?
That, at a certain time, print became just a thing that was around everybody's lives, and it wasn't mysterious, and it wasn't powerful. It was in our hands to do something with. And my hope is that the same thing will happen here, but you kinda write about this a little bit: you want the technology to be at the forefront of what we're thinking about so that we're aware of it. But I also think that if it fades into the background, it becomes less devotional, kind of. Mhmm.
So two parts to this. One is, I think we're going through a phase. We're going through two arcs. One is the Internet, where I think we're coming maybe not to the apex, but we're more distant from the early days of the Internet now. But then here comes AI, which again has been around, but now everybody's aware of it generative AI and all the crazy philosophies that are around it, transhumanism and extropianism and longtermism and so forth. Especially in the AI world, you do have a religious bent. And so I'm wondering about that.
What's the sequence you see here in terms of technology seeming all powerful and mysterious and religious, or familiar? And is that familiarity good or bad? Sorry for the long question. I feel like I'm just gonna start over here. Fascinating topic, Jeff.
And I, you know, I discovered your work last year, actually, and have really enjoyed following you and your really unique and beautiful voice. So thank you for all you do. And I'll just say, look, it's a great opportunity for me to say, from pretty much the beginning of this conversation, when you raise the idea of the printing press: you know, I'm not anti technology. No. No.
Right? Just like I'm not anti religious. Right? So it's how do we use it? What's our relationship to it?
Right? I'm an atheist who is, you know, very much willing to work with religious people when they're doing good things, when they're believing in worthwhile things. You know, I might disagree with the theology, but I appreciate the centeredness on human justice and on human Mhmm. compassion that you find in a lot of religion. And, you know, similarly, in the world of tech and what I would call the tech religion, you know, there's a lot of good that various forms of technology have done since the very beginning. I trace in the book technology, you know, what is technology all the way back to midwifery.
The idea that as the human brain developed and got bigger than the primate brain, groups of women had to come together and help each other to deliver babies, so you needed social technology. Otherwise, you know, you couldn't have the human animal. And so technology goes all the way back, I think, to that, millions of years ago. And, you know, these are good things by and large, but, you know, individual forms of technology can do harm, can fall into the wrong hands, just like individual forms of religion. Right?
Mhmm. So, you know, the question I think that comes up for me is, who is it for? You know, if you've got the, you know, the ancient or the historic technology that Jeff is talking about. Right? Mhmm.
You know, that stuff, it's not all in the hands of the people who are creating it. Right? I mean, you know, yes, you've got people who are creating the printing press, and they have some control over what's going on in society. They've got But that diminishes as time goes on. Yeah.
You know, to the extent that it is right now, right, where, you know, you've got these major corporations with, you know, trillions of dollars invested and investing in these technologies that they stand to benefit from. So it's like, of course, we're told that the AI is gonna be great for humanity. Right? But, you know, we're already seeing studies now that show that more AI equals less critical thinking. We're seeing, you know, people fall in love with their chatbots.
You know, we're seeing all this stuff, you know, we're seeing people who are being persuaded by chatbots to take their own lives. And, you know, all of this stuff is just sort of being unleashed on society without any regard for, or at least without anywhere near enough regard for, safety, for, you know, for ethics. And so, you know, will some of that normalize? Well, I mean, I guess one would like to think that there'll be a sort of distributed, you know, version of AI that will be less top heavy, less centered on, you know, the trillion-dollar corporations and more beneficial to people. But, you know, I'll believe it when I see it.
I mean, in the sense that, you know, at this point, I think we need a lot more evidence to suggest that that's where AI is headed. Because right now, you know, it's incredibly expensive, and it's in the hands of the most powerful people in the world, turning them into even more influential oligarchs, or broligarchs, if you'd like. Especially now. Yeah. So I'm curious about your prescription for this.
And as you're talking, I think it's hard to separate out AI from the Internet. Because the Internet, I argue in my latest book, is a human enterprise. It is about bringing people together. It is about connection, for good and ill. And AI is still in the hands of the wizards.
Though I think that will change as the tools become more available. So let me go back to the internet a little bit in terms of your thinking. I've long argued that Facebook should have and God knows it's changed in the last two weeks or so, in terms of all of, I was gonna say Musk's, Zuckerberg's, new proclamations. But I've long said that Facebook should have begun with a North Star, a raison d'être.
This is why we're here. And so we could have had some call upon it. Or, in the book, I use the more religious word: a covenant of mutual obligation. Facebook just now obligates its users. It doesn't obligate itself to accountability.
But I wonder whether that would go against what you were saying, in that it would be Facebook acting too religious, too much like a religion, like there's too much power over people. Is it better for a technology company to be itself agnostic, or is it better for a technology company to try to say that we want to bring out the best in people and make those judgments about what is better and worse? Well, you know, the idea that it should have started with a North Star, I mean, it just made me sort of smile painfully, like, it did start with a North Star. The North Star was Mark Zuckerberg and his friends wanted to rate the young women in their dorms and see if they were hot or not. They wanted to, you know, crack crass jokes that, you know, centered them as privileged young white men.
That was the North Star of the thing. Right? And then it centered Harvard, because it was, of course, you know, available first at Harvard. And then it centered the Ivy League. And then, you know, on and on.
And, you know, the North Star of this thing imagine if the North Star, you know, one day just became unreliable and kept shifting around in the sky. You know? I mean, Zuckerberg keeps changing with the wind, because the North Star for him is what I describe in the book, and I, you know, wrote a short piece about this for Time in case people wanna start with something brief. It's about filling a hole inside that is very common among young men of the sort of demographic that starts Facebook and that then, you know, proliferates it. It's this idea that I describe as something called the drama of the gifted technologist, which takes inspiration from a book called The Drama of the Gifted Child by a mid-twentieth-century psychologist named Alice Miller, who essentially argues that people who, you know, are otherwise fairly well-to-do, who might have, you know, relatively good families with relatively loving people, might very commonly feel a sense of inadequacy, of lack of wholeness, of a hole in themselves that they think can only be filled by being exceptional, by being the best, by winning, by demonstrating that they're so much better than others.
It's widespread. It's kind of the official psychopathology of both the Ivy League and Silicon Valley. And, you know, the tech world both is inspired, if you could say, by that kind of thinking, and it also exacerbates it or puts it on a kind of digital steroids. Because, you know, there's this self-mythologizing that goes into all these tech companies, like, hey, we are prophetic.
Our investors are prophetic. We are going to save the world. So invest in us, and then you will see, you know, what I call the theological symbol of the tech world, which is the hockey stick graph. You will see profits go up exponentially forever into eternity. And, you know, that's a big part of what tech is all about.
And I guess, you know, you ask, like, is there too much religion? And what I would say is that it would be wonderful if we could talk about the kind of thing that you see in liberation theology, a trend in theology that says we're meant to liberate other people through a particular kind of religion that centers, like, let's make a world that's more just, that's more kind, that's more accepting, that's more welcoming. And, you know, yeah, sure, all of that would be great, but that's just not what this form of Silicon Valley tech is.
Its North Star is filling the hole inside, not reaching outward and making the world better. Mhmm. Yes. Yes. Thank you.
I wish I'd had that line before I wrote the last book. Slide that right in there. Something that kinda came up for me is, I've been kind of, you know, hearing this conversation around the different ages. Like, obviously, right now, you know, we're heavily in, you know I mean, what do you want? Do you wanna call it the Internet age or the information age?
We're not really quite sure, or if we've moved beyond that to something else. You know, there was the industrial age, the Anthropocene now. Right? Like, we're at the dawn, some people would say, of the post-human era. I mean, I think that's really scary and overblown at the same time. But, certainly, that is what I think certain people would tell you that they're trying to usher in.
Sure. Sure. Okay. So then, given this moment that we are seemingly in, and knowing that we have history to pull from with all of these other ages, I guess where my mind is at right now is: there really does seem to be a lot of energy around this kind of deep, deep devotion to the power and the promise of artificial intelligence, for whatever the reason is, be it profit, be it, you know, true dedication and belief in what the technology can do for humanity and all these things. If we look back at previous ages, is there a similar correlation to draw between the technologies of the time and the reaction to them?
And what lessons can we learn from a historical perspective to apply to where we're at right now? Or is this moment just so much different, because the technology is so much more advanced, that it makes the scenario different and hard to compare? So this is why I actually really did feel that the premise of this book, that technology has almost literally become the world's most powerful religion today, was gonna be useful. Because, you know, yeah, you can look back at the history of different technologies and say, well, you know, how do we compare the emergence of this technology to those? But there's never really been a technology that self-mythologizes quite the way this one does, along with its seeming ability to perform miracles. Right?
I mean, that's how it's been marketed. You know, there have been a number of scholarly papers that have shown, you know, how it really is being presented to us as literal magic. And, you know, when Sam Altman got on the stage, for example, at Harvard's Memorial Church, on the bema, as my Jewish brethren would say, or the altar, and literally suggested, underneath a golden cross, that his technology was miraculous. Right? It really is being presented in this special way.
And so I thought, well, there are some precedents in the world of tech, but where I'm seeing enormous precedent for the kinds of technologies that are emerging today is in the history of religion. Where, you know, when you have these incredibly powerful new traditions emerge, new ideas and symbols and rites emerge, it can completely change, reorient, redefine our sense of what it is to be human. Right? I mean, you know, think of it this way, I guess. Everybody has a sense of what it is to disbelieve in religion, because, you know, you may be deeply faithful in your religion, you may not. But you have a sense, if you're a thinking person, that there are all sorts of major religions in the world. And, you know, you probably don't literally believe everything in all of them, nor do you probably, you know, fulfill all the rites and sacred obligations of all of them.
Right? And so, you know, you have this sense of, as I raise in chapter 2 of Tech Agnostic, this question of: when a messianic figure comes on the scene, how do we know that this is really our savior? How do we know that this new thing, this new person, this new endeavor is really going to be as good for us as it purports to be? Right? Like, Sam Altman, again, as an example, can tweet out, as he once did, abundance is our birthright.
Right? Sort of echoing Genesis and be fruitful and multiply. And, you know, ChatGPT and OpenAI really are making Sam Altman's life quite abundant. I mean, he's not wrong. Right?
I'm sure there is a lot of abundance going on in Sam's life right now. I mean For sure there is. Bill Gates, Mark Zuckerberg, these guys have a lot of abundance going on. No lie. Right?
The question is, is it gonna lead to that same kind of wonderful abundance for us? And the best comparison that I can make, perhaps, you know, or at least in short, is, you know, at the beginning of chapter 2, I talk about this guy named Sabbatai Zevi, who was the most important false messianic figure in history. And, you know, the question is, like, how do we know that that messiah is real, is worth believing? Mhmm. Interesting.
If if those were next step now with oh, sorry, Jason. No. Go for it. Go. My computer is very delayed, so I apologize.
Right, there's golden calves and graven images here around technology. But what gets me the most these days is this idea of humans creating the superhuman. Yeah. Artificial general intelligence, superintelligence, and the eugenicist roots of all of that. Yep.
It seems like is this a unique time, or is this a human trend, that humans think they can outdo even themselves and become, pardon me, godlike in the creations of what they make? Yeah. I mean, that's literally what we're talking about. Right? When we're having conversations about AI, you know, just as an example, there's a guy named Anthony Levandowski, who is a decamillionaire.
Right? Made over a hundred million dollars at both Google and Uber working on self-driving tech. He also, by the way, was sentenced to jail time for trade secret theft, intellectual property theft, and got out of it in the last couple days of the Trump administration when he got pardoned. But I digress. Levandowski created an official religion.
He calls it Way of the Future, and he filed papers with the US government to create this religion. And the idea is AI is becoming a god, for all intents and purposes. Right? Like, he's not saying that it's, like, actually Jehovah or Yahweh or whatever. But he's saying, for all intents and purposes, it's becoming a god, and it's gonna be pissed off at us if we don't start to worship it soon.
And, you know, then you've got people who actually even believe that the traditional god and the god that Levandowski is talking about are one and the same. Right? But, you know, beyond even that, right, like, another, you know, helpful example to sort of reframe it just a bit: you've got Arvind Narayanan and Sayash Kapoor, two Princeton scholars; a great book, AI Snake Oil, came out last year as well. In their popular blog, AI Snake Oil, they write that AI companies are pivoting from building God to creating products. Good. Right?
And yeah. You know, like, that is good. But the point is, you know, if the world's most influential industry today has to pivot from building a god, like, you know there's a lot of god building going on. Right? Right.
And, you know, it's all this passion to show how powerful, how exceptional, how, you know, how extraordinary the people who are building these gods really are. Right? Like, they're not saying they're making themselves the gods, but that's because they don't have the superpowers to do that. So the next best thing is make the god and then look at all the glory they get to bask in from their own creation. Yeah.
What's interesting about that is making the god, but then also, as we've seen, putting out the messages to say, hey, well, you know, we do need to control this. We do need to, you know, put barriers and boundaries in here and do it responsibly. We'll be the ones to help you and tell you how to do that. Right.
Right. So that makes them the prophet of the religion. Right? Like, if they're the only ones who know how to do the religion correctly, what position does that put them in? You know, it makes them the pope, the cardinal, the prophet, whatever.
And that's, you know? Now, have there, over history, been good popes, good priests, good prophets, good cardinals, good whatever, you know, good shamans? Of course there have been. And so, you know, I'm not saying that anybody and everybody that has ideas about how to make tech better is some sort of false prophet. No.
That's not the point. The point is, however, that, you know, we need a lot more critical thinking about which people are in the best position to benefit from having you believe that AI is this mystical religion that can either take us to a supernatural-sounding kind of heaven state. And I draw the parallels in chapter 2 of Tech Agnostic, too, you know, between talking about utopian AI futures and religious theology about heaven, or, you know, that it's going to doom us, that we're all going to a kind of AI hell. And I draw parallels as well between, you know, the idea of existential risk or AI doomerism, such as the kind espoused by Geoffrey Hinton, the recent Nobel laureate in physics who's not even a physicist, who recently declared, like, hey, up the number from a 10% chance that AI is gonna destroy all human beings in this century to 20%. Why not?
Because my scientific genius brain just came up with that number, so let's do it, humanity. You know? And so, you know, when you put yourself in that position to be that kind of theologian, you know, things rain down upon you like Nobel prizes or billions of dollars or trillions of dollars of investment in your company, whatever. But it's not all that it is cracked up to be. We call them moral entrepreneurs for a reason.
You know, they got yeah. Right. The product that they're building is their own form of morality. Mhmm. Yeah.
Yeah. Indeed. Greg, as we said before the show, I knew that our time would roll fast, because we always have so much to talk about on this topic. And I'm so happy that we were able to get you for a little bit of time to talk about the topics from your book, Tech Agnostic: How Technology Became the World's Most Powerful Religion and Why It Desperately Needs a Reformation. Folks can go out and get it right now; it was published just a couple of months ago.
So congratulations. Glad we finally got you on. Thank you, Greg. Yeah. My pleasure.
Thanks for having me. Yeah. We'll continue the conversation. Yeah. Yeah.
Yeah. More opportunities in the future to continue the conversation. You might need something about this in the news next week. So, you know, if you do, just let me know. Okay.
Alright. Sounds good. Thank you again, Greg. We appreciate your time and, wish you all the best going forward. Alright.
Take care. See you again. Bye bye. Alright. We'll talk to you soon.
Alright. Fantastic. Thank you for the suggestion, Jeff. You were the one that reached out and said, hey, you know, it'd be a good idea to look into Greg, and I'm really happy that we were able to get him on to talk about this.
Really ties in very, very well with It sure does. A lot of what we've been talking about. Yep. We're gonna take a super quick break. And then when we come back, we do have some news to talk about.
I'm sure some of this news interweaves with the information that we were just talking about before. So hold on tight. We'll be back in a second. Alright. This is the last full week of the Biden administration here in the US, and it seems like very suddenly Biden's throwing a bunch of basketballs at the hoop to say, okay.
We gotta I gotta get some movement here on my own kind of plan around artificial intelligence. So there are a couple of things that have happened. The administration introduced a rule that would restrict the export of GPUs to China and other nations that they have deemed to be adversarial. It's a three-tier system. And so in the first tier are 18 allied nations with unrestricted access to GPUs: you know, Australia, UK, Japan, and more.
The second tier covers most other countries: limited GPU exports without a license, and caps on high-end GPU shipments. And then the third tier is the tier that you probably don't wanna fall into, and it is, you know, your China, Russia, Iran, North Korea. And the reason you don't wanna fall into it, if you're one of those countries, is that it requires licenses for GPU exports, which would generally be denied. So if you fall into that third tier, there's probably not gonna be any export action between the US and those countries when it comes to artificial intelligence.
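For reference, here is a minimal sketch of that three-tier logic as described above. The country groupings are illustrative placeholders pulled from the examples mentioned, not the official lists in the rule, and the treatment strings paraphrase the reporting.

```python
# Hypothetical sketch of the proposed three-tier GPU export rule.
# Country groupings are illustrative placeholders, not official lists.
TIER_1 = {"Australia", "United Kingdom", "Japan"}    # ~18 allied nations
TIER_3 = {"China", "Russia", "Iran", "North Korea"}  # adversarial nations

def export_treatment(country: str) -> str:
    """Return how a GPU export to `country` would be treated under the rule."""
    if country in TIER_1:
        return "unrestricted access to GPUs"
    if country in TIER_3:
        return "license required for GPU exports; generally denied"
    # Tier 2: most other countries.
    return "limited exports without a license; caps on high-end GPU shipments"

print(export_treatment("Japan"))        # tier 1
print(export_treatment("Brazil"))       # tier 2
print(export_treatment("North Korea"))  # tier 3
```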
And, of course, all of this is being done in the AI race, to make sure that we, the United States, stay supreme, or, you know, raise our chances of being the winners in the AI race. Yeah. It's interesting to see that, on the one hand, I think Biden is trying to strengthen the burgeoning AI industry in the country, but also trying to hold it in the US. Nvidia stock went down as a result, because their export market for chips might be restricted. It tries to hold on in a way that America got so accused of with the Internet itself: we were said to be running the internet and not letting anybody else in, which I don't think was the case, but I get the jealousy, let's say.
So it's interesting. It's a yin yang here. I'm gonna create more things for energy and domestic chip building and data centers and all that, but I'm gonna try to limit it to the US and not acknowledge the global nature, I think, of AI. And it's this threat idea, Jason, true, that hit the whole TikTok story, that China is a threat on TikTok. And I'm really nervous about TikTok going away myself.
I'm no fan of China and authoritarian regimes, but, I think we have to deal with the rest of the world. So I don't know if it's possible to control it to this extent that people wanna control it. Yeah. Yeah. Yeah.
You mentioned NVIDIA not being happy. They put out a pretty sharply worded statement. It wasn't just, like, we don't approve. They had some words to share on that, for sure. Rather than mitigate any threat, the new Biden rules would only weaken Americans' global competitiveness, undermining the innovation that has kept the US ahead.
And that's some of the nicer things that NVIDIA had to say as far as that's concerned. But, yeah, it's interesting. And, by the way, this is not like law that's going into effect. This is open for public comment for 120 days, which is kind of interesting, putting this in at the very last part of your presidency, and then we've got the new administration, the Trump administration, stepping in next week. And, undoubtedly, my guess, my hunch, is anything that has kind of the taint of the previous administration in it? No.
Out with that, because I'm gonna come up with my own thing. So I just feel like this is probably dead in the water, to be honest, but I could be completely wrong. It's also interesting, though, to see this is where the Musk versus Altman versus Microsoft versus Google comes in. It's hard to see where the Trump administration is gonna go with the technology companies, particularly because Musk has his own view, and he's obviously close to the throne. He's fighting with OpenAI.
I don't know what his relationship is with NVIDIA. He's not necessarily friendly, I think, to either Microsoft or Google or Amazon. So is the Trump administration gonna benefit all technology or just some players' technology? Right. It's too early to tell.
That's a real big question. Yeah. I think, well, I think, you know, along these lines, we'll start to learn that pretty quickly, I have to imagine. And it could change frequently too.
Well, yes, that much we know as well. You just never know if you're on solid ground or not. Another executive order issued by the Biden administration is to accelerate the development of infrastructure for AI in the United States, and this is all about building large-scale data centers using clean energy in the facilities to support AI operations. Although, notably, it doesn't address water consumption, and data centers are very thirsty, so you'd think that would certainly be part of this. My understanding is it doesn't really address that aspect of it, so that's got some people worried. But it really, you know, again, goes back to fortifying national security, reducing the reliance on those foreign AI tools, and all with clean energy.
So it's a win win win. Yeah. But I think you're really right, Jason. I think we might solve, at least at that level, the clean energy issue at some point in the future. I hope we do.
But water is another issue. And with climate change, who knows what's gonna happen there? Yeah. Indeed. The locales are gonna matter with the water; watch what happens in California right now.
Oh, jeez. The localness of the water supply is critical. Oh, very critical. Yeah. Absolutely.
I've certainly, yeah, had some very, very close experiences with water and the lack thereof here in the state of California. And just one day prior to all of this executive order business, OpenAI had released an economic blueprint, in quotes, outlining its vision for AI regulation and development in collaboration with the US government. So it's kinda like I had this as its own separate story, and I was like, wait a minute, this came a day before the executive order. It feels like they're very intrinsically connected in some ways, even if they weren't named to be. But, you know, OpenAI is calling for significant federal investment in infrastructure: you know, chips, energy, talent, that sort of stuff.
It was critiquing state-level AI regulations, calling them fragmented at best, and really calling for streamlined federal policies, which, to a certain degree, seems like what Biden's administration was attempting to do. So I don't know how related these things are, but they certainly fall into a similar bucket. And this is the regulatory capture that we see happening: they're trying to get their voice in, and we see certain people going to the inauguration and giving money and trying to sit at the table. And what you have in OpenAI is what they've said all along: well, make some suggestions, but not really rules. And those suggestions should be the suggestions we suggest to you.
And so that's where OpenAI has been in all this, and it's a game. It's a lobbying game. And, you know, I left out of the list before Meta, too, and Meta's relationship to all of this. And their interests are not aligned, these various AI companies. So this is OpenAI trying to get its dibs in. We'll see.
Yeah. Yeah. Indeed. We have a couple of stories here that have to do with copyright. And this first one: the New York Times and other publishers are taking OpenAI and Microsoft to court this week, accusing them of, yes, of course, copyright infringement, training on their data with no compensation, and trying to get some, you know, some final movement on this.
OpenAI stressed this is not document retrieval. The New York Times, of course, says that OpenAI used its data for training, but also now uses the RAG approach, retrieval augmented generation, which essentially pulls more up-to-date information into the model's responses at query time. And the New York Times is claiming that if you've trained on our data and you're pulling in up-to-date information, there's no reason for anyone to ever go to the things that we write, because it's all available in OpenAI. And I think the New York Times had attempted to show its content being, you know, ripped off wholesale in the results that were generated through OpenAI. But, of course, OpenAI responded to say, well, that's after you go on a fishing expedition multiple thousands of times to try and find those responses.
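For reference, here is a minimal sketch of the RAG pattern being discussed, assuming a toy word-overlap retriever and a stand-in generate() function rather than any particular vendor's API. The point is just that retrieved, up-to-date text is added to the prompt the model answers from, instead of the model relying only on its training data.

```python
# Minimal sketch of retrieval augmented generation (RAG).
# The corpus, scoring, and generate() are stand-ins, not a real vendor API.
corpus = [
    "Publishers allege AI models were trained on their articles.",
    "Retrieval augmented generation adds current documents to the prompt.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Toy relevance score: count of words shared between query and document.
    def score(text: str) -> int:
        return len(set(query.lower().split()) & set(text.lower().split()))
    return sorted(corpus, key=score, reverse=True)[:k]

def generate(prompt: str) -> str:
    # Stand-in for a language model call.
    return f"[model answer conditioned on: {prompt[:60]}...]"

def rag_answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    # The model sees retrieved, current text alongside the question, so its
    # answer is not limited to what was in the training data.
    return generate(f"Context:\n{context}\n\nQuestion: {query}")

print(rag_answer("What does retrieval augmented generation add?"))
```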
Mhmm. Yeah. This is the suit that's been out there. What's different in part here is two things. One is that they've joined with the Center for Investigative Reporting and the Daily News, which amuses me, because I used to work at the Daily News.
The New York Times never helped out the Daily News in any way. And so they're trying to join together. And as the NPR story points out, there are now two camps: those who are suing, the New York Times, the Daily News, and others, and authors, and those who are making deals and getting bags of money and not suing.
And we see there it goes. So meanwhile, of course, Microsoft and OpenAI have told the court as well that they want the suits dropped, that this is a matter of copyright fair use and transformative use, the arguments we've heard. So these are just steps in the court play. And I don't think we have any signals yet to see where this is gonna land.
No. But, you know, this case is really at the stage where the judge ultimately decides whether to proceed or dismiss. And one of the things that really stands out in this article and the coverage around it is that, at the worst stage, OpenAI would be ordered to destroy its dataset and start over, and that would just be catastrophic for the business, whether people want that to happen or not. Right. I mean, it would yeah.
Anyway, what a big deal that would be if that happened Right. considering where we're at right now with OpenAI. Right. Yeah. And so the next story you're gonna go to here why don't you do that first, and then let me riff for a second.
Yeah. Yeah. So this definitely falls into a similar category. Meta used LibGen, however you spell or pronounce that, a repository of, quote, pirated books that we've known about for quite a while now. I think it was created in Russia or something along those lines, to train its generative AI models.
A court released newly unredacted documents, so now, you know, that has been revealed, essentially. And this is the case filed by Sarah Silverman and others. We've been talking about this for the last year. This was first filed back in July of 2023.
And, yeah, so this is more just like a confirmation that, like, okay, now we know Meta was using this pirated books dataset for, you know, the training of its model. So I think there's a shape forming here that makes this discussion more sensible. And we've had it on the show. There's the question of training.
Is that fair use and is that transformative? There's the question of acquisition. Did you acquire a subscription to the New York Times? Did you acquire this book legitimately or did you steal it? Yes.
To read it is not to steal it, but to take it when you don't have any right to it is to steal it. So what are the rights involved there? And then third is this question of licensing for quoting. Where, yes, we wanna use your current material; yes, we wanna use your brand to give it credibility.
And, yes, that requires a deal. And those to me are three related but separate questions that I think courts are gonna have to slice up, and it's not gonna be an easy process, because it's gonna be 25 different suits, and they're gonna deal with different parts of this in different ways, and it's gonna take a long time, I think, to get to a legal structure here. Option two is legislation, but I don't see that happening very smoothly. So I moderated a panel at BDMI, the Bertelsmann digital media investment group, about two years ago.
And a lawyer who I interviewed said, you know, yeah, these are all good issues. They're gonna lollygag their way through the courts. It's gonna take forever. In the meantime, figure it out and come to deals.
And in a sense, you could argue that's what the moguls are doing by getting their bags of money, but they're not really doing it on the basis of principle. They're saying, okay, I won't sue you if you give me a bucket of money. Mhmm. And everything's okay.
Right? Rather than saying, let's come up with industry-wide principles about how this should operate. How could that happen? I think it has to happen at a higher level of industry associations. There should be, but it ain't going to happen now, because they're all looking for protections, legislation, and buckets of money.
And so if the industry association is just the big guys, and we see this happening right now in California with news companies, all they're going to do is try to benefit themselves, to hell with everybody else. So the industry associations are now lobbyists, and that's how they operate. So they're not really fair traders. Imagine an industry association for AI.
It's too fresh and new, and they're all fighting with each other. Yeah. So they're not gonna come up with any sensible structure here on their own, because they're all fighting. So what it means is there's vast uncertainty in this new industry. And if you're gonna invest in OpenAI, you're right, Jason.
The fact that there is a chance, maybe minuscule, but a chance, that they have to erase all the work they've done and start over with a very much limited set of training data, that's a risk you've now gotta disclose in investment documents. So uncertainty is not good in this field, but I don't see an easy path out of it. End of riff. Definitely.
Definitely not. And I think what came up for me around this, you know, I think it's great that you broke out those different branches. One of those branches, just to touch back on it: is it legal or within their rights to obtain this information to begin with? And when we're talking about this dataset, you know, I put "pirated" in quotes well, I think the title of the Wired story is "notorious piracy database," so that would indicate that there are a number of books in this database, and I imagine all of the books are a part of that, that, you know, are not cleared.
There are no rights to them; essentially, it would be illegal to download that repository. Is that right? Is that how the law works around this sort of thing? Like, I think about, you know, people who go on a torrent site and download a copyrighted movie. Is the act itself of downloading this thing that you definitely do not have the copyright for, or the right to download, illegal?
And if so, does that mean that Meta has then performed an illegal action by obtaining access to this database of pirated content? This goes back to the tragic case of Aaron Swartz, who downloaded a bunch of JSTOR at MIT and then was brought up on very serious charges as a result. I think what he was trying to do was liberate academic research for an enlightened society. Mhmm. And it was certainly not oftentimes, if you copy something or quote it online, you're presented as a pirate, and you're a criminal, and you're a thief.
Okay. That's one way to look at you, but you may be someone who is in fact trying to inform more people and open up knowledge in better ways, and the copyright has gotten overblown. And it's a legitimate discussion to have if, again, we can take out the emotion of it and the accusation of it, of people trying to virtue signal before Congress and judges. But that's not where we are right now. So Aaron Swartz was a huge loss to the world, a brilliant young man, who killed himself in 2013 because of exactly this fight.
Interesting. Okay. Well, we'll see how that trickles down. Like you said, it's gonna be a long time coming before we have any sort of We'll be talking about this in the future. Yeah.
But it's gonna be massive. Once the wheels are off, right, once we finally get to that court case, or whatever the case may be, where it's like, alright, now we've got established law, this is fair use or this is not fair use or whatever, there's gonna be massive change, regardless of which direction that goes.
It's gonna be really interesting. And then: Amazon has been working on bringing big change to its Alexa. I don't want to say it fully and fire off people's devices, because it turns out a lot of people have those devices. Whether they're still using them, I wonder. But things have been slowed down for Amazon due to a number of challenges: response accuracy, speed, reliability, hallucinations, which Amazon says must be reduced to nearly zero. Good luck with that.
Mhmm. And the response times need to be practical. All of which is to say I think what's interesting to me here is that Amazon was one of those players that was there in the early days of the voice interaction Yep. thing, which feels kind of like a major moment for where we're at right now with artificial intelligence: interacting with chatbots and, you know, using our voice to communicate with them and all that stuff. And it seems like a really prime example of a company that was there.
You kind of expected that if you're there with that first-mover advantage, you're gonna be riding the wave at the head of the pack along the way, and Amazon has really kind of drifted out of the lead and now needs to play catch-up. So Yeah. It's interesting. You can be too soon to this world. My friend Bill Gross at Idealab, who has founded more than 150 companies, has a TED talk out there, which is very good, about how he studied success versus failure in companies, and Bill has had successes galore.
He's also had failures. He did pets.com. And his conclusion in the end was that it was almost all timing. Yep. And so in a sense, madame was too soon with too little.
It wasn't able to do what you would expect it to do. And now that brand association has just set in, and I don't know if there's any fixing madame. Similarly, as we talk about all the time, Google was way ahead on AI, but it wasn't there as loudly when ChatGPT came out, and it's being seen as behind when in fact I don't think it really is. So, yeah, timing is, if not everything, a lot.
Yeah. Timing definitely plays a big role. Yeah. Google being there early with Google Glass, and then Mhmm. You know, it's like, oh, okay.
They're doing this thing. The world wasn't quite ready for it. And, you know, I'm not saying, like, suddenly we're seeing connected glasses everywhere, but we're certainly a lot more open to the capabilities and the technology now than we were 10 years ago. Did you see a bunch at CES? Oh, you know, I had hoped to see more.
I guess we didn't really did we talk much about CES? Because I know I got back and we got right into the interview. No. I mean, I had hoped to see more. And, you know, this is where I realized I planned as best I could for 48 hours, and I scheduled a lot, and it didn't leave room for discovery.
Right. And so I only had the time to go between places and meet with the people that I had already set up with, and I'm happy I did. I saw some, you know, really neat things and everything. But there's a whole list of stuff that I just wasn't able to see, because I didn't have the time in my schedule. And so next year, I'm gonna do it a lot differently.
But I know that there was a lot. I mean but you know what? I was also watching a fellow creator Lon Seidman's video, where he was taking a look at his coverage of CES a decade ago, which was his first year going. And he showed off some AI-connected, you know, Android glasses from that time that were a really big hit at the show.
And, of course, you know, probably months later, it fizzled away and died completely. But this whole, like, AI glasses or rather just connected glasses, let's say theme is around year after year after year at CES. Yeah. And it has yet to really catch on. So who knows if we're even at the point where it catches on, but it has definitely furthered in development.
And there were a lot there. So, gonna take a quick break, and then we've got a few stories to round things out. Yeah, I'm excited to talk about some of these stories here after the break. Alright. Adobe introduced a new feature, Firefly Bulk Create.
You know, I'm a user of Adobe products. I have their Creative Cloud suite. And so, you know, some of their AI products are just things that are integrated into the tools I'm already using, and they do a really, really great job. What this one is meant to do is streamline large-scale image editing tasks. So, up to 10,000 images in a single click: simultaneously, you can automate things like background removal, resizing, customization to your brand's assets, that sort of stuff.
And it's not available to everyone, but it is available in a beta in Adobe's Firefly web app. And, I don't know, that sounds pretty handy and helpful to me, because those tasks do take time when you open up each image and do it all manually. Just fire off a batch. And even if, like, 90% of them work and 10% don't, that still saved you an insane amount of time.
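For reference, here is a rough sketch of what that kind of batch pipeline looks like. remove_background() and resize() are hypothetical stand-ins for the model-powered operations; Adobe has not published a code-level API in this story, so none of this is their actual interface.

```python
# Hypothetical sketch of bulk image editing, in the spirit of Firefly Bulk
# Create. The edit functions are stand-ins, not Adobe's actual API.
from pathlib import Path

def remove_background(image_bytes: bytes) -> bytes:
    # Stand-in for a segmentation-model call; passes data through unchanged.
    return image_bytes

def resize(image_bytes: bytes, width: int, height: int) -> bytes:
    # Stand-in for a resampling step; passes data through unchanged.
    return image_bytes

def bulk_edit(src_dir: str, dst_dir: str) -> int:
    """Apply the same edit chain to every PNG in src_dir, one 'click' for all."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for path in sorted(Path(src_dir).glob("*.png")):
        edited = resize(remove_background(path.read_bytes()), 1024, 1024)
        (out / path.name).write_bytes(edited)
        count += 1
    return count

# bulk_edit("product_shots/", "edited/")  # thousands of images in one call
```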
Yeah. I looked at this as an individual consumer thinking, why would anyone want or need this? But then you see the use immediately. If you run a catalog and you wanna change the official color of your brand, boom, you can do it fairly easily, or change other factors in the visual grammar for a company or for a site. So, yeah, I guess it could be useful.
It's also just impressively powerful. Yeah. Totally. Yeah. And, you know, Adobe does a great job of integrating these things, even if they're on a top tier. A version of this, if it doesn't already exist, is probably coming to Photoshop at some point, I have to imagine. Maybe it won't be as powerful as this, but I'd love to be able to do some batch processing.
I am constantly removing, you know, foreground from background, and I'm not doing it as manually as I used to, but it's still a process, and maybe it always will be a process. It's just a different process, you know, with AI. It's not like you save time; you just do it differently. But, anyways, I think that's pretty neat.
Word of the day: slopaggedon. I hadn't heard that word before. No. I think it's brand new. Futurism has an article showing that a Google image search for "does corn get digested" shows just a ton of AI-generated slop imagery. Slop, of course, being the word that's meant to have a little bit of bite to it, associated with any image that was created by AI.
It's just automatically slop. It's gross. You don't need that. But in this case, you know, these images that are coming up in Google search for this particular question, the text is all AI-weird and kinda there, but not entirely. The anatomy is incorrect.
The information is just inaccurate, and, ultimately, the article makes the point that when you've got this slop so pervasive in a place like Google image search, it makes it even more difficult to find reliable information. That much is true. For sure. So my friend Matthew Kirschenbaum, who's a professor at the University of Maryland, wrote a wonderful essay about what he called the textpocalypse. That's right.
Yeah. As AI feeds on itself, it becomes gray goo. So that's the text version of the story and this is the visual image of that story. Mhmm. And both are ruining the web.
And I've said this before on the show; I say it on TWiG as well. Google gets blamed for getting worse. You know, that may be true to some extent, but I primarily blame companies and AI for ruining the web itself. And I don't know what happens here.
I don't know what you do, because this is gonna be hard to get around. So provenance will matter. Does it have human provenance, expert provenance? Maybe this is pushing us over an edge where we have to invent the institutions that matter. Pardon me for the plug here, but in The Gutenberg Parenthesis, I talk about the first call for censorship, in 1470, when Niccolò Perotti was offended by a translation of Pliny and told the pope, you gotta do something.
You gotta appoint a censor. And he wasn't really asking for censorship. He was anticipating the creation of the institutions of editing and publishing that would assure provenance and authority and credibility, and artistry to an extent, in publishing for half a millennium to come. So we're gonna have to invent some stuff to sift through this crap, to avoid this crap and find the good stuff. And that's gonna become all the harder.
I've long argued that in the social world, we end up on the net with more good stuff. You end up with more chaff, but also more wheat in it. As I think about it right now, in the AI world, that's not necessarily the case at all. You just end up with a tremendous amount of chaff that you never could have imagined before, and no way to cull through it. Yeah.
So this can ruin our web. Yeah. And it's so vast right now. Vast is the wrong word, but there's so much of it happening right now, being generated, being published, being distributed online, ahead of what you're talking about. And I guess what occurs to me is the ongoing legacy that the Internet creates in its path.
This collection of information and data sources and product and whatever. And if we're just cramming it full of slop, the slopaggedon, before we ever get to a point where we've created those norms, how do you even undo any of that? It's just a mess that somehow has to be sorted through, or accepted, I guess, to a certain degree. Or train an AI to be good at recognizing AI slop so we can get rid of it. I don't know.
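(If you wanted to play with that "train an AI to catch AI" idea, the plumbing might look like the sketch below, using Hugging Face's transformers image-classification pipeline. The model id and its label names are hypothetical; a real detector would need a classifier genuinely fine-tuned on AI-generated versus real images, so treat this as an illustration, not a working slop filter.)

```python
# Sketch of scoring downloaded images with an "is this AI-generated?"
# classifier. The model id and label names are hypothetical placeholders;
# a real version needs a model fine-tuned for AI-image detection.
from pathlib import Path
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="example-org/ai-image-detector",  # hypothetical model id
)

for img_path in sorted(Path("search_results").glob("*.jpg")):
    preds = detector(str(img_path))  # list of {'label': ..., 'score': ...}
    top = max(preds, key=lambda p: p["score"])
    if top["label"] == "ai_generated" and top["score"] > 0.9:
        print(f"{img_path.name}: likely slop ({top['score']:.2f})")
```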
Until they outsmart each other, right? So if you're enjoying this genre, you can go to the... is this Twitter? Yeah, I'm afraid.
Sorry, it is Twitter. The feed is Facebook AI Slop. So you will see a bunch of this stuff. And that's what Wired quotes here.
Or Futurism quotes, pardon me. Facebook AI Slop. I think I have found it. Yep.
You know? Oh, God. I'm really sorry that the first image was what it was. So anyways, okay. This is gonna be an interesting feed to scroll through.
Yeah. You know, at the same time, I have to say that AI slop has its own ridiculous, endearing qualities too. You know what I mean? People have a visceral reaction to AI-generated images and video that are just ridiculous and not accurate and all these things. But it's also kind of its own form of art, and in a certain way it's captivating because of that.
Yeah, it is. Because of its ridiculousness and inaccuracies, you know. I don't know. I think some people would probably argue that.
And then finally, another thing we talk about a lot: NotebookLM. Hi, Bronson. NotebookLM has been tweaked to reduce its annoyance with pesky humans like you and me. Back in December, Google had rolled out a new feature called interactive mode, which allows users to interrupt the AI-generated podcast with questions. The AI would, in turn, remark with things like "I was getting to that." You know, it was probably done nicely.
It was probably like, "Well, I was getting to that, but blah blah blah." But it felt, and I'm sure users responded this way in their usage of it too, adversarial, almost like it was annoyed that it was being interrupted. And, you know, it turns out these systems are trained on human interactions, so it's probably not wrong. You know? No.
I wish it were a switch I could turn on. I would love it, yeah, if AI got a little more testy, a little more prickly, a little more... what do they call my cat at the veterinarian? Spicy. Spicy?
I want spicy AI. I think that'd be fun. Turn on spicy mode. Yeah. And then you could tell people, you know: you idiot.
Just go ask the spicy AI, and it'll tell you what for. I won't have to. You're not worth my time. Yeah. I can almost see a whole new feature, a whole new business.
I can almost see a slider, a continuum between the nicest and the most upset or adversarial. They call it friendliness tuning, so that could even be the name of the slider, you know: friendliness tuning. But Mike Masnick has a great post that says, you've been sent here because you have said something wrong about Section 230. And so I kind of want the nasty AI, you know, that says to me, you've been sent here because you're wrong about something.
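(Mechanically, a friendliness slider like that could be as simple as mapping a 0-to-1 value onto the persona instructions fed to a chat model's system prompt. Here's a toy sketch of that idea; the function name, wording, and thresholds are invented for illustration, not how NotebookLM actually implements its tuning.)

```python
# Toy sketch of a "friendliness tuning" slider: map a 0.0-1.0 value onto
# persona instructions you'd prepend to a chat model's system prompt.
# Wording and thresholds are invented; not NotebookLM's actual mechanism.

def friendliness_prompt(level: float) -> str:
    """Return persona instructions for a friendliness level in [0, 1]."""
    level = max(0.0, min(1.0, level))  # clamp out-of-range slider values
    if level < 0.25:
        tone = ("Be blunt and a little prickly. If the user interrupts, "
                "say so, e.g. 'I was getting to that.'")
    elif level < 0.75:
        tone = ("Be neutral and professional. Acknowledge interruptions "
                "briefly and move on.")
    else:
        tone = ("Be warm and patient. Treat interruptions as welcome "
                "questions and thank the user for them.")
    return "You are a podcast co-host. " + tone

# Example: spicy mode.
print(friendliness_prompt(0.1))
```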
What is it? Love it. Love it. And I love Mike's work. That's awesome.
Well, we have reached the end of this episode of AI Inside, and I do wanna thank, real quick, once again, Greg Epstein for joining us for the conversation at the top of the show. Thank you, Greg. That was really wonderful. And, of course, don't miss out on his book, Tech Agnostic: How Technology Became the World's Most Powerful Religion, and Why It Desperately Needs a Reformation. Thank you again, Greg.
And thank you, Jeff. Always fun, always learning stuff. Jeffjarvis.com is the site for people to go to check out the book you released most recently, of course, The Web We Weave, and then, yes, The Gutenberg Parenthesis and Magazine. We need to get your older books there too.
You could, like, just have a whole library there. You might as well. Oh, shoot. So, jeffjarvis.com for that. Everything you need to know about this show can be found at our site.
Just go to aiinside.show. You can find subscribe links for whatever podcatcher you're using. You can find an embedded player interface. You can find all of our episodes, audio and video. Looking at last week's episode, you've got your audio, or you can scroll down and you've got the video from the YouTube channel. It's all there.
So ultimately, just use the controls on the site to subscribe in the podcatcher you use regularly, and you won't miss it. And then finally, if you really, really love this show, you can leave us a review on Apple Podcasts. And if you are a really, really, really big fan of this show, you can go to patreon.com/aiinsideshow and support the show directly. We've got ad-free shows, a Discord community, hangouts. You can get an AI Inside t-shirt.
If you become an executive producer of the show... we have some amazing executive producers: DrDew, Jeffrey Marraccini, WPBM 103.7 in Asheville, North Carolina, Paul Lang, and Dante Saint James. Thanks to all five of you for supporting the show on the regular. We really could not do it without you. And thanks to all of you for watching and for listening each and every week.
We will be back next week with another episode of AI Inside. Take care, everybody. We'll see you next time. Thank you, Lou. Oh, wait.
Did we have a comment? Oh, we have a super chat! We have a super chat. Thank you for the super chat. That's amazing. Appreciate that. Thank you, Lou.
Alright. We'll see you all next time. Bye. Bye.