Jason Howell and Jeff Jarvis discuss NVIDIA's GTC conference with CEO Jensen Huang's keynote on tokens, the Vera Rubin superchip, and the intersection of AI and robotics, plus more!
Support the show on Patreon! http://patreon.com/aiinsideshow
Subscribe to the new YouTube channel! http://www.youtube.com/@aiinsideshow
Note: Time codes subject to change depending on dynamic ad insertion by the distributor.
NEWS
0:02:23 - Nvidia announces Blackwell Ultra GB300 and Vera Rubin, its next AI ‘superchips’
0:22:37 - Nvidia and Yum! Brands team up to expand AI ordering
0:25:49 - Google brings a ‘canvas’ feature to Gemini, plus Audio Overview
0:31:08 - Gemini 2.0, Google’s newest flagship AI, can generate text, images, and speech
0:34:42 - People are using Google’s new AI model to remove watermarks from images
0:36:06 - Google plans to release new ‘open’ AI models for drug discovery
0:40:36 - EFF: California’s A.B. 412: A Bill That Could Crush Startups and Cement A Big Tech AI Monopoly
0:44:20 - Ben Stiller, Mark Ruffalo and More Than 400 Hollywood Names Urge Trump to Not Let AI Companies ‘Exploit’ Copyrighted Works
0:49:11 - Anthropic CEO floats idea of giving AI a “quit job” button, sparking skepticism
0:52:19 - People say they prefer stories written by humans over AI-generated works, yet new study suggests that’s not quite true
0:57:36 - AI ring tracks spelled words in American Sign Language
Learn more about your ad choices. Visit megaphone.fm/adchoices
[00:00:00] This is AI Inside, episode 60, recorded Wednesday, March 19th, 2025. NVIDIA's Economy of Tokens This episode of AI Inside is made possible by our wonderful patrons at patreon.com slash AI Inside Show. If you like what you hear, head on over and support us directly, and thank you for making independent podcasting possible.
[00:00:28] Hello everybody, welcome to another episode of AI Inside, the show where we take a look at the AI that is layered throughout so much of the world of technology. I'm one of your hosts on one side of the screen, if you're watching the video version, Jason Howell. On the other side of the screen, it's Jeff Jarvis. How are you doing? Hey there, hey there. I think usually you bring me in, don't you? I know. I was down here looking at my notes and I thought it was very rude of me not to be looking at Jason, but I apologize. But I do have nice hair, don't I? Yes, yes, I'm liking the part in your hair, yes, indeed.
[00:00:56] You know, as we test out new features on StreamYard, which is what we use to record and everything, I told you right before we went live, I was like, I'm going to use this new feature, and if I don't do my work in advance, it's going to surprise me. And it surprised me. But, you know, it's not a bad surprise. No, nothing wrong. We could be on the screen at the same time. It's okay. Yes. Good to see you, man. Good to get together and talk about the world of AI.
[00:01:22] Before we get started, I just want to throw a huge thank you to our patrons, patreon.com slash AIinsideshow. And I want to call out patron of the week. It's Brandon Kester. Thank you so much for supporting us, Brandon. We do this each and every week. We call you out as a thank you for your support, patreon.com slash AIinsideshow. Also, if you watch us live, which we have a lot of people that tune in live, make sure and subscribe to the show, the podcast anyways, AIinside.show.
[00:01:52] Do that. You won't miss any episodes, even if you happen to miss the live recording. And with that, all of that out of the way, it's time to dive into our news. And Jeff, so, okay, so I think the big news, if we had to pick one thing, was that NVIDIA had their GTC AI conference. It's an annual conference, almost guaranteed to get big news, right?
[00:02:17] Jensen Huang keynote, showing off lots of new chip advancements with NVIDIA, which is arguably one of the most important, relevant hardware companies related to AI and software. And you followed the keynote. I did not watch the keynote top to bottom, but you mentioned in the notes here that you did and that you've taken notes. What were your thoughts? The funny thing is, watching a Jensen Huang keynote is now key entertainment for me. Yeah?
[00:02:46] Oh, it didn't use to be. Well, the last three I've watched have just fascinated me. In part, I mean, on the superficial end, because of his showmanship, he's amazing. He came out this time and said there was no script. I don't believe that. But two and a half hours of solid presentation, of solid information. And it's really impressive.
[00:03:10] So what struck me, Jason, was that the first one I watched, two ago, and I think you and I watched it, and what amazed me was the scale. He was showing how much there was in a Blackwell chip and then how big it was in a compute center. And he's just building scale upon scale upon scale. All right? The last one, what fascinated me, was his discussion of digital twins.
[00:03:40] And my joke was that there is a matrix, but we're not in it. That there's a matrix out there that is constantly looking at alternative futures. Which is really fascinating when you think about it. So whether you're a car or a factory, it's constantly testing all these things. And then he also, pardon me, talked about what it takes to train these digital twins. And how he has to get enough data because there's not enough data, right?
[00:04:09] So now we lead to this week's keynote. And the whole opening spiel was about tokens, tokens, tokens. Everything is a token. And I thought, hmm, that's interesting. But he's trying to recast the world of data and information and archives into an economy of tokens. That's my wording, not his.
[00:04:37] And so as he's talking about the stuff that he does here, it's all about generating tokens. And there's a factory for tokens. That's what you're building. It's a factory for tokens. So what fascinated me about that in turn was that it kind of leaves reality. That now it's all about synthetic data and how do you generate the tokens that will in turn generate the output you want.
[00:05:06] How do you do that efficiently? How do you do that at scale, swiftly, with the right amount of energy, with the right transmission speed within all this? And so that was the – he never said – I mean, the opening theme was, you know, stentorian narration about tokens. Tokens here, tokens there, right? So you know what the theme was going to be. But he didn't get explicit about it in that sense. He never uses a two-by-four.
[00:05:36] So he said a computer is no longer a retriever of files. It is a generator of tokens. Oh, let me think about that. Yeah, a computer is no longer a retriever of files. It is a generator of tokens. That's interesting. You're going to have a factory. And what the factory will build is a factory for – you're going to have a real-life factory. Then you're going to have a factory for AI. And the AI factory is generating tokens for that digital twin, for those what-ifs.
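For listeners who want that "generator of tokens" line made concrete: a token is just an integer ID that the model reads and writes, and generation is a loop that emits one ID at a time. Here is a minimal Python sketch with an invented five-word vocabulary and a stand-in for the model; nothing in it reflects any actual NVIDIA system, it only illustrates the idea.

```python
# Toy illustration of "a generator of tokens": a language model
# doesn't retrieve files; it emits one token ID at a time, each
# conditioned on the IDs that came before it.

VOCAB = {0: "the", 1: "factory", 2: "generates", 3: "tokens", 4: "."}

def toy_model(context: list[int]) -> int:
    """Stand-in for a neural net: deterministically picks the next ID."""
    return (context[-1] + 1) % len(VOCAB) if context else 0

def generate(n_tokens: int) -> str:
    """The generation loop: append one token ID per step, then decode."""
    ids: list[int] = []
    for _ in range(n_tokens):
        ids.append(toy_model(ids))
    return " ".join(VOCAB[i] for i in ids)

print(generate(5))  # the factory generates tokens .
```

A real model replaces `toy_model` with a network that outputs a probability distribution over the whole vocabulary, but the token-at-a-time loop is the same shape, which is why throughput is measured in tokens per second.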
[00:06:05] And so I'm waiting for somebody to say, well, tokens are the new oil or gold, right? Right? Right. He didn't say that. He's too smart for that. But it becomes really interesting. He talked about one effort. He had one demonstration of something. I forget what it was exactly. And it had to try to get to what it wanted. He talked about wasted tokens. It was trying too hard.
[00:06:30] He made a point of saying that trillions of bits of information are used to generate one token, right? In the sense that it has trained on all this information. And in the end, it calls on any amount of that just to get one token. And then you get how many tokens. I didn't get a good sense of the scale of tokens in terms of an individual task.
[00:06:55] But as he talked about the new chip, where that goes, you're going from a 100-megawatt factory. You'll end up with 12 billion tokens per second. But that's what it's generating, right? I think we thought of tokens before as what the training yielded.
[00:07:25] And now it's about generating these tokens and that value. And as usual with him, I can't get my head fully around it. So that's the kind of really high-level view. He also talked about there's a new Blackwell CPU. Then he announced that the next CPU is going to be named the Vera Rubin. And it will have 1.3, no, 1,300 trillion transistors.
[00:07:57] And that it's a 900x increase over, what's her name? Grace Hopper. The Hopper, right. Two times faster than the Blackwell chips, I think I read as well. So, yeah, you're going from the Hopper to the Blackwell. Well, next to the Vera Rubin, he snuck in there that the next generation of chips will be named after Feynman.
[00:08:24] So Vera Rubin was a discoverer of dark matter. And her grandchildren were in the audience for this honor of having the chip named after her, which is pretty cool. Indeed. Then, finally, he also introduced a new – I hesitate to call it a desktop machine, but a 20-petaflop DGX station that will soon be on your desktop here. So we can just podcast at amazing speed. Yeah.
[00:08:54] Yeah, AI PCs, but for podcasting. Right, exactly. Because there's got to be a fine use of all those petaflops. He talked a lot about photonics, trying to get faster transmission within racks. He then talked a lot about robotics. He's going to keep on, I think, doing more and more robotics. Well, I mean, and that makes a lot of sense. Yes.
[00:09:20] Because if you really kind of extrapolate this out into the future, the intersection of everything AI and robotics seems to just become one as far as I'm concerned. And so he called robotics embodied AI. Yeah, there you go. That's a great way to put it, actually. Yeah. AI is in there. And then he talked about the way to train on an ongoing basis, this robotics, is verifiable rewards.
[00:09:49] So he said what we need is laws of physics, and we need a new physics engine. So at the end, he announced that DeepMind was doing a deal with Disney. And then out came the stupid little robot, which was just a gimmick. But probably seems to be, yeah, at this stage. But I found that interesting. So that's my report on this. For those of you who are really into chip details, sorry, I'm not good at that. Yeah. I don't know. And rack details. And he spends time about these things. And I don't know.
[00:10:19] The funny thing is, it's ridiculously powerful. It can do things I can't even imagine. But the consumer in me, the gadget consumer in me says, I think I want one. I wouldn't know what to do with it if I had it. I couldn't afford it. And by one, are you talking about the AI PCs? Yeah. Is that what you're talking about specifically? Yeah. Because they've got the GPUs and everything. And even that is really abstract for me.
[00:10:47] I have no idea or interest or knowledge. I know a lot of people who do, who really care about the GPU thing. But I don't. And I've just got a Mac studio here. And it's perfectly fine. But I look at the AI-focused PCs, the DGX Spark, the DGX Station. And these are not going to be inexpensive machines. I think in January when they first mentioned this, when it was known as Project Digits, prior
[00:11:13] to this kind of rename that they've announced, I think the lower tier model of that, which is DGX Spark, was going to be somewhere starting at like $3,000. And I think it is a really interesting question from a general consumer perspective, which is certainly where I come from. And I'm assuming where you would come from on this too, is like, does this machine, like, how do I use it as a general consumer? Or is it just not meant for me?
[00:11:42] You know, maybe it's meant for researchers. Maybe it's meant for people who are building things that are far outside of my capability. But same as you, like, I don't know. I think it'd be neat to have one. What would I do with it? I have no idea, but I'm sure I'd figure it out. The, um, computer scientists of the world. I don't know what he called them, uh, programmers or whatever. He said every one of them is going to have a programming assistant. So I think it's for them. It's certainly for researchers. Yes.
[00:12:09] I'm, um, uh, now working, as you know, at Stony Brook and that's a STEM school with major computer science. And I can see those departments wanting lots of these because the way I would imagine this is useful, and I don't know, is that it's almost going back to the old, um, days of buying time on somebody's mainframe, when you didn't have a computer and you had to buy time on somebody else's. Right.
[00:12:37] And so now, uh, I think this gives you something on your desktop that you can work with locally, uh, not at the same scale, but, you know, kind of work up to making it worthwhile to use the higher end, uh, racks. And yeah, another, another kind of thought that I have around this is so much of this is rapid development right now.
[00:13:02] How fast do you reach the point to which, like, do you reach a point that we, that we get to with our computers right now where we're like, eh, I bought this computer three years ago and now it's just, you know, really slow. And I feel like I need to upgrade and like, is that the same kind of turnaround for an AI PC? It's already doing so much, but I mean, you know, if you're a researcher and you're really taxing it and pushing it, do you, is this the sort of thing you have to replace every two or three years? I don't know.
[00:13:28] So, so Huang made a joke, uh, on stage saying, um, that, uh, the Hopper chip, which was prior to the Blackwell, the Hopper chip, he said, oh, you know, it's still okay. You can still do some stuff, but, uh, you know, my, my salespeople are getting all mad at me now cause they can still sell them, but you know, uh, so, you know, it's the economics of this. Yeah.
[00:13:55] So yesterday, um, but what's interesting is they throw in reference to being more efficient, more energy efficient, but you use that energy efficiency not to save energy. Oh no, to do more compute, uh, to throw in more stuff, to still use the same number of megawatts. Um, and so he's constantly selling scale. It's his own Moore's law. Almost. Uh, he said at some point, this is the most extreme scale-up the world has ever, uh,
[00:14:24] no, uh, I don't know. Uh, but, um, 130 trillion transistors in the Grace Blackwell. Uh, and he said everything, um, everything in the machine is T, T for trillion. Yeah.
[00:14:51] And then as, as a non chip guy, like I'm not, I'm not a GPU guy. I'm not a chip guy. You know, we, I think we're, we're both in the same category here. As far as that's concerned, I hear these numbers and it's really hard for me to plant them into any sort of sense of scale or reality. It just sounds massively complicated and, and no, and no doubt incredibly capable. Like I don't doubt it at all.
[00:15:16] It's just, it's hard for me to plant myself in some sort of field of, of understanding when we're talking about processors and systems and chips that do that much compute, that are capable of that much. It's just so beyond my comprehension. I'm trying to look at my notes for, for one thing here, though, because what, what might freak me out a little bit is we're already there. We've been there for some time at this level of complexity.
[00:15:44] It is impossible to imagine, um, explainability. We don't know how, I mean, one thing that strikes me again and again and again is that I heard this from, from Ray Kurzweil on last week's, um, Intelligent Machines, is this, well, we don't know how they work, which is a weird admission. We've been hearing that for years about AI. That's the, the really intriguing thing about this, really under the hood. We can't point at a line of code and say, this is why it did that. It's absolutely not. It's purely elusive.
[00:16:14] Uh, and when you add in the randomness, it gets even more the case. So, so the complexity, the speed, scale and complexity of this is, and this is what freaks out the doomsters. Well, we don't know what it'll do. We can't control it. Yes, you can. You got a plug. You pull it. Stop. Stop. Um, uh, but it, it confuses us in ways. My friend, David Weinberger, who wrote the book, uh, Everything Is Miscellaneous, and he's working on a new book now that I read the beginnings of.
[00:16:40] Um, uh, he, he's really good at explaining this stuff in terms of, uh, letting go of our presumption of explainability. Uh, you know, in one of his books, David talked about how an accident is, uh, I'm paraphrasing him badly, but basically an accident is just something we can't explain. We call it an accident, right?
[00:17:07] Well, in fact, there were factors that led to whatever it was that happened that we can't explain, so we call it an accident, right? So life is like this. Life has our own brains, how we operate, why things happen. That's all unexplainable, but we thought computers were explainable. We thought we could point to that line. We thought we could get that answer. And we talked about this a few weeks ago, when you get to this, this level of approximation, that's where this world operates.
[00:17:32] And it's, it's freaky for our brains because even though that's how we operate, we thought the computer operated differently. Now the computer is operating, in fact, more like us. I don't think it's human. I don't think we should be going that way. I don't think it's artificial general intelligence. I don't think it's super intelligence. I think that's all BS. But that aside, there is in a neural network, we think, more of a similarity, and it operates more approximately. It operates by association and it operates by these other ways.
[00:17:58] So now this scale is just really impressive and it's going to get only more so and more so. Oh, he, his three principles. So they have Halos. He called them safety assessments against diversity, transparency, and explainability. Okay. But every line of code, and he mentioned, you know, however many billions of lines of code they have. Every line of code is checked against as a safety assessment.
[00:18:26] And it's one of those cases where I'm thinking, how? How? Against what outcomes? Against what bad uses? Against what mistakes or accidents? I don't know how they do that. But anyway, so that's my Jeff-goes-to-summer-camp moment. I love it. You know, and it was in San Jose and it was jam-packed in the stadium, right? It's just absolutely huge. And it's a developers conference.
[00:18:54] And these people there, the IQ level in that stadium, you can just imagine. Talk about trillions. Yeah. And the nerd level. Yeah. For sure. The right people at the right time. You know, the way I want to be there, but I'm not sure what that really does because I can watch it on the, what I'm really watching is his showmanship on this. Yeah. Anyway, he got a little pissed off a couple of times when something didn't go the way it was supposed to do. They had one big screen thing.
[00:19:25] And somebody said, give me a human. I want a human, which was a big laugh line for the audience. Like when tech behind the scenes wasn't working, not related to AI, but like driving the thing. Yeah. Isn't that interesting? The AI can get a lot of things wrong and we'll excuse it. But if this happens, it's inexcusable. He pulled up two laser-to-electric connections, right? For the photonics.
[00:19:54] And he picks them up and they get tangled. And you just get a little glimpse of him as a boss. Oh, they were doing this. Oh, thanks a lot. You know, he's growling. So I imagine afterwards somebody's saying, oh, jeez, I got it. Oh, no. You left those wires tangled. Because he strikes me, and I've been fooled in the past by these moguls, you know, he still strikes me as a smart, decent, nice guy. I don't know whether they are or they're not. This is a show. But it was. It was.
[00:20:23] So anyway, those are my notes. That's awesome. I love your perspective on that. Thank you for the fun. I did not watch it. I just read through articles to understand. For folks out there who are watching, it's two and a half hours. It's a major commitment. Yeah. But if you're into this at all, it's really Steve Jobs-scale. Yeah. Right? Steve Jobs. He's got a charisma to him. Well, he does.
[00:20:52] Jensen has a total charisma to him that not all big tech CEOs have. You know, Sundar Pichai gets on the stage. I like the guy. Yeah. You know, and I like some of the choices he's made and everything. But there's not a whole lot of, like, engaging, pull-you-in charisma the way Steve Jobs had. I see Jensen Huang on a similar kind of level. I think so, too. And I think there's people who are on top of these companies who don't know their stuff. I mean, Sam Altman's not a developer.
[00:21:21] Elon Musk doesn't really know his stuff. But I get the sense that Huang knows his stuff. Oh, yeah. I do, too. And he joked yesterday. He said, oh, you come here, you get the math. You know, and he's going to blow your mind with math and how he does it. And, you know, he has a sense of the scale and what it's building and what we know. So, yeah. I'd love to meet him someday. Anybody who knows Jensen Huang out there, I'd love to have him on the show. But I somehow doubt that we're up to a scale.
[00:21:50] You never know, Jeff. You never know, though. So we've got a guest around the corner. We do have a major guest coming up. Of a scale that, you know, I'm super proud of us for making that happen. So that's going to be in a couple of weeks. Still aren't really talking about exactly who. But you'll see. You'll know. You'll see. And it's going to be a lot of fun. Before we get off the NVIDIA train, I do want to mention real quick, this doesn't have anything to do, I think, with their announcements at the event. But you put in also that NVIDIA and Yum Brands.
[00:22:18] Well, I mean, we talk about major, life-changing, huge, fundamental things going on that change the nation and the culture and society and the whole future. For example, Jason, what might this one be? Well, yeah. I mean, it doesn't get any bigger than fast food. No. It doesn't. I mean, fast food's important. It touches all of our lives, whether we want it to or not. And NVIDIA and Yum Brands.
[00:22:44] Now, Yum Brands, you might not instantly recognize them, but you know they're kind of – what do you call them? They're brands below them. Yeah. Yeah, I guess the brands within Yum Brands. Taco Bell, KFC, Pizza Hut, Habit. Partnering with NVIDIA to integrate AI into its Byte by Yum platform to do things like – okay. Yeah, doesn't mean a lot to me.
[00:23:11] But voice-automated order-taking, drive-through management, order accuracy checks, analytics, which then, you know, okay, fine, AI analytics. But could you imagine pulling up to the drive-through and having some sort of – I don't know, AI-automated service? You know, we're already – when we go into a restaurant of this type, we're already presented with a display that we bypass talking to someone by the counter.
[00:23:39] We just go to the display and punch it in. I suppose this is the next phase of that. You're just using your voice. Yeah, I think that – was it Wendy's and McDonald's tried this, and it didn't work so well? McDonald's did. Because also, you know that every jokester on earth is going to try to make it explode. Does not compute. Oh, sure. Right. And then hold it to account when it gets the order wrong or whatever. Yeah. And the poor person behind the counter is just going to be – that's kind of what I was thinking too. Yeah.
[00:24:10] But to me, it's fascinating that they called on NVIDIA. I mean, to me, you'd think just some little startup could work on this, and that's fine. But it's NVIDIA working on this. So odd partnerships, right? Disney and a physics engine and yum brands and reordering. NVIDIA is everywhere. Yeah, it is. NVIDIA is everywhere. You mentioned McDonald's playing around with this.
[00:24:33] I think that experiment resulted in ice cream topped with bacon and some orders put through where hundreds of dollars worth of chicken nuggets were ordered, not intentionally. So, you know, as with everything AI, it can go – it can do really cool things. It can also totally mess it up. So, by the way, yesterday or today, the stock – yesterday, the stock went down before the talk. Today, I'm just looking it up right now. Now, it's up 1.74%.
[00:25:02] And everything in tech has been just smashed. Yeah. And NVIDIA, you know, ended a honeymoon recently. But I think that what's happening here is that people continue to be impressed. What they're doing. Yeah, indeed. Well, what they are doing is indeed impressive. And – He said that his data center infrastructure revenue will hit $1 trillion by 2028. Oh, another trillion. Another trillion. Another T. Jensen T. Huang, I think it's going to be now. T. Huang, yes.
[00:25:30] My middle initial is T. I put the trillion in T. All right. We're going to take a super quick break. Then we're going to talk about non-NVIDIA news, including Google, which has a bunch of news, actually. We'll start with that after this break. Trust isn't just earned. It's demanded. And whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program,
[00:25:57] proving your commitment to security has never been more critical or more complex. That's where Vanta comes in. Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks, like SOC 2 and ISO 27001, centralized security workflows, complete questionnaires up to five times faster, and proactively manage vendor risk. Vanta not only saves you time, it can also save you money.
[00:26:22] A new IDC white paper found that Vanta customers achieve $535,000 per year in benefits, and the platform pays for itself in just three months. Join over 9,000 global companies like Atlassian, Quora, and Factory, who use Vanta to manage risk and prove security in real time. For a limited time, our audience gets $1,000 off Vanta at vanta.com slash AI inside. That's V-A-N-T-A dot com slash AI inside for $1,000 off.
[00:26:54] Everyone's talking about AI these days, right? It's changing how we work, how we learn, and how we interact with the world at a tremendous pace. It's a gold rush at the frontier, but if we're not careful, we might end up in a heap of trouble. Red Hat's podcast compiler is diving deep into how AI is reshaping the world we live in. From the ethics of automation to the code behind machine learning, it's breaking down the requirements, capabilities, and implications of using AI. Check out the new season of Compiler, an original podcast from Red Hat. Subscribe now wherever you get your podcasts.
[00:27:23] Google shared a gaggle of Gemini news this past week, and Canvas is one of the new announcements. You know, real quick before we get into this, AI companies really are not that creative when it comes to naming things. They all jump on the same bandwagon. Yeah. Yeah. But everybody has a Canvas. I know that there are other examples. I didn't research it before the show, but I know there are other examples.
[00:27:52] I mean, deep research. You know, they all have a deep research product. I don't know. I guess as a consumer, it makes it easy to know what you're doing from each direction. And they're kind of jumping in on Canva as a brand. I suppose so. Yeah. That's kind of a Canvas thing. Yeah. But yeah. Yeah. But this is a different style of Canvas, of course. What this is shares a lot more in common with ChatGPT's Canvas tool, which we have talked about in recent episodes,
[00:28:20] that is essentially kind of an interactive workspace for working with your writing projects, with your coding projects. Instead of just having a single chat-based interface where you kind of plunk it in and it gives you your output. And then, you know, like in my case, when I'm working in perplexity a lot, I might get my output and then I have to copy and paste that, move it over into a notepad, and then I work with it from there. And this is more like an integrated workspace for that sort of thing.
[00:28:47] It's a more collaboratively laid out productivity suite, if you want to call it that. And so everything is kind of integrated into one space. So if you generate a big block of text or a big block of code, you can highlight a portion of that and say, for this part, this is what I want. And then it will do whatever it's going to do and then replace it within – in line with what you're working. I think that's really useful. I haven't really relied upon a Canvas solution yet, though.
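The highlight-and-replace workflow described there can be sketched as a plain string operation; `rewrite` below is just a placeholder for whatever model call a real Canvas-style tool makes, so treat this as the shape of the feature, not anyone's actual API.

```python
def rewrite(selection: str) -> str:
    """Placeholder for the model call; here it just uppercases the span."""
    return selection.upper()

def canvas_edit(document: str, start: int, end: int) -> str:
    """Regenerate only the highlighted [start:end) span, keep the rest intact."""
    return document[:start] + rewrite(document[start:end]) + document[end:]

doc = "Draft intro. This sentence needs work. Draft outro."
print(canvas_edit(doc, 13, 38))  # only the middle sentence changes
```

The point of the pattern is that the surrounding text never round-trips through the model, which is exactly the copy-and-paste-into-a-notepad step a plain chat interface forces on you.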
[00:29:16] But I think I'm kind of getting to the point to where I want to start doing it because I realize how it could be useful for me in writing. Yeah, I mean, I still have the ego of a writer that I want – it's mine! I want to write it! I don't want the machine to write it! Right now, I'm getting toward the end of my Linotype book, the first draft, and it's very drafty. And Mark Twain is a character who comes back and forth throughout. And at the end, I'm talking about his bitterness about technology. He was bankrupted by a competitor to the Linotype.
[00:29:45] And you can see this in A Connecticut Yankee in King Arthur's Court and then an unfinished book called No. 44, The Mysterious Stranger that he wrote. So I asked it. I explained straight out and said, I want you to do this. And it gave me back an okay 12 paragraphs. But nothing in here that I would say I want to steal. Now, two weeks ago, I think I talked about using the high end of perplexity and their
[00:30:12] deep research about Morse code to Baudot code to ASCII and other implications. That's right. It was really well done. It was really cool. So I went back to – so this was okay. It was pretty good. I took the same prompt into Perplexity, and this time it wasn't so good. It was more like a sophomore essay. You just never know what you're doing. You just never know. You just don't know. Yeah. Yeah, I was – when I was searching for – what happens the way I write this book is I've read a lot before.
[00:30:42] But then when I get to writing the section, I end up doing a little – I go back on. I research more. And then I find something. Then I find a link. And then I find another essay I want to get. And then I ask for it from the library. And no, no, no. On the topic of Twain and technology and Connecticut Yankee in King Arthur's Court, it was interesting. The Google search was filled with cheater high school essays, college essays. Oh, really? Yeah. All those essay services.
[00:31:11] Because this is a question the students are asked to write about again and again and again and again. Sure. So like a third of the results in the first four screens were these essay services. Oh, yeah. That's so interesting. But they're going back. You're like, no. I don't need that. I don't want that. Right. That's just polluting the data stream. Yeah. Diluting it. Maybe not polluting it, but diluting it. It's kind of, yeah, resampling and resampling. And yeah. The textpocalypse. The noise. Yep.
[00:31:41] Textpocalypse. That's exactly it. Yeah, you're right. Yeah. I mean, I'm interested in using this, just the kind of integrated approach, be it from Google or ChatGPT or whatever. I just, it's not just text. It's also images. Right. Is Canva also, or Canva is just, wait a second. I'm confused now. Is this where I get nuts on their brands? Yeah. Canva is completely different. Canva, Canva, the site. Oh, Canvas. Okay. Well, go ahead and explain Canva. Go ahead.
[00:32:10] Well, I was just going to say Canva is a very different service. You know, Canva is kind of like a Photoshop alternative online and meant around digital graphic design. Yeah. So that, you know, very, very different. This Canvas approach is more like an integrated suite that you work within for writing and coding. Very, very different.
[00:32:32] I don't know if the Canvas approach supports images yet, but I do know that Gemini 2.0 Flash was announced last week and it does. It has new capabilities for image and audio generation. But I don't know that they're necessarily one and the same. You know, often these things kind of release at similar times because this is an example of how you integrate that
[00:32:59] or whatever and I'm not seeing anything in here on Canvas about Gemini 2.0 Flash. The other interesting thing is the part of the Canvas announcement is that audio overview, which is the killer app out of Notebook LM, is now being pulled into Gemini and Canvas. Of course. Or into Gemini at least. And Notebook LM, we had a story about three weeks ago where Notebook LM almost got aborted because other departments were jealous of it.
[00:33:29] And now they're, now they're canonizing it. Exactly. Give me your, give me your features. Give me your best stuff. Yeah. But you know, it's all Google, it's all Alphabet, so I guess that's perfectly okay. But I really like Notebook LM and I want it to maintain a distinct brand because I know also that it's, it's RAG, retrieval-augmented generation. It's going to go off of just the material I give it. But you know, I have more faith in Notebook LM than I do in a large model on its own. So I hope they don't mess that up.
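The appeal Jeff describes there, a RAG system answering only from the material you hand it, can be illustrated with a toy sketch. This is purely illustrative, not how NotebookLM or Gemini actually retrieves; the keyword-overlap scoring is a stand-in for real embedding search:

```python
def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank the user-supplied documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def grounded_prompt(query: str, documents: list[str]) -> str:
    """Build a prompt that restricts the model to the retrieved sources only."""
    context = "\n".join(retrieve(query, documents, k=2))
    return (
        "Answer using ONLY the sources below. "
        "If they don't contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "Baudot code is a five-bit telegraph code.",
    "Morse code uses dots and dashes.",
]
print(retrieve("what is baudot code", docs, k=1)[0])
# → Baudot code is a five-bit telegraph code.
```

The grounding lives entirely in the prompt construction: the model never sees anything except the documents you gave it, which is why a notebook-style tool can feel more trustworthy than a large model answering from its training data.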
[00:33:58] We should get Steven Johnson back on to talk about what plans they have over there. Well, let me write that down. Otherwise, it's gone. This is my organized friend, Jason. He keeps loose. If I don't write it down, it's gone in like five seconds. Let me tell you. I need an AI to organize random moments. It's coming, man. It's coming. Yeah. Well, actually, I mean, I should just say real quick, I've been reviewing the Nothing Phone 3A, which is kind of nothing's mid-range.
[00:34:28] Like they have the Nothing Phone 3A and then they have the Nothing Phone 3A Pro, which is the gray one here. And so it's a little step up, some improved cameras or whatever. But both of them have this button on the side that is tied to a feature called Essential Space. And basically what the idea is, is if I tap the button, it takes a screenshot and it gives me a little text field so I can add a note to it or whatever. If I double tap it and hold, or actually, no, sorry.
[00:34:56] If I tap and hold, it immediately kicks into screenshot and audio recording. So it's recording right now because I've got it held down and it will record until I let go. And then it throws that into the Essential Space. AI transcribes it. And then if I've mentioned in there, oh yeah, I really need to remember to blah, blah, blah this tomorrow at three. Then it will assign a reminder to it. You know, on my front screen, I've got a little widget with that reminder that appears there.
[00:35:25] And so kind of along this line, the more you use it, the more I use it, the more I turn to it to remember these things because I am the type of person that if I don't get it down somewhere tangible, it's gone. And so features like that, I welcome that. Maybe that's what an AI phone is. I don't know. But anyways, random tangent.
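As an aside, the flow Jason describes (transcribe a voice note, then pull out anything that sounds like a reminder) can be sketched as a toy parser. This is hypothetical illustration only, not Nothing's actual Essential Space implementation; the trigger phrases and time words are made-up stand-ins:

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class Note:
    text: str
    reminder: Optional[str] = None  # filled in when a reminder phrase is found

# Matches phrases like "remember to call the vet tomorrow at three."
REMINDER_PATTERN = re.compile(
    r"(?:remember|remind me) to (?P<task>.+?)"
    r"(?P<when>\s+(?:today|tomorrow|tonight)(?:\s+at\s+\w+)?)?"
    r"[.!]?$",
    re.IGNORECASE,
)

def extract_reminder(transcript: str) -> Note:
    """Scan a transcribed voice note for a reminder-like phrase."""
    for sentence in re.split(r"(?<=[.!?])\s+", transcript.strip()):
        m = REMINDER_PATTERN.search(sentence)
        if m:
            task = m.group("task").strip()
            when = (m.group("when") or "").strip()
            label = f"{task} ({when})" if when else task
            return Note(text=transcript, reminder=label)
    return Note(text=transcript)

note = extract_reminder("I really need to remember to call the vet tomorrow at three.")
print(note.reminder)
# → call the vet (tomorrow at three)
```

A real implementation would presumably hand the transcript to a language model rather than a regex, but the shape is the same: transcription in, structured reminder out, surfaced on a widget.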
[00:35:50] Gemini 2.0 Flash, though – I mentioned that's happened as well now. It can also work with external APIs. This is really meant for developers to start working on and integrating and everything. It's not going to be released to the public yet. It's seeded out to select early previewers. But what's interesting is that some people who have access, let me pull this up, have recognized that it's really good at removing watermarks from images. It's brilliant. Brilliant.
[00:36:21] I mean, look at that. This image, if you're watching the video version posted on Twitter, is just blanketed in random watermarks of different styles and shades and shapes and everything. And then the output of it is completely clear of it all. Yeah. Which is a really interesting kind of thing. I know that we've seen pieces of this before, but this is really effective.
[00:36:51] Like, surprisingly so. And Google, of course, is saying, no, you know. You shouldn't do that. Yeah, this is a violation of the terms of service. I'm sure once this gets a public release, maybe there's going to be guardrails to attempt to protect against this because it's considered illegal under U.S. copyright law to do this sort of thing, is my understanding anyways. But nonetheless, interesting stuff. AI a little too good at removing watermarks. And we thought that the whole point was to have watermarks added to AI.
[00:37:19] So it came from AI. But AI is going to be good at erasing whatever watermarks are put there. Probably including AI generated. You know, AI, whatever. I think so. The watermarks that are meant to show that it was AI generated. Oh, God. What an arms race that'll be. It really is. Absolutely. Yeah, interesting stuff. So and then the Google News is not over. But this definitely is of a different type of AI.
[00:37:48] Google announced the development of TxGemma. Open, in air quotes, "open" AI models designed to enhance drug discovery. And so basically what it can do, there's no real great image to show you here, but it can interpret regular text and the structures of therapeutic entities like chemicals, proteins, molecules. With the goal of streamlining the drug development process around that. Something that can be very costly. Something that can take a lot of time.
[00:38:18] And I know we've talked in the past about systems similar to this that really kind of scale down the amount of time that's required now using a system compared to what amount of time it would take to do this without AI. And I mean, we're just reaching completely different economies of scale as far as that's concerned. I once gave a keynote to a major drug company in Switzerland some years ago. And I didn't realize how much pharma.
[00:38:46] Pharma is all about finding a molecule, right? It's a business of molecules. Like NVIDIA is a business of tokens. Pharma is molecules. And those of you who know this stuff know that and say, well, how stupid, Jeff. You didn't realize that. But I never thought of it in that way. And so it's discovering a molecule, understanding what use it could have, designing tests around it. Yeah, this technology is going to help, I think, immensely. And that's the best part of AI.
[00:39:15] That's why I don't, you know, I get so mad at the people who overhype it and overdoom it. I think that level of abstraction is just ridiculous and stupid because we miss then the opportunity here to see what it can really do. And I think, you know, it gets cooties from that. You didn't put this in the rundown, but I just want to mention real quickly: Google also did a new health thing, Health on Google, where they're adding new healthcare AI updates for Search.
[00:39:46] And the company unveiled a new feature called What People Suggest, which uses AI to pull together online commentary from patients with similar diagnoses. A patient with arthritis will be able to look up how other people with the condition approach exercise, for example. That sounds really interesting. It also sounds a little risky. Well, yeah, grapefruit cured my arthritis. Yeah, this is what I did. It's fine. I saw a change. It works for me.
[00:40:16] In a time when we have challenges to science and medicine from the very highest levels of our society, I'm curious how this works. I've long seen the benefit of a wonderful service called Patients Like Me, which is, by the way, an odd brand. When I talk about this, they say, oh, what do you have? No, no, it's called Patients Like Me.
[00:40:40] I know people with MS particularly who find it invaluable to hear from others who've gone on a new medication and what their regimen was and their dosages and what their side effects were, knowing that everybody's different and every reaction is different. But it's really, really helpful. And this is also useful to pharma companies, once again, because their experience, if you can codify this and learn from it, I have to think that that's incredibly valuable data to help with this.
[00:41:10] So I see where Google's headed here, but it'll be interesting where it lands out. So Google and health. Yeah. Yeah. Yeah.
[00:41:18] Well, I mean, I think it's one of the obvious, really valuable directions for companies like Google to also be spending their time in, not just new ad models to generate more revenue and everything, but to really think about how these AI models can be used for something beyond what we're used to seeing, image generation, text generation, and go into.
[00:41:40] This can actually really make a huge difference on humanity and not in the tech mogul kind of brouhaha perspective of like, AI is going to change your life, but like actually change how research, how science is done to accelerate some of this stuff to a degree that is really helpful to humanity, I think. I'm interested in that. And I want to see more of it. Yep.
[00:42:10] Let's see here. California bill, AB 412, would require AI developers to track and disclose all copyrighted works used during training. That could be a lot of work, a lot of money, and that's exactly what the EFF is arguing here. It actually boosts big tech companies that can shoulder the burden and the cost of all of that.
[00:42:37] And stunts newcomers to the marketplace because they don't have those unlimited resources. They don't have the capability to actually do that.
[00:42:47] And so the EFF is arguing that actually, you know, whatever the intentions of a bill like this, what it ends up doing is it ends up firmly planting the large tech companies and creating more of a moat for the large tech companies that is impenetrable by the up and comers because they just have the resources to deal with a bill like this. Right.
[00:43:09] And I think it also touches the issues that I don't think have been settled yet about reading copyrighted material for training. Yeah. We've discussed this a million times in the show that I believe, and I'm not a lawyer, that that could well be found to be fair use and transformative, in which case to require me to do that is onerous and might make for dumber models.
[00:43:37] And so I think that EFF has a real good point here. So I agree with EFF on this. But Timnit Gebru, who I also admire greatly, on LinkedIn – I didn't put this link in. I'll put it in a second – said that this, the EFF take, is a trash take. Whether you're a small restaurant or not, you have to ensure that you're not stealing your ingredients. So why is this different?
[00:44:01] The idea that you shouldn't be expected to know what data you're using to train your systems and that doing it is an impossible task is so normalized now that it's hard to know that this has not always been the case. This is part of the Stochastic Parrots argument, and she's co-author of that, that when these models become too big, they become difficult unto impossible to audit and understand. The whole argument assumes that we need to have systems guzzling data and resources like nobody's business, also Stochastic Parrots, that size is not necessarily what matters.
[00:44:32] So she's really angry about it. This is straight-up disinformation, she says of EFF. If courts find that it is fair use, it will be because the OpenAIs, Googles, and Anthropics are spending lord knows how much money to make it so. So EFF, in this case, I don't think is standing up for OpenAI and Google and Meta.
[00:44:49] It's standing up for their competitors, and it's arguing that if you make this too difficult, they're not going to be able to provide a vibrant competition to the big guys. So we'll see. Yeah. I mean, that just illustrates just how complicated this situation is, right?
[00:45:12] Because on one hand, what the EFF is trying to do, as you mentioned, is trying to make it so that the big tech companies don't entrench themselves to push out up-and-comers and prevent them from having a business. But what Timnit Gebru is saying is that all of this, that doesn't matter. All of this stands to benefit the big players because they shouldn't have access to this data to begin with, end of story.
[00:45:42] And it's like, yeah, I don't know. I don't know how you – What's the right direction for all of this? It's going to end up on the courts. It may not be the best solution then, but that's where it's going to go. Yeah. Well, and speaking of – I don't know if this has much to do with the courts, but certainly I think Hollywood artists who penned an open letter to the Trump White House would hope that it goes there and that it rules in their favor.
[00:46:06] More than 400 Hollywood artists did write the White House targeting the US AI Action Plan. They're urging the administration to not roll back copyright protections in the face of what they see as AI companies blatantly accessing their creative output to train their systems. I mean, this seems to happen every once in a while. We have a big letter coming from Hollywood.
[00:46:36] But obviously they have – and I can understand. I can understand. Like if I'm a creative person creating things and suddenly, as we all are realizing, AI is at a moment to really place a lot of pressure and impact careers and livelihoods and all these things. So I can understand where they're coming from. But I'll be contrarian.
[00:47:05] I think if we're going to be using AI systems, it's in our interest as a society to have them be smarter, be better, and not just feed on junk, number one. Number two, the thing that I keep arguing is that AI companies have a right to learn like journalistic companies like podcasters. How are we doing this show? We read a bunch of stuff. We try to give credit for it, but actually I don't think we have done too much in terms of crediting which site we did. You show it. Yeah, well, you show it.
[00:47:34] Like the Axios story about Yum! Brands. Okay, so we credit it that way. Yes, we do. We show it, and I put it in the show notes. So we do credit there. Everywhere that the show posts. We probably could be better at crediting the bylines. But journalists don't do that either, right? They read each other. They come up with it. They rewrite each other. They do some added reporting. This is the way the world works. Yeah.
[00:47:54] And it's been a problem since the beginning of copyright, and there's some really good books about this, and I wrote about it in the Gutenberg Parenthesis, that when the framework of copyright discussion is piracy and theft, it then changes the entire discussion. As opposed to literacy and culture and being a contribution to culture, that's different.
[00:48:22] Nobody is saying that somebody should steal a book. I don't think. Well, Books 3 kind of does, but we'll get past that. Right. But if you do legitimately, if OpenAI has one subscription to the New York Times, I say that their machine has the right to read the New York Times. People will argue, well, no, not because of scale and so on and so forth. But this is what's going to go to the courts. Yeah.
[00:48:46] And the other thing I learned in researching copyright for the book is that copyright was not demanded by the creators. So all these creators stand up now. Copyright's for us. No. Copyright was demanded by the publishers and the booksellers because they wanted a tradable asset. They wanted the creators to alienate themselves from their work so that they could, in turn, resell it and profit from that. Yeah. And it was the industry that wanted it, not the creators.
[00:49:15] And I think that's important history here to understand the genesis of copyright as we have this discussion today. So, end of amateur historian. Well, no, I appreciate that. I think my comment about understanding where they're coming from is purely about the fact that when we've invested ourselves so much into something, we want to protect it. And you're a musician. And so you're a superhero. I mean, yeah. But I think it applies in a lot of different ways.
[00:49:43] If we've ever created something, we hope that we feel some sense of control over the thing we created. Whether they're right or they're wrong, I understand why they feel this way. I've built my life upon acting in movies or writing this content or whatever. And now there is a technology that really threatens what I have enjoyed. I'm speaking as them, by the way.
[00:50:09] What I have enjoyed or hung my hat on for decades. And now it's changing. Have you been to movies? Not that I know of. Oh, okay. I was going to say there's a whole sign of Jason. I didn't know. Nothing you've seen. No. No. The really, really stupid movies when I was in high school and had a camcorder. But that's about it. So anyways. I can understand the emotional kind of response. I do too. I understand it.
[00:50:37] But I think we've got to look at it dispassionately. Yeah. Okay. Well, fair. What do you think about Anthropic's CEO Dario Amodei during an interview at the Council on Foreign Relations, suggesting that advanced AI models could someday be given a, quote, quit button? And this is an interesting conversation.
[00:51:04] Because what it stirs up is, like, should an AI model or an AI chatbot or whatever be given the ability to determine if they wish to carry out a task or not? And what does that say about the sentience conversation that we apply to them, which is really what it's all about? It's like, what are we ascribing to these things when we say, oh, it should have the ability. It should have the choice. And it's an AI chatbot. It doesn't have a choice.
[00:51:32] It has what we give it. And so it's an interesting comment to make about the future of AI. Yeah, there's two parts to this, I think. On the one side, this is really a fancy way to get publicity for talking about a guardrail. Totally. Yes. Right? Because all the time now you try to get AI to do certain things, and it says, no, I won't do that. So that's the question. It's there. It's there. It's there. No, I totally agree. That's exactly what I thought when I read through this. It's like, wait a minute. We kind of already have this. They are guardrails.
[00:52:01] But on the other hand, what's interesting, and I just had to read for other reasons, a long essay, a doomster essay about AGI and superintelligence. And it's really technological determinism saying that once it's created, we're doomed because it can do anything. And I say, well, no, we pull the plug. You've robbed us of agency. The people who make this stuff still have agency with it. Humans have agency with it. We can decide how to make it. We can decide what to do.
[00:52:30] Yes, it's complex, as we discussed earlier. But you can look at the output, and you can say, no, stop. And maybe you can't fix it, and you've got to throw it away. That's possible, right? What was the name of that stupid bot Microsoft put out on Twitter? Tay. Oh, Tay. Yes. There was no fixing Tay. Tay was doomed. Tay's going to hell. Tay had to die, right? And so maybe that's what the quit button is.
[00:52:54] So it's interesting to say, and there's a lot of talk about trying to get alignment. Can you align these systems with human values? I think that's almost as much BS as AGI and ASI because of the discussion about guardrails, and you can't anticipate every bad use. But when you do anticipate a bad use, telling the computer that it is its responsibility at that point to quit. Yeah, sure. Makes sense.
[00:53:22] And I think it's a contrary way to have that safety discussion about, we've lost control of it. Right. It can decide. It can just hit the button when it doesn't want to listen to us and decide to blow up the planet or whatever the case may be. Whatever extension you want to take that. Interesting nonetheless. We are going to take a quick break, then we'll come back, round things out with a few quick stories, and that's coming up here in a second.
[00:53:52] All right.
[00:54:21] Last week we read part of the short story written by ChatGPT's unreleased creative writing AI. I think we both kind of felt a little kind of cringy. Yeah, I think so. This meta, what was it? Metafictional literary. Yes. Short story about AI and grief. I saw a note in here about a Guardian article kind of siding with the fact that the story was actually good. Yeah.
[00:54:48] And the Guardian is not chopped liver. Yeah. And you also mentioned Leo agrees. And I missed this conversation on Intelligent Machines, but it sounds like you guys talked about this and maybe Leo liked it. Paris and I thought it was junk. Leo liked it and we said he had no taste. But here's the Guardian agreeing with Leo. So we'll give credit where it's due. Yeah, indeed. Hey, you know what? Taste is subjective, right? That's the whole point. Everybody has a different idea.
[00:55:15] As far as what makes good writing, what makes good film, all these things. We're all going to fall differently. And I think what's interesting about this story coupled with another story that's in here, which is a study conducted by The Conversation, which I don't know that I've heard of The Conversation. The Conversation is great. It ties writers with academics to try to make academic work more ready for prime time. Okay. All right.
[00:55:43] Thank you for that because I was like, you know, I thought this report is interesting. I was like, I don't know that I've heard of The Conversation before. Anyways, this study presented participants, and how many of them were there? More than 650 people. With a short story that was written in the style of Jason Brown, but it was written by AI. Half were told of its true origins that it was written by AI. The other half believed that it was actually written by Brown.
[00:56:11] And the study found that participants who were told that they were reading AI-created text had a negative assessment of the quality of writing. And they said it was predictable. It was, you know, they were judging its authenticity, how evocative it was anyways. The study analyzed how that knowledge translated into consumer behavior.
[00:56:35] And what it found is that both groups were ready to spend money and time to finish reading the story, regardless of whether it was labeled as AI and they knew about it or not. They also spent no less time reading what was known to be AI-written versus the people who thought they were reading Brown's authentic work.
[00:56:57] And so it kind of calls into question, you know, we say, a lot of people say, and I think I follow this category too, I feel like I have good AI-dar when it comes to reading something written by AI. And immediately what I feel inside of me is that if I know that it's written by AI, I have less interest, I perceive less value. And this kind of, you know, calls that out and says, well, actually, consumer behavior doesn't seem to change even when the origins of that are known. What were your thoughts on this?
[00:57:27] Yeah, I think it, for me, this is weird. For me, this says as much about opinion polling and focus groups as it does about AI and public taste and culture. Okay. Because when you're asked the question, you're told this is AI, you know that your proper response is to say, well, that's going to be crap. Totally. And you know that's what's being asked. And they knew that when they were mentioning that to the participants. Exactly. That's what you're going to get. I mean, you're just going. It's the bias of opinion polling, right? Yeah. And I hate opinion polling.
[00:57:54] I often quote the late Professor James Carey who said that it, and I'm paraphrasing, preempts the public discourse it's intended to measure. And to me, this is an example of that. So AI has cooties. And you're supposed to not like it. So what's really interesting about this study is it still found a way by putting the money behind it and the time behind it. People wanting to finish the story said, well, actually, in terms of at least the attention economy, AI could still bring it home. Mm-hmm. Mm-hmm.
[00:58:25] Yeah. Fascinating. Yeah, it is. Fascinating what we'll find out over the years as far as the value, like the monetary value or that felt sense of like this means something to me, the value of creativity from a human versus creativity from a machine. And, yeah, I guess that's what some people are, you know, creatives are afraid of is, well, wait a minute. If a machine can be, you know, have value in its creativity, what does that say about mine?
[00:58:55] I don't necessarily believe that human creativity, the value of human creativity goes away even if machines are good at being creative. I just – No, not at all. Or creativity. And it forces us to ask what makes us uniquely us. You know, I wrote this syllabus that I'm not going to teach. Somebody else is going to teach at Stony Brook on AI and creativity. And the whole point of the course in the end is to examine creativity and examine your own expression. And what do you want to say? And does it help you say what you want to say? And where are the boundaries? And what's the proper use?
[00:59:24] I think these are questions for students to ask, not to presume the answers. Yeah. Yeah, fair. Totally fair. And then finally, Cornell University researchers have developed an AI-powered ring that uses micro-sonar technology to track fingerspelling, which is, you know, essentially spelling out words letter by letter in American Sign Language, ASL. And it's called SpellRing.
[00:59:53] And it's a little thing that, you know, just fits on the thumb. Here you go. If you're watching the video version, you can see, you know, this is obviously a prototype. It's got ribbons hanging out and everything like that. But it can translate in real time with accuracy between 82% and 92%. And that's pretty neat. It's one of those great cool uses of this. Now, this is for letters that are spelled out. I think the larger vocabulary of ASL, you know, would be a next stage.
[01:00:21] But it's fascinating to be able to translate that both ways. Mm-hmm. Yeah. You can imagine a robotic view of this as well. Mm-hmm. Oh, yeah. 100% you could. And, you know, what could it be used for? You know, maybe entering text into a machine or to smartphone or, you know, any number of, I'm sure, applications that I can't come up with off the top of my head right now. But, and then expanded, like you said, to full sentences.
[01:00:48] They also mentioned possibly expanding to capturing facial expressions, body movements. Pretty neat stuff. Yeah. Very cool stuff. I like that. And that is the end of this episode of AI Inside. We've, that's a wrap. We've reached the end. Thank you, Jeff, for, you know, all your work and your insight and for writing so many wonderful books. I always love this conversation. Yeah, me too. Me too. I really look forward to this show each and every week.
[01:01:18] And, and just real quick, I got to say also is that in the beginning of this show, it's not like I know so much about AI, but I finally have found a comfort in AI, I feel. And in the first, I'd say the first six to nine months of doing this show, I still had kind of like an uneasiness in myself. Imposter syndrome. I had it. Yeah. I didn't know how to, like how I, what are my opinions? What are my feelings? How do I talk about this topic?
And it's, it's getting a lot easier. And that's really why I enjoy doing the show with you. Yeah. It's making me more comfortable with the topic. And I think what matters, what I hope we demonstrate, is that this isn't just the realm of heavy-duty geeks. It's all of us, right? It's our language that gets used in it. It's our language that we can use to speak to it and hear from it, at least in generative AI. It affects every sector of society. And so we all need to be part of this discussion. And that's the point.
[01:02:17] That's the point. Indeed. Love it. The Web We Weave, jeffjarvis.com. You can go find the book there. Also The Gutenberg Parenthesis. Oh, wow. And Magazine. Yes. Soon to be an audiobook. Excellent. Can't wait for that. And you're not reading it. Oh, yes. Oh, you are reading it. Yeah, I read it. I've read my books. Yeah. You can imagine the poor producer. Jeff, can you take that again and slow down a little bit? I don't know.
[01:02:45] The poor producer and the poor Jeff, because that sounds like a very challenging. It's torture. Oh, is it torture? But I'd rather do it myself than that. Yeah, totally. And I love when I'm hearing an audio book and it's by the author. Yeah. It feels a lot more genuine and a lot more deliberate. And it feels right. So cool. Well, look for that. Jeff Jarvis dot com. I'm sure it'll appear there once that is available.
[01:03:11] As for this show, AI inside dot show is the place to be for all ways to subscribe to the podcast, to follow the show and many different socials. Also, you can find all the episodes, video, audio, everything is there. And yeah, including some reviews, which, hey, while you're at it, go on to Apple podcasts and leave us some some reviews, you know, let us know and let others know, because this is how we share knowledge of the podcast.
We'd love to continue to grow it. And then, of course, AI inside dot show. Sorry, patreon dot com slash AI inside show. There we go. That's what I actually meant. And you can go there and you can support us on Patreon. And we have a number of different levels at which you can support us. But that, you know, you get an ad-free version of the show. You get access to a Discord that's having more and more activity. I'm really enjoying that. And so many other things.
[01:04:09] You also have the ability to become an executive producer. Like, as you're seeing on the screen, if you're watching the video version, Dr. Dude, Jeffrey Maricini, WPVM 103.7 in Asheville, North Carolina, Dante St. James, Bono DeRick, Jason Neffer and Jason Brady. Yay! So many names. It's just awesome. And all of them on the screen. That's nice. All of them on the screen. I'm trying to think of ways to kind of make this even sweeten the deal even more. So thank you, everyone.
[01:04:38] You might as well be at Times Square. Yes, indeed. This is as good as we can do for right now, anyways. We get a bunch more patrons. Maybe we can actually do the Times Square thing. Yeah. Anyways, thank you so much for watching and listening. We appreciate you being here. And we will see you next time on another episode of AI Inside. Y'all are awesome. Thank you. See you later.



