Jason Howell and Jeff Jarvis discuss Anthropic's Claude 3.7 Sonnet, Chegg suing Google over AI summaries, Perplexity's new Comet browser, the UK delaying AI regulation, and more!
Support the show on Patreon! http://patreon.com/aiinsideshow
Subscribe to the new YouTube channel! http://www.youtube.com/@aiinsideshow
Note: Time codes subject to change depending on dynamic ad insertion by the distributor.
NEWS
02:50 - Anthropic launches a new AI model that ‘thinks’ as long as you want
12:36 - Chegg sues Google for hurting traffic with AI as it considers strategic alternatives
19:24 - Perplexity AI teases a new browser 'for agentic search'
20:36 - Dia, a new web browser from makers of the Arc browser, is taking aim at Google Chrome with clever AI features
32:26 - UK delays plans to regulate AI as ministers seek to align with Trump administration
32:57 - Daily Mail copyright screaming
33:20 - Kate Bush and Damon Albarn among 1,000 artists on silent AI protest album
35:50 - Accelerating scientific breakthroughs with an AI co-scientist
40:55 - Introducing Muse: Our first generative AI model designed for gameplay ideation
45:08 - AI ‘inspo’ is everywhere. It’s driving your hair stylist crazy.
52:03 - O'Reilly: The End of Programming as We Know It
Learn more about your ad choices. Visit megaphone.fm/adchoices
This is AI Inside, episode 57, recorded Wednesday, 02/26/2025, Agentic Browser Wars. This episode of AI Inside is made possible by our wonderful patrons at patreon.com/aiinsideshow. If you like what you hear, head on over and support us directly, and thank you for making independent podcasting possible. What's going on, everybody? Welcome to another episode of AI Inside, the show where we take a look at the AI that is layered throughout so much of the world, the technology.
I am no longer in the mountains of Colorado, although the cough that resides inside of my lungs is still apparently spending time in Colorado. I'm Jason Howell, one of your hosts, back home, back in action, joined as always by my friend, Jeff Jarvis. Good to see you, Jeff. Hey. Hey.
Hey. Boston friend. How are you? I'm doing awesome. I'm, yeah.
It's good to see you. I'm I'm actually a little bit out of my mind right now because in two days, I get on an a, an airplane, and I go to Barcelona for Mobile World Congress. Woah. I'm a little nervous about the time change thing. That always freaks me out.
When do you get back? I get back on Wednesday evening, the March 5. So programming note, our, AI inside will not happen next Wednesday. The next episode will be Thursday, March 6. That'll be the day after I get back from Barcelona.
I'm sure I will be bleary eyed and very tired, But we will talk about AI, and I'm sure there's some AI news coming from Mobile World Congress that would be part of the show as well. Oh, yeah. Yeah. But, yeah, I'm looking forward to it. Never been to Barcelona.
Have you? Very cold. No. I've, not been to Barcelona. No.
Yeah. So I'm gonna go eat some jamon or jamon or whatever you call it. Yeah. Yeah. Yeah.
A little pork. Yeah. Maybe some paella or something. I'm not quite sure. Anyways, looking forward to it.
Lots to talk about once that happens. Before we get started, real quick, thank you to our patrons who support this show each and every week, each and every month, actually. Patreon.com/aiinsideshow. John Garrison, there's a familiar name. John, thank you so much for your support and everyone who supports us on a monthly basis.
We could not do this without you. Also, if you happen to be watching live, because we are streaming this live, we do have quite a few live viewers, that that check out the show as we record it. Be sure to subscribe to the show so you don't miss it. If you miss the live stream, you know, you don't wanna miss the podcast. Right?
That's the beauty of podcasts. Aiinside.show. Go there, subscribe, and you won't miss any episodes regardless of whether you see it live or not. And with that, let's jump into the news, and we might as well start with Anthropic launching Claude 3.7 Sonnet.
Sometimes I see Sonnet, and I wanna say, like, Sonné or something. It looks like it needs to be Sonné, have a little bit of flair. But Claude 3.7. When Verizon came out, they didn't tell you how to pronounce it. I first thought it was VER-izon. Verizon. VER-izon. Verizon. Oh, no. Claude 3.7 Sonnet, which they are calling the first hybrid AI reasoning model. I know it's Sonnet. It provides quick, concise answers, but also more detailed responses based on what you need as a user.
And so that's why they call it kind of the hybrid reasoning model. A new thinking mode, which, let me see if it shows in the screenshot here for video viewers. You have your normal kind of prompt area, and then you have a little thinking mode area where you can switch between normal, which is kind of like your quick answer, what you're used to seeing, and then the extended mode, which lengthens out the context. You know, that's gonna take more time. Anthropic says that it's best for math and for coding challenges, things that might require a little bit more processing, a little bit more time to reason through the problem, that sort of thing.
So Anthropic says that they have something new here, which is a model that can do both. Whereas other models, you know, as you probably know very well, Jeff, and anyone listening, these models all seem to have like a personality of their own. You use this for this. You use that for that. And you need to kind of, in your mind, think about what you're doing and then find the model that best fits what you're doing.
And you only get that through a lot of experience and everything. And I think Anthropic is trying to kinda tackle that to say, no. This can be all things to all people. Right now, as a user, you have to determine which which mode you're in. But eventually, we want it to recognize what mode is most beneficial for what you're asking and do the switching automatically for you.
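For the curious, the manual toggle Jason describes corresponds to an explicit knob in Anthropic's API: extended thinking is enabled per request with a token budget, a budget of tokens rather than wall-clock time. A minimal sketch, assuming the Anthropic Python SDK and the launch-era model name; the helper function and the budget numbers are my own illustration, not official usage:

```python
# Sketch: how Claude 3.7 Sonnet's two modes are selected per request.
# ASSUMPTIONS: the Anthropic Python SDK ("pip install anthropic"), the
# launch-era model name, and the `thinking` parameter shape; the helper
# function and budget numbers here are illustrative, not official.

def build_request(prompt: str, extended: bool, thinking_budget: int = 8000) -> dict:
    """Return kwargs for client.messages.create().

    Normal mode is an ordinary completion; extended mode attaches a
    thinking budget, measured in tokens, not wall-clock time.
    """
    req = {
        "model": "claude-3-7-sonnet-20250219",  # assumed launch model name
        "max_tokens": thinking_budget + 4000,   # must exceed the thinking budget
        "messages": [{"role": "user", "content": prompt}],
    }
    if extended:
        req["thinking"] = {"type": "enabled", "budget_tokens": thinking_budget}
    return req

# With an API key configured, you would send it like:
#   import anthropic
#   client = anthropic.Anthropic()
#   response = client.messages.create(**build_request("Prove it.", extended=True))
```

Same model either way; only the request changes, which is the "hybrid" pitch.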
I'm having a senior moment. The company we had on, that I met at Hearst. I mean, I met at BDMI, that does multiple models for you. My Studio. My Studio. Thank you.
That's why My Studio is legit to me, because it can go across companies and pick the best model for you. Yeah. Right. Which I think makes sense. And so, because, you know, when I saw this story, I was debating, do I really care that Anthropic has a new model every week?
Is it new model? Somebody has a new model. Everybody has a new model every single week. You're right. Right.
At least in the days of TV shows and cars, you had to wait for every fall before you got the new one. Right? The new Buick is out. Yeah. The LeSabre.
The other thing that struck me is it reminds me a little bit of the very early days of TWiT, Mhmm, where they'd get excited about gadgetry. It's, oh, there's a new monitor and it has 18 nits instead of 16 nits. And they talk about the nits for ten minutes, and I think, I don't know what the hell they're doing. Yeah.
It's in that kind of weird, incremental change Totally. To have something to announce for the sake of announcing it. We gotta get past that and, say whether something is really worth noting as a new release or shrug. Yeah. Is it the kind of thing that only really matters to the people who are already using the thing?
Right? Like I was mentioning earlier, we all, or at least I can only speak for myself, but I imagine other people are in this camp too. I have found, you know, the things that I use for very specific purposes. And so it's kind of in my habit, in my use case, to use those. You know, I talk about Perplexity a lot.
It's the one I turn to most, but I'm also using ChatGPT for certain things now outside of Perplexity. And if there's an update to the thing I'm already using, then I get excited about it because I'm gonna be directly impacted by that. But you're right. There's so many of these models. And do I use Claude directly?
Do I go to, you know, Anthropic's site and, you know, fire up a query inside of Claude? Not very often. And if I do, it's more from an experimental kind of curiosity perspective than it is a true use case. So this matters a little less to me as well. In a case like this, we can't afford to have $20 or $200 a month subscriptions to all these models.
Yeah. So we don't have a subscription to Anthropic, so we can't demonstrate it for you, because they're releasing this new thing, this deep, long-thinking, slow-thinking mode, whatever it is, only to their premium customers, which I get. You know, they wanna make those feel special. They wanna give those subscribers something to believe in. But then the problem they're stuck in is this commodity market.
Then does this make somebody switch over? Oh, they've gotta subscribe now to try that. Is this gonna win a new customer? No. It's just another... they're all doing deep thinking now.
Mhmm. And this is their deep thinking is long thinking. Yeah. And, you know, it's all about marketing, and it's a marketing problem. I think if you can't try it, then they're not going to convert anybody to a new customer.
I don't think. Yeah. And it's funny that you say their thing, their deep thinking is long thinking. I mean, I I look at this, and I wonder how it's really different from the other deep thinking things too. Because when I use those, it takes a long time.
And I don't know. If you use this, are you able to tell it? Like, can I actually go in here and say, take thirty minutes? You know? Like, is that a frontier that will come up, where it's like, you know what?
I want to assign an amount of time that I want you to keep working, until that time is done, versus you deciding you've found enough. I want you to keep looking. And is that compelling enough? You know, if you ask the people who are really steeped in this stuff, news like this, or actually, I see this a lot from ChatGPT: a new ChatGPT model will come out, and someone will go online, like, boom.
Mic drop. Done. It's this is the best in the world. AGI is here. Sing from the rooftops.
And it becomes the biggest news, apparently, supposedly, to that person. But to everyone else, like, yeah, it's another update to another model that I don't use. So by the way, last week, we wanted to demonstrate Perplexity's deep thinking. Mhmm. And right before the show, or I think we'd not even been in the first few minutes of the show, I put a question into it, and I talked about it last week, about the sequence from the beginnings of binary math through Morse code, through Baudot, Mhmm.
And so on. So we talked about this last week. It came up with this phrase I really liked, and I wanted to put it in. So AI Inside is now in a very long discursive footnote of mine in the book, which is why I was doing it, because I love this phrase, and I didn't know how do I attribute it. We talked about that last week.
So I went to a friend of mine, Matthew Kirschenbaum, who wrote the book Track Changes, who I just saw speak in Princeton last night. And on Facebook, I said, well, how do I do this? It would sound really silly to say Perplexity says, or as Perplexity thinks. Yes. So he said that there is kind of a beginning of a consensus about having to cite these things, and that it is proper to cite them.
But he said in this case, because it's a story, a discursive footnote, that is to say a very long one telling the story is the way to do it. So I told the story in a footnote I wrote last week, and I just I had fun doing it Mhmm. As as geeky as that is. Mhmm. Because it was it was this the the test I did for the show, but then it became part of my book.
So That's great. Yeah. I mean, in a couple of years, I'm guessing, we'll kind of know what the answer is to the question that you're asking. I think sooner than that. I think sooner than that. I think that yeah.
I think there is a need for that. More and more. Like, the need for that is coming up more and more. So what's appropriate transparency? What's a responsible transparency?
What would be considered plagiarism? Can you plagiarize AI? Interesting stuff. That's that's a great question. Can you plagiarize AI?
I mean, yeah. I don't know. As of now, the product of AI, except for one case, the law changed a little bit a few weeks ago, we talked about it, but generally, you can't copyright the product of AI. Yeah.
Okay. So AI doesn't own it, but it's more of an ethical situation. If I pawn this off as my thinking or my writing Yeah. Well, you know, but I think that's unique to writing. It's different.
We're gonna talk a little later in the show about science and AI. And, and of course, citation is important among scientists too. So it's a really interesting question about how to be appropriately transparent for this tool. Absolutely. Absolutely.
Rabbit hole there. Not going anywhere either. So, yeah, interesting stuff. Okay. Well, so then you don't care that Anthropic tested out 3.7 Sonnet on a Game Boy version of Pokemon Red.
I I take you as a as a huge Pokemon fan, Jeff. Is that not accurate? Nope. No. Nope.
Neither am I. Nope. Neither am I. I saw that. I was like, okay.
That must mean something to someone, but I don't know. Okay. Well, there you go. Anthropic's latest model matters to some, but not all. Chegg.
Okay. Is it Chegg or Chegg? Is it GIF or JIF? To me, it's GIF. I'm gonna go with a hard ch.
It's Chegg. Yeah. That's what I'm thinking too. This is a company that I'm not familiar with, which I'm kind of surprised about, because I have kids who might benefit from a product like this, but it's an education and homework technology company that is now apparently taking Google to court, claiming that the AI summaries listed up at the top of Google search, which we've seen for quite a while and gotten very used to seeing nowadays, are having a direct impact on its business. Chegg blames Google's, quote, hollowed out information ecosystem, yowch, for a 49% year-over-year traffic drop as of January and an 87% year-over-year stock drop to a record low of around a dollar per share.
So Chegg is obviously in a position where it needs to figure this stuff out, because its business is flapping in the wind at this stage, but it's blaming Google for it because of AI summaries. So this struck me as familiar when I saw the story in the rundown. And I went back in a very simple little Google search, because Google's really handy, and remembered that it was Chegg that whined similarly when ChatGPT came out. That's right. Yeah.
So, you know, a 49% drop in traffic now. At the time, in 2023, they had a 48% drop in stock because the CEO came out and said, rather unwisely, I think, publicly, this could make a big change for us. And the stock market said, really? Right. It sure could. Because people can use this to study in all kinds of new ways.
So if Chegg just wants to make a business model out of whining, I don't think it's gonna get them very far. And I think this reflex to sue because of a new reality, very much like publishers in the news business are doing, I don't think is good. Well, I definitely think it's not a sustainable business strategy for the future. Maybe you might win a suit. Maybe you might get a bucket of money, but then that's it. That's all you're gonna get.
It's not a strategy. So I'm not impressed with Chegg, and I think they're just not updating for the future. Yeah. Yeah. Whereas Grammarly, on the other hand, has really updated with technology and tried to understand how to change its business and has made new business opportunities in this world.
That's true. And so I think there's a contrast there. So sorry, Chegg, but crocodile tears. Little tiny violin. Oh, poor Chegg.
Yeah. I mean, having not really recalled it, like, I did read up on the whole ChatGPT comparison thing, and maybe I heard about that when it happened. But Chegg, just as a business, did not really ring any bells for me. But from that perspective, of not really having an elongated knowledge of their business and their approach and what they do and all that kind of stuff, I wondered the same thing. Is this just an example of the fact that certain business models thrive and certain business models die when new technologies come along?
And that's Right. Kind of how it happens. Now they are arguing that Google is wholesale using 35 million questions and answers that they have hosted on their site. They argue that Google is using that in their datasets and then offering that information in their AI summaries, which, you know, is, I guess, another part of this that really ties into other lawsuits out there: whether it's okay to, you know, train on that data and then offer it up in a different way. Is it transformative enough, in other words? Well, and, you know, the problem for a company like Chegg is they're dealing with facts that cannot be unique to Chegg because Yeah.
Chegg is trying to teach you things that are generally known. It's educational. Generally known. I can find it, and I can find another way to teach people that. I'm having another senior moment.
Did I ever tell you this story? I was at Google once. I'll tell you this story; I've had many other Google stories. I was sitting in a room, a strategy group. Oh.
And I had a senior moment. I have them more these days. And as I had the senior moment and I couldn't remember something, I I turned to the room and I just said, don't get old. And one of the Googlers without missing a beat said, we're working on that. So, anyway, who's the wonderful guy we had on the show who does educational videos who wrote a book about that?
Sal Sal Khan. Sal Khan. Thank you. Sal Khan. From Khan Academy.
Khan Academy. Right? That's another example of somebody who's impressively updated with the opportunities of AI. He's rebuilding the Khan Academy around AI and around what it can do. Chegg is sitting back saying, oh, no.
You're leaving this out. Mhmm. Sorry, guys. Sorry. I just I just really don't have much sympathy for you.
Yeah. Well, there you go. And also, this can't be the first lawsuit that's targeting AI Overviews, like, specifically. Right? Like, I have to imagine no.
What they're targeting in general, I think that's a good question, Jason. I think generally what they're targeting is the scraping, is the access. Right? New York Times says you took our stuff. All targeting.
Yeah. And so what they're targeting is the learning structure as opposed to the display structure. Right. You know, and the other part about Chegg is that it's a textbook rental business. And the whole textbook publishing and rental business really screws over students.
Yeah. That's true. Right? Publishers put an extremely high price on it because they know it's gonna get rented again, so they wanna try to get the money out at the beginning. And then the rental companies come along, and they pump up the price every single time Mhmm.
And turn over the same dollars again and again and again. And, again, I'm not terribly sympathetic. Yeah. Sorry, Chegg. But I've beaten them up enough now.
I guess we just move on. We can move on. We can have a moment of silence for Chegg. Maybe not quite yet. They're not going away.
They haven't gone away entirely yet, but it's not looking good for Chegg. We are gonna take a break, though. And, hopefully, it won't be a break of silence. Although we'll talk about that a little bit later in the show. But, you know, until then, take a listen to this.
And when we come back, we're gonna talk a little bit about Perplexity's new agentic search browser that's coming up. Alright. Perplexity, we already kinda talked about them a little bit. But this and and to be quite honest, we don't have a whole lot of information. We have bare bones information here, just concepts of a plan, essentially.
Perplexity has a new agentic search browser in the works. They teased it this week, called Comet, which has a sign-up page. If you're interested, you can go and sign up and hopefully be included in the beta, whenever that happens. Perplexity actually says that users can gain quicker access by sharing Comet on social media and tagging Perplexity. They want you to do their hard work for them in raising awareness, which apparently we are doing a little bit of too.
But, essentially, this would be a browser, a web browser with generative AI built into the experience. And Perplexity is not alone. There are other companies that are purporting to be working on this. Dia, from the Browser Company. They're the makers of the Arc browser, which, let's see here.
I think I can open up that page, which, again, is still kind of in a waitlist stage. But, essentially, it would be a browser that would, you know, be heavy on personalization for every user. The address bar would become sort of an action bar. It would be kind of like, yes, it browses the web, but it also integrates the AI functionality. And then also, I was like, okay.
Well, I would be really surprised if Google didn't do this with Chrome at some point. And I guess I failed to realize that if you go into your omnibox or whatever they call it, the address bar, and you put in @gemini and a space, it issues a Gemini command. And so you kind of do have Gemini integrated into the browser. It's not scanning what's on your screen and everything, which I would imagine is the sort of thing that Perplexity would do with Comet and that, you know, Dia is gonna do, to really kind of integrate all components of what you're doing in the browser. But I thought that was interesting.
A little quick tip. Yeah. So I saw that. Perplexity is impressing me all around. We talked about this last week, and just this week they keep on making news. And sometimes it's a gimmick, like, we're gonna buy TikTok or whatever.
Right? Was it TikTok? It was TikTok. You're right.
Yeah. I was gonna say Twitter, but of course it wasn't that. And this week they announced an investment fund, and they put out a version of DeepSeek free of Chinese censorship, and they teased this browser. So they're really on top of things in interesting ways. And I think that, unlike Anthropic in the first segment, where we have a new version.
It's .5, .6, .7, .8, you know? Perplexity is trying to do things that make sense to me as a user, in terms of how they bring the AI out. However, the AI browser, I didn't grok it. I didn't kinda get it.
I was confused. But then when I watched the Dia video, I got excited. And if you go to, I think I put it at 1:45 in the Dia video. The cursor. The cursor.
Okay. So they just say, hey. You got a cursor. There's the cursor. It's there all the time.
But he gives a demo in a second. I should have gone farther up, where you're typing in something right now: Apple released the iPhone in, and you can't remember when, so you go to the cursor and you kinda right-click on it and it'll give you an idea, and it'll say in 2007 and the rest of the phrase. Right? So the cursor becomes a multidimensional tool Right.
Itself. It's not just placement. It's also gathering context that comes before it or Right. Yeah. It knows the context of that browser, and it knows you over a period.
Yeah. Interesting. They also give an example later in the video where you have a whole bunch of tabs open for a gift for your kid, and you send it to your spouse. You tell the browser, take all the gifts that are in those tabs and put them as links, pardon me, in this email and send it. Oh, and that's what I just showed if you're watching the video version, I think. Right.
That yeah. Another example is you've got to, pardon me for one second here. Let me just get rid of this frog. Mute. And I will dance while Jeff gets rid of the frog.
There we go. I think. So another example was basically a newfangled mail merge. If you all remember... do you know what mail merge is, Jason? What?
No. I feel like I should. Oh, you see. This is an Uncle Jeff moment. Okay.
So back in the early days of word processing, you had a letter and you wanted to send it to 25 different people, but you wanted to say at the top, dear Jason, dear Jeff, dear so-and-so. Right? And so you had a separate list of names, in those days not in a spreadsheet, but the equivalent. Okay. And then if you formatted it correctly, it would pick up the name, pick up the address; even in the text of the letter, you could leave a variable and it would pick that up.
You know, you owe me x dollars. Right? That was mail merge. It was a big deal at the time in plain old word processing early on. WordStar, those kinds of things.
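For anyone who never lived through the WordStar era, the mechanic Jeff is describing is just templating: one letter with variables, one list of people, one finished letter per person. A toy sketch in Python (the names and amounts are invented for illustration):

```python
# Old-school mail merge in miniature: one template letter plus a list of
# people yields one personalized letter per person.

template = "Dear {name},\n\nYou owe me {amount} dollars.\n"

# In the WordStar era this list lived in a separate data file rather than
# a spreadsheet; a list of dicts is the modern equivalent.
recipients = [
    {"name": "Jason", "amount": 5},
    {"name": "Jeff", "amount": 10},
]

def merge(template, people):
    """Fill in the template's variables once per person."""
    return [template.format(**person) for person in people]

letters = merge(template, recipients)
print(letters[0])  # the finished letter addressed to Jason
```

That's all mail merge ever was; the Dia demo's twist is that the browser infers the list and the template for you instead of making you maintain them.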
So with this, you don't do the list. You just say it, it knows who it has to go to, and it'll make an email for each person and give them their time to show up, and it just does it on its own much more fluidly and easily. Mhmm. So I think that's compelling.
That's interesting because it knows you. It knows where you are. It intuits that, and it gives you the command structure. Now, the one thing is, to send that email to your spouse with all the gifts, you've got to tell it to do that.
Mhmm. But not a big deal. And so I found that interesting. So another Uncle Jeff moment. Pardon me.
I mean, I get the concept of a mail merge. I'm just used to it being a computerized process and not something more manual. You'd be more manual. Yeah. I mean, I'm so old.
How old am I? I'm so old that I trained people on the first newsroom computers, including at the Chicago Tribune. And I had to instruct people about what is a cursor. The cursor was a new concept to people. No.
You can't. You've got to tell the computer where you want to do it. Yeah. And there was no mouse or anything. You had up and down keys.
No. You gotta go up, and when you wanna insert that word, you gotta go right there. That's where your insert goes, then you type it in. Right? Yep.
And by the way, there was an insert key. If you didn't, it would write over. Oh, yeah. I remember the insert key. Yeah.
Right. The insert key. Then I had to tell people, when you get to the end of the line, don't hit the return key. Just trust me. Trust me.
Just keep typing. No. Really. Just really. Trust me.
Keep typing. See what's gonna happen. Wow. Woah. When it goes over.
And I didn't need to do anything. No carriage return. No bell. Wow. Right?
So my point is the cursor is a relatively recent invention in our culture. Yeah. Right? Popularly, the cursor is from the nineteen seventies to nineteen eighties. Right?
And so the cursor has not really been rethought since then. Mhmm. There were a few variations in its early days, in browsers, where it could be different shapes or other things like that, but it hasn't really had that kind of multidimensional functionality. So I think that's kind of a breakthrough from Dia, and I'm gonna guess that that's what Perplexity is working on and what Google is gonna work on and others are gonna work on. But it really does make you rethink this.
And so my suspicion was, oh, an AI browser is like putting Sriracha on everything. Right? Sriracha on everything for a while there. Right? It was the hot flavor.
Oh, yeah. It's an AI browser. But this actually impressed me and makes me eager to see what gets invented next. So I think this is AI with sense. When I think of the value of Perplexity and how I use it, and the fact that I use it more now in ways that I might have used Google a year ago.
Right? Like, more and more, I'm doing that. And I think that's part of their approach and their strategy, is that, you know, in some ways, they've marketed themselves as a replacement for search, because it is search, but with a lot more tools to gain extra information around a particular topic and organize that and compile it and everything. So from that perspective, I'm like, okay. Well, I'm already opening a Perplexity instance or the Perplexity site or whatever to do browser-y type things, or things that I would do on a search engine through my browser.
So maybe it does make sense. My concern is that I also happen to have the Perplexity Mac app installed on my computer. Maybe it's not a concern. It's just a recognition of the fact that I never use it. I always open up the browser.
And so if it's a secondary thing, am I really gonna use an entirely new browser? I mean, rarely do I ever switch browsers, and I think I'm probably not alone. We picked our browser, and we've been with it forever. You know? Well, I don't, because I have a Chromebook.
So it's Yeah. Yeah, that's true. That's true. It is what it is. Thanks to you.
Yeah, you're right, Jason. Even short of having to open a separate app, having to do a plugin. Yeah. How many times over the years that I was teaching entrepreneurial journalism did people say, I wanna create a plugin. I mean, stop.
Stop right there. The barrier to get somebody to download that, even on your browser, even the browser you've already chosen? No. Not gonna happen. And how many plugins have I installed thinking this is the solution to all of my prayers?
Exactly. And, you know, it just ends up being forgotten for the most part. How many tab organizers and that kind of stuff. So extensions and that kind of stuff don't work. So I guess the only hope then is to try to convince someone that, for the first time in how many years.
Right? Back in the day, early TWiT days, to reminisce some more, it was a big deal when Firefox came out with a new browser. It was a big deal when Chrome came out. Oh, yes. It was. It was terrible.
Microsoft was gonna win this forever, thought the EU. And so I think we got inured to the idea that browsers couldn't be new, but maybe they really, really can be. If I were at Google and I watched that Dia video, and they weren't already testing everything you see there, I'd be pissed. I'd have to imagine that Google is, you know, whether they intend on releasing it sooner, later, ever, that's a different question. Guaranteed they're working on, you know, how do we integrate generative AI into the Chrome experience more than just having, like, a Gemini button in the corner?
You know? How do we actually Yeah. Right. Exactly. Into the experience?
And I think probably that is a difference of a browser like this, because I imagine, again, imagine, because we don't actually know, but I imagine Perplexity's browser, and Dia possibly as well, is going to have some sort of multimodal understanding of what is on the screen inside of that browser. That's probably part of the big reason why. It's the agentic aspect of this, which is Yeah. If you're using this browser, then this browser knows the pages that you're on and knows the contents of those pages through and through, and you can assign tasks to the browser to do things for you that you don't want to do, or whatever the case may be. Right.
And Google obviously has been thinking about that. We've talked about this on the show over time, the last I/O, on your phone. Right? You have a browser, you have a page up, and it knows the context. You want to say, find me someplace near that.
Right? And it will know what you're looking at. But the phone and the browser are kind of different. And I think they've abandoned thinking about the browser because everything went mobile. Everything went mobile.
But on mobile, you know, a lot of the use of the phone is the browser. Mhmm. The browser still matters. The browser is the gateway to the web. The web's not dead.
I mean, I use a browser for probably 80% of what I do on a computer on a daily basis. Oh, same here. Yeah. Super important. It's not going anywhere.
But is it evolving? You know? And that's, I think, what Perplexity hopes to be doing. Certainly. Yeah.
Interesting stuff. I'll be curious to see more. I'm not entirely sold on the idea that I would start using it, unless it really just does the trick, and I don't, you know? We'll find out. The UK is delaying its plans to regulate AI to at least summer 2025.
Apparently, Trump's arrival at the White House is being blamed for a forced rethink of the bill. This comes at a time when you also put something in about the Daily Mail and their front page, I don't know, their campaign against parts of this bill anyways. Tell me a little bit about that. Shall I? I'll read the Daily Mail, appropriately, in the big, big type. Don't let big tech steal UK's creative genius.
The Daily Mail campaign. Our creative genius leads the world. Don't let big tech steal it. They love to be redundant on their pages. Yeah.
This is a media company. We also see that there are artists who are objecting. So Kate Bush and a thousand artists have put out a silent AI protest album, arguing that AI is silencing them. Paul McCartney, Elton John, ABBA's Björn Ulvaeus, Julianne Moore, and folks I haven't heard of because I'm too old, have done this. So it's a big fight in The UK, because I was very surprised that The UK's opening bid in this discussion was openness for training.
But, yeah, the media industry and the entertainment industry are not happy about that. So the war is on. What all that has to do with Trump being in office and why The UK is delaying, I don't fully understand. I don't understand that either.
I mean, other than the fact that it seems like, at this moment in time, from just, like, a power vacuum perspective, everybody looks at the, I don't know, potential influence of Trump and says, okay, wait a minute. Maybe we need to read the room a little bit so that we can be sure to be with him instead of, you know, upsetting him. I'm not entirely sure. The political side of things is sometimes a little lost on me, and I'm kind of okay with that. But, yeah.
Yeah. Yes. It's interesting. I think what's interesting is this album, Is This What We Want? You know, all the tracks on the album, yes, they're silent.
Right? Like, so the album is an album of silence, essentially, which, by the way, John Cage did, I don't know how many decades ago, with his 4'33". So that's my little nerdery moment to insert in there. Uncle Jason. But what I think is interesting, I actually think it's an interesting exercise, is that it isn't just, like, dead silence.
They went into all the different recording studios with the recording gear and recorded studio silence in different recording studios, which I think is just so, like, audiophile nerdy. I love it. I'm like, I kinda want the album just to, like, know what each studio sounds like when no one's making noise. That's interesting. It has its own signature.
It's, like, that's cool to me. But, yeah. Okay. So, you know, and this doesn't mean that it's not happening. It just means that it's pushed to the summer, and I don't know.
Maybe there will be a sequel to this album, because they still need to make the case. We'll find out. Google Research created an AI co-scientist system, and that's what it's called, co-scientist. It's driven by Gemini 2.0. It's meant to assist researchers in hypothesis generation.
It's essentially a research partnering system. So if you were to have a research partner working with you, that's what this AI system is essentially meant to be: generating hypotheses, assessing them, running simulated debates, refining, verifying claims, breaking down complex problems. It all sounds so complicated and everything. And I'm sure scientists would understand a little bit more than I. But I think the aspect of this that I can grasp ties into what I personally feel AI is really good at, which is being some sort of collaborative tool to tap into, to say, what are the things that I might be missing? Can you shed some light on something that might have gotten past me?
And I think that's valuable. Yeah. I found two things really interesting about this. The first is they position this as a research ideas tournament. My head turned at that. Or you could call it a contest.
But I think it's that there are various ideas, and, for whatever reason, it will prioritize them. And then it's up to the scientist to judge: is that worth my time, yeah, to explore? And maybe I wouldn't have thought of exploring that path, but for some reason, probably not explained, this system suggested that I go down this path.
And if a scientist goes down these paths often to dead ends, then the tool is not gonna be useful. But if the scientist is inspired to try something they wouldn't have otherwise tried, to success, that's good. The other thing that's interesting to me is the way they're structured. To the scientist, I think this is pretty much, you know, one output, but the way Google's structuring this is a series of agents: a supervisor agent, a generation agent, a reflection agent, a ranking agent, an evolution agent, a proximity agent, a meta-review agent. So is that just object-oriented programming, or is it more than that, in that each agent is trained for its task and is specialized and has its specific input into the tournament?
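That generate-review-rank tournament shape can be sketched in a few lines of Python. To be clear, this is a toy stand-in, not Google's implementation: the real co-scientist wires Gemini-backed agents into each role, while here every agent is a trivial placeholder function invented for illustration:

```python
# Toy sketch of the multi-agent "tournament": specialized agents
# generate, review, and rank hypotheses under a supervisor, and a
# human scientist judges the ranked output. All agent logic here is a
# trivial stand-in for what the real system delegates to an LLM.

import random

def generation_agent(topic: str, n: int = 4) -> list[str]:
    # Stand-in: the real agent would prompt a model for novel hypotheses.
    return [f"{topic}: hypothesis {i}" for i in range(n)]

def review_agent(hypothesis: str) -> float:
    # Stand-in for the reflection/review step: score plausibility.
    random.seed(hypothesis)  # deterministic toy score per hypothesis
    return random.random()

def ranking_agent(scored: list[tuple[str, float]]) -> list[str]:
    # Tournament-style ranking: best-reviewed hypotheses first.
    return [h for h, _ in sorted(scored, key=lambda pair: -pair[1])]

def supervisor(topic: str) -> list[str]:
    hypotheses = generation_agent(topic)
    scored = [(h, review_agent(h)) for h in hypotheses]
    return ranking_agent(scored)  # the scientist judges the top of this

ranking = supervisor("antibiotic resistance")
```

The design question raised above lives in those function boundaries: if each role is a differently specialized model rather than an interchangeable function, the structure is more than object-oriented decomposition.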
And so I think it's an interesting architectural question about how this operates. That is super interesting. I like that. You know, and again, not expecting one single model or system to know everything, but really getting specialized with each of those individual agents, having them work together, and everything. So they tested this in a few different challenges.
One of them was drug repurposing. The AI suggested existing drugs to fight leukemia, which, when they tested it, effectively killed cancer cells at safe doses. They used it for target discovery, so it identified new treatment targets for liver scarring that actually proved effective on human liver tissue. And then evolution mechanisms: in forty-eight hours, it successfully proposed the same solution to how superbugs dodge antibiotics that Professor José Penadés at Imperial College London spent ten years figuring out. So they kind of, Jeez.
Assigned it to the same task that this professor did. And in forty-eight hours, it came up with the same solution. I hope the professor said, you know, that two-week vacation I've been meaning to take, but I've been too busy? I think I'm gonna take it now. I think it's time.
Yeah. Yeah. It also proposed four other promising ideas his team had never considered, by the way. So it was, like, oh, and by the way, here you go. I'll be out all week.
You know? And it, yeah, started smoking its cigar. So, a professor I know at Stony Brook is working on ALS, a terrible disease, and he has a new view of how it starts. And so he has a hypothesis that he's worked on, and now the next step is to figure out what drugs might delay that triggering.
Right? So you can just see how valuable this is. You've got the hypothesis. Now run through these things for me and help me. It's still his job to do this, but this is where AI shines.
Mhmm. Totally agree. It's not gonna come up with the answers, but it's going to help the scientist think through paths. Yeah. Who can't be enthusiastic about this?
Yeah. Absolutely. Totally agree. I think it's a really good example of what AI systems can be really great at, and will probably continue to be really great at, as we move forward. Another venue for AI, much to the chagrin of real hardcore gamers out there, probably, is Muse.
Microsoft Research unveiled Muse, an AI model designed to generate gameplay videos based on a, quote, world and human action model, WHAM, which is what you can call it, which is a very gamery acronym. It's trained on seven years of gameplay data from the game Bleeding Edge, and it predicts how the game world will respond to player input. It generates gameplay pathways. You could start it all from, like, a single frame. So you start with a frame, and you say, alright, create a gaming pathway.
My dog is whining right now. Apologies. He really wants to go somewhere. So it could take a single frame and then create, kind of, like, gameplay options from there. It gains an understanding of the physics within the video game.
And, really, Microsoft is positioning this as a way for game developers to get some assistance in trying out new ideas. In some ways, it's very similar to what we were kinda talking about. It's like, I'm a game developer. I wanna look at different ways of looking at this game that I'm working on. What can you give me?
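As a rough illustration only, here is a toy version of that predict-the-next-frame loop in Python. The real Muse is a learned model over controller actions and game visuals; this stand-in replaces all of that with a hand-written transition rule on a made-up 2D state, just to show the shape of "start from one frame, roll actions forward":

```python
# Toy sketch of the "world and human action model" idea: given the
# current game state and a player action, predict the next state, then
# roll that forward to propose a gameplay sequence from one starting
# "frame". The dynamics here are a trivial stand-in for what Muse
# learns from years of Bleeding Edge gameplay.

def predict_next_state(state: tuple[int, int], action: str) -> tuple[int, int]:
    # Stand-in dynamics: a learned model would output the next frame.
    x, y = state
    moves = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
    dx, dy = moves.get(action, (0, 0))
    return (x + dx, y + dy)

def rollout(start: tuple[int, int], actions: list[str]) -> list[tuple[int, int]]:
    """Generate a gameplay trajectory from a single starting 'frame'."""
    states = [start]
    for action in actions:
        states.append(predict_next_state(states[-1], action))
    return states

trajectory = rollout((0, 0), ["right", "right", "up"])  # one frame in, a pathway out
```

A designer exploring "what could happen from here" would vary the action sequences and compare the resulting trajectories, which is the ideation use Microsoft describes.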
And they can use that to kind of chart, you know, open up new possibilities that they hadn't considered. It's kind of the same idea. I leave this one to you, Jason, not being a gamer myself. Well, I'm not a huge gamer myself either. I'm just kind of interested in the idea of an AI creating a convincing or, you know, playable 3D world in which to navigate.
You know, these games require so many resources, and I'm not convinced. Bronson, it's okay, dude. Chill out. You're fine. Come put your head next to me.
It's okay. You know, there's so many people that are required to make a really compelling game, and it's so incredibly costly. And I don't, in my mind, believe that these AI systems are gonna replace developers and, you know, suddenly AI is gonna create the most compelling video game experience in the world. But I do see how tools like this could be really useful to someone who is creating a game. You know, it's a very creative avenue to explore. Anytime you have the opportunity to kind of have your eyes opened to new possibilities in something creative, and this is just the video game version of that kind of conversation.
That's all. Yeah. This leads me back once again to Jensen Huang's last keynote for NVIDIA, where he talked about digital twins. I'm fascinated by this, that the matrix exists and we're not in it, as I've joked, and that the machine is thinking of all the possibilities that could happen to the car or in the warehouse or so on. So I guess game development's the same way.
Yeah. For sure. In a sense, like, here's the setup, and gee, what could happen now, and where might that go? Whether you're moving stuff around a warehouse or whether the car is trying to avoid a snowbank, I guess it's a similar computational challenge for the AI. Anytime we bring up this idea of AI creating anything related to video games and stuff, like, I'm fascinated by it.
But real gamers, I always hear from them. Like, why would we ever want this? This is horrible. This is awful. I'm not saying that this would create great games.
I'm just saying it's a really interesting avenue to explore. So Microsoft's, you know, kind of taking that one step further and and making it an actual effort, and I think that's really interesting. We'll see what that leads to over time. Let's take a super quick break, and then we'll round things out with a few interesting articles, one from the Washington Post that talks all about inspo. We'll tell you what that means next.
You know, if you're of the younger generation, you already know what inspo means. It means inspiration. And I find this topic fascinating because I've actually considered this. Like, in the world, in the time that we're in right now, where generative AI, especially, like, image generation and video generation, generative AI is obviously not perfect. It creates things, but it's, like, convincingly inaccurate in so many different ways.
Right? Like, it wants to believe that it's creating things that could really exist in the real world. And, like, at first glance, when you see some of these images, you're like, oh, that's really cool. But then when it comes to really turning that into reality, I've often wondered: is generative AI creating a style that, five or ten years from now, we will already be so influenced by that things in the real world will look different, because of what we've gotten used to seeing out of generative AI? And there's an article in the Washington Post that talks about AI inspo. One of the examples that this article, which you put in there, by the way, talks about is, like, hairstyles.
Like, someone comes into a salon with an image that says, I want my hair to look like this. And the person who's cutting the hair, you know, they can recognize that it's an AI-generated image. And on one hand, it looks perfect. It's, like, a certain type of perfection. But on the other hand, it's like, yeah, but that doesn't actually work.
Or fashion. There's no support structure to make that fashion item that AI generated actually work without some serious modification in the real world. I think that's a really interesting kind of concept. Yeah. It's interesting on so many fronts.
The first is that AI does not understand reality, which we talk about all the time. The ball falls off the table and AI thinks it's gone forever. And so AI doesn't know what hair can do and not do. There was a story I put in the rundown, I think two weeks ago, that we didn't talk about, where a lot of work has been done to get AI to understand Black hair. Because so much of what is out there for training and stuff is white people and white people's hair.
And Black hair is complex, and they had to figure out how to animate that. And there was a breakthrough on this. I don't have the clip at hand. But the problem then becomes that AI is gonna suggest things that can't be done in reality, because AI doesn't know. Point one.
Point two, I like the issue you raised, Jason, about what impact this may have on our culture and on our taste. I mean, back in the day, when Anna Wintour, the editor of Vogue, said chartreuse is in, everybody was gonna wear chartreuse. And this is the length of skirts, and this is this, and this is that. Fashion was set top-down by the designers and by a few fashion journalists like Anna Wintour, between Vogue and Harper's Bazaar. Well, what happened when social came along?
The culture made itself fashion, and fashion's edge was determined on Instagram and Pinterest and YouTube and TikTok. And so that was really important. So now, and I hadn't thought of this until you mentioned it, it's interesting: will AI then in turn influence that in the next generation? Is there an idealization of what people will expect?
And the other thing is, we have seen some companies try to combine AI with, basically, CAD/CAM and manufacturing. You know, if you really want a table and you know what you want it to be, you can use AI to design it. Can you then get it made? Well, not if AI doesn't understand reality and can't do that. Right. But one can imagine it's not a far step to put in the constraints there, to say, well, a table can't really work with just two legs.
Right. You know? So, Mhmm, the rule is at least three, and go from there. And so I could see, well, hairstyle is a little different, because it is the hair you have.
Yeah. I could see this becoming a way that everyone could design what they want. Now, as we know from the early days of desktop publishing, that is gonna turn out some really ugly crap. Yeah. People will make up awful things.
They used every possible font in the early web pages, and everything flashed, and taste went downhill. But in the long run, it's a really interesting possibility: I can't design, but I know what I like. Can I get the AI to express that for me? Yeah. Yeah.
If people are turning to AI as kind of an infinite and immediate source of inspiration for certain things, how does that then translate into the real world? That is, right, kind of a fascinating idea and concept. And I have to imagine that at some point, if we aren't already, like, people are probably experimenting with this already anyways. But at some point, what you're talking about, that basis in reality and physics and, you know, what is really needed for something to be supportive in the real sense and not just look interesting in a digitally created image, I think at some point some of those kinks get worked out.
And, yeah, this could be an entryway into a new sense of style or design or whatever. You know? Right now, there's also a lot of backlash to images that are created with AI. If it has that overly polished, overly glossy, kind of, I don't know, plastic look to it, that is, you know, I believe often pretty easy to spot in AI, that is enough to turn people off immediately. But as it gets better and more refined and everything, maybe that goes away, and it just becomes, oh, that's a really cool image.
That's a really cool design. How can I make that real? And I don't know. I want to come back. It comes back to the discussion we had with Lev Manovich about a month ago: does AI have an aesthetic?
Yes. Can you push AI to do something that's better than just the lowest common denominator of all that came before? Right. Will it invent things? Will it be creative?
Will its aesthetic be pleasing? That's something that he writes about with his coauthor in the book: that one thing AI can try to do is to understand what would please you. That's what any ideas board or anything else does. Doesn't mean it's going to be beautiful. It doesn't know beautiful, but it does.
It is programmed to try to please you. It wants to give you what you're looking for. Yeah. Which, given the taste of humans, could be awful, but we'll see. Right.
Interesting stuff. And then finally, I'm really happy you put this one in here as well. Tim O'Reilly, at oreilly.com, has a fascinating, I found, piece that's all about, quote, the end of programming as we know it. And I think this is the real critical thesis of what he's writing here. He's arguing that, yes, generative AI is influencing art.
It's influencing the practice of programming and all this kind of stuff, but that that is nothing new. This has all happened before. We've seen it before. Major shifts in programming techniques and languages, when they've happened in the past, didn't automatically signal the end of programming, and won't do so now, according to Tim. It might signal the end of programming as we know it, the way that we're used to, but programming has always evolved with the tools, and this moment, with generative AI creating code, is no different.
And, right. Yeah. I think I think he's spot on with that. I totally agree with that. Yeah.
I think so too. What struck me about it is that there was a role that we used to call, back in the early days of newsroom computing, systems integrator. Right? And it's not just newsrooms. It was anything.
Right? Well, I have an IBM mainframe. I need software for it. I need to do this for it. I need to do that for it.
You would hire someone to make all this stuff work for you. Got it. Okay. And it was a function. Right?
It was definitely a function. One of the editorial systems that I used, at the San Francisco Examiner and at Time Inc., was called System Integrators Inc. And that was the whole thing of what they did: pull together these pieces. They bought a Tandem computer. They didn't make the mainframe.
But having bought the Tandem computer, having made the terminal, and so on and so forth, they made it all work together. And so that's part of what I got from Tim's piece here: there's going to be all these tools out there. And we talked earlier in the show about all these different LLMs, plus plenty of other tools, and it's a skill just to bring them together. I had a colleague of mine at CUNY's Newmark school named Jeremy Caplan. And Jeremy was amazing.
Jeremy knows every possible tool there is out there for journalists, but you hit tool exhaustion real fast. 100%. Yeah. Yeah. Right.
And so you need somebody to say, this is what I need to do. Could you find me the right tool and just help me do it? Yeah. And I think that's kind of what programming is, and that's my reading of Tim: even though you may not always be writing code, and even though I can tell the LLM what I want, I may not know it well enough, so I need someone to integrate it for me. So, yeah, Tim is very smart about the stuff he's been around, and so I think he's right.
Yeah. He's a legend. Yeah. I also learned something that I hadn't heard before: CHOP, chat-oriented programming.
I hadn't heard that as, yeah, kind of a name for that. That's new. And, yeah.
I just think it's an interesting concept. What did I see? A quote. So he argues that this isn't a reason why this time is different. The fact that you can use your voice and use words to program, he says, quote, that same breakthrough also enables new kinds of services and demand for those services. It creates new sources of deep magic that only a few understand.
So basically he's saying, just because it lowers the barrier for people, suddenly people who aren't actual programmers can, you know, use their voice to ultimately instruct a system to create the code that they're looking for. Just because of that doesn't mean that that skill goes away. It opens the door for new kinds of services on the other side of it. And I think we talk about that on this show a lot, which is, yeah, this doesn't signal the end.
It signals the beginning of something different. And that's true for certain skills. I mean, it's the same thing with journalism school: I'm arguing that we shouldn't be teaching people the skills that were needed for jobs that existed now or ten years ago. Yeah. You know, you go way back.
So, like, my students get amazed when I tell them the jobs that used to exist. Right? Well, you just told me a job that used to exist. Right?
The systems whatever. Like, that's the systems integrator. System integrator. I don't know that that necessarily exists in the same way now, but it's okay. It exists in the sense that the person evolved into something else.
There used to be people whose full-time job on newspapers was to write captions for photos. Yeah. There were people whose full-time job was to go to the composing room, because we weren't allowed to touch the type because of the unions, and they would tell the typesetter where to put it. There was a full-time job where the typesetter would put the type there. Right.
All these things that existed in the past were skills you had to train for. In the case of typesetting, you had to be an apprentice for six years before you were allowed to really do the job. Gone. And how you adapt to that is fascinating. This is my book about the Linotype.
It is in great measure about that adaptation, and about labor, and what happened. And I think we're in a similar spot right now. And if you try to hold on to the skills that you thought you had, you're not gonna do so well. Yeah. Evolve with it, you know. Move forward.
Yeah. And if you're, like, the human Chegg and you wanna sue everybody because you're pissed somebody came in and replaced you, yeah, sorry. That's maybe the easier approach. And I can understand the reaction, wanting to do that, but get past that and reinvent yourself.
Get to learning. Get to using. Be part of the deep magic, ultimately, that Tim O'Reilly was talking about. Interesting stuff. I love the conversation.
Thank you, Jeff, for the hour of fascinating discussion around this moment in artificial intelligence. Really appreciate you each and every week. jeffjarvis.com is the place where you all should go to catch up on Jeff's books now and to prepare yourself for Jeff's books soon, like the ones on the laptop: The Web We Weave, The Gutenberg Parenthesis, and Magazine, all found at your website, Jeff. Thank you, Jason.
Thank you, sir. For everyone watching who hasn't already subscribed, go to aiinside.show. There you can find everything you need to know to follow us on the interwebs and to subscribe to the podcast. You can also just kinda play it from the site. If you go into individual episodes, you've got the embedded video version, if you prefer to watch it as opposed to listen.
It's all there. Aiinside.show. And then finally, patreon.com/aiinsideshow. This is for the super fans out there who really want to support us on a deeper level. Ad free episodes, Discord community.
You get a sticker, actually, for most levels, but if you go in and you commit to the executive producer level, which, oh my goodness, thank you for those of you who have done that, you get an AI Inside t-shirt. Executive producers of this show also get called out on the show. So we're gonna name you: DrDew, Jeffrey Marraccini, WPVM 103.7 in Asheville, North Carolina, Dante Saint James, Bono De Rick, and Jason Neiffer.
It is so great to have six of you on the executive producer level. Thank you all. Love you all so much for your support. And thank you for those of you watching, listening, subscribing wherever you get this podcast. We don't care.
We appreciate you. Thank you for joining us each and every week. We will see you next time on another episode of AI Inside. Take care everybody. Bye bye.