Hosts Jason Howell and Jeff Jarvis dive into OpenAI’s desire to buy Google Chrome, Perplexity AI’s talks with Samsung and Motorola, Google DeepMind’s claim that AI could cure all disease in a decade, and the Oscars’ decision to allow A.I.—with caveats—and more.
Support the show on Patreon! http://patreon.com/aiinsideshow
Subscribe to the YouTube channel! http://www.youtube.com/@aiinsideshow
Enjoying the AI Inside podcast? Please rate us ⭐⭐⭐⭐⭐ in your podcatcher of choice!
Note: Time codes subject to change depending on dynamic ad insertion by the distributor.
CHAPTERS:
00:01:58 - OpenAI would buy Google's Chrome, exec testifies at trial
00:13:49 - Perplexity AI in Talks to Integrate Assistant Into Samsung, Motorola Phones
00:23:13 - AI could cure all disease in a decade, says Google DeepMind CEO; Perplexity’s Aravind Srinivas agrees
00:29:27 - Draft executive order outlines plan to integrate AI into K-12 schools
00:36:48 - Google just fired the first shot of the next battle in the AI war
00:49:18 - IEEE: Google Succeeds With LLMs While Meta and OpenAI Stumble
00:50:04 - Columbia student suspended over interview cheating tool raises $5.3M to ‘cheat on everything’
00:58:31 - Oscars OK the Use of A.I., With Caveats
01:01:09 - ChatGPT burns tens of millions of SoftBank dollars listening to you thanking it
01:03:12 - To stop scraping, Wikipedia releases Kaggle dataset
Learn more about your ad choices. Visit megaphone.fm/adchoices
AI Inside #65
Summary (aiinside.show)
Google and Chrome
● The Department of Justice has suggested that Google sell Chrome,
potentially resulting in a new owner for the browser.
● OpenAI is interested in acquiring Chrome, according to Nick Turley, the
company's head of product.
● Other potential buyers, such as Perplexity, could also be interested in
acquiring Chrome.
AI and Browsers
● The hosts discuss the potential for AI to replace browsers or make them
less important.
● They consider the idea that agentic AI could change the way people
interact with information and applications.
● The conversation touches on the concept of a "browser" and its role in
accessing information.
AI Generations and Education
● The hosts talk about the concept of AI generations and how they might
impact education.
● They talk about the possibility that AI could cure all diseases in the next
decade, according to the CEO of Google's DeepMind.
● The conversation emphasizes the need for responsible AI development
and the importance of considering its potential impact on society.
AI and Human Interaction
● The hosts explore the idea that AI could change the way humans interact
with each other and with technology.
● They discuss the potential for AI to assist with tasks such as research and
writing.
● The conversation touches on the concept of "learning" and how AI might
approach it differently than humans.
Actionable Items
● The hosts encourage listeners to explore the topic of AI and its implications
for society.
● They suggest that listeners consider the potential consequences of AI
development and the need for responsible innovation.
● The conversation highlights the importance of ongoing discussion and
education about AI and its impact on society.
Conclusion
The podcast AI Inside provides a platform for exploring the world of AI and its
implications for society. The hosts inspire listeners to critically assess the
possible repercussions of AI development, while emphasizing the importance of
responsible innovation. The podcast covers various AI topics to educate listeners
about the fast-changing field of artificial intelligence.
Key Points
● The potential for AI to replace or change the way we interact with browsers
and information.
● The importance of responsible AI development and consideration of
potential consequences.
● The need for ongoing education and discussion about AI and its impact on
society.
● The potential for AI to assist with tasks such as research and writing.
● The concept of "learning" and how AI might approach it differently than
humans.
Mentions
People:
1. Jason Howell - host of the podcast
2. Jeff Jarvis - host of the podcast
3. Nick Turley - head of product at OpenAI
4. Yann LeCun - from Meta
5. Linda McMahon - Education Secretary
6. Sam Altman - CEO of OpenAI
7. Chungin "Roy" Lee - Columbia student, Cluely co-founder
8. Neel Shanmugam - Cluely co-founder
Businesses/Brands:
1. Google
2. OpenAI
3. Perplexity
4. Motorola
5. Microsoft
6. AOL
7. Prodigy
8. Gemini
9. Meta
10. CNN
11. IEEE
12. Twitter
13. X (social media platform)
14. Business Insider
15. Patreon
Further Reading:
1. Article on AI generations (mentioned but not specified)
2. Interview with Yann LeCun from Meta (mentioned but not specified)
3. Washington Post story (mentioned but not specified)
4. Book by Jeff Jarvis (mentioned but not specified, from 2011)
Transcript
Jason Howell: Hello, everybody, and welcome to another episode of AI Inside the
Podcast, where we take a look at the AI that is layered throughout so much of the world,
the technology and beyond. I'm one of your hosts, Jason Howell, joined as always by
Jeff Jarvis. I can hear you now. How are you doing, Jeff?
Jeff Jarvis: Hello there. Sorry about that. I have, Jason is like in five places in my life
right now. I'm on YouTube. I'm watching the stream on Twitter. There's Jason. Jason's
everywhere. You're checking all the things. I know when I cut to you, I heard myself
echoing in the background. I'm like, "Oh, is that me now or me then?" See, it's the
Matrix.
Jason Howell: Yeah, it is the Matrix. And you know what? I'm getting comfortable living
in the Matrix. This is just, this is the year 2025. 2025 is the year where we realize we are
living in a simulation. Before we get started, huge thank you to those of you who support
us on our Patreon. Our Patron of the Week, Dan Merchant, is one of the amazing
patrons that support us each and every week. Just go to patreon.com/aiinsideshow if
you want to take part in that. We really do appreciate it. And again, just a quick call out.
Leave us a review or a star rating on your podcatcher of choice. The reviews are
really helpful. We have gotten at least one that I've seen in the last week that is more
current. And that's kind of what I'm hoping for, is to kind of freshen up the reviews and
get some newer reviews. So even if you have an older one, renew it, you know, refresh
it, whatever. We really appreciate it. But let's jump in, because this is a news episode and we've
got a lot of news to talk about. And I think maybe we start with Google. And I say
Google ahead of OpenAI, even though OpenAI is at the start of this news story as well.
But Google's antitrust trial is in, would you call it the punishment phase? I don't know. It's
kind of like how can we punish you, Google?
Jeff Jarvis: It's the woodshed phase, yeah.
Jason Howell: Yes, exactly. The company was found guilty of its, well, found guilty of
unlawful practices in online search and advertising in the U.S. And as a result, the
Department of Justice has recommended, I kind of feel like it's not going to happen, but
has recommended that Google, you know, if we get our way, Google is going to have to
unload Chrome. They're going to have to find a new owner for Chrome. And it sounds
like OpenAI has an executive that testified at the trial. Hey, we'd be interested. We'd be
down. That's Nick Turley, head of product, who testified that the company would be
interested in owning Chrome. Just saying, if you're selling it, we'd certainly be interested
in owning it. What do you think about that, Jeff?
Jeff Jarvis: It's a stunt. It's just like Perplexity saying they're going to buy TikTok. It's
now the way to punish everybody is to make them sell something. And then everybody
jumps up and says, oh, I'll buy it. And it's a kind of ridiculous story cycle that we're in
now. And OpenAI, A, I trust Google with Chrome a lot more than I would trust OpenAI,
period. B, I think it's a stunt. I think it's just for the PR, and it worked. We're doing it right now; we're talking about it because it was in the news all over. And so that's where it is. And C, I wonder what the real value is to OpenAI.
Sure, it could insert itself in that browser, but hello, antitrust. It's the same problem that it
tries to fix. Then it's even worse because the company will use it for its own purposes
and not allow others in. Google has always allowed others in.
Jason Howell: But that's a 20-year problem, Jeff. That's 20 years from now when they finally say, oh, actually, that was a bad idea 20 years ago. Just like Microsoft and the browser in the past.
Jeff Jarvis: Yes. Browsers have been such a focal point because I think they're the main, they're our main entry into it. You know, I remember many, many years ago at the beginning of the
web, we ran focus groups when I worked for Advance Publications in Cleveland. And the people in the focus group said, you know, there's this amazing thing online. It just has everything. It has the weather. It has sports. It has news. It has
fun. What's that? It's called Netscape. And, you know, we've seen for a long time,
people don't understand the brands that underlie the web and everything else. I think
that's probably less the case now. But I think there's this kind of naive view among both
regulators and media that the browser is everything, whether it was Microsoft or
whether it's now Chrome. So it seems like a kind of a silly moment in all of this and
serious stuff going on with Google. Absolutely.
Jason Howell: I see that Chris in the questions asked whether Google could make
another browser. I don't know, because there is no decision yet whether this is actually
going to be the path. That was a recommendation. And I don't know what limitations there
might be. But once again, I would trust Google more than I would trust OpenAI.
Jeff Jarvis: Yeah. And, you know, what does it do to all the rest of the services? The
browser is key to everything we use, to email and docs and drive and maybe not maps, translate. All these services we use are out of this hub of the browser. And if you're
trying to split that off, it's like saying the phone company can own the handset, but
somebody else has to own the phone. For those of you who remember the old days
where those were two different parts, I'm sorry, I just dated myself. And there was a wire
that went into the wall. Yeah, curly cord too. For you kids, I'll show you in Jeff's
museum later.
Jason Howell: Well, I think what you were just talking about kind of illustrates both
sides of it. You know, you were kind of saying the browser, like it's long been seen as
this very, very important thing, but I don't know that that's really, you know, maybe the
case anymore. Maybe I'm getting, you know, your words mixed up a little bit, but also it
is really important and we do channel and funnel so much through it. And so I can see
why a company like OpenAI might love to have Chrome as their kind of anchor for, you
know, especially when we're talking about the agentic AI ambitions, you know, to have a
browser that you can just then completely connect your AI service and all those agentic
qualities directly into, you know, Perplexity has its Comet browser that it's doing this. I
think Perplexity would be another, you know, kind of interesting party to want to own it. I
don't think that they've explicitly said, "Hey, we'd be interested." But I wouldn't be
surprised if they do. And nor do I think any of it matters because I think at the end of the
day, Google's not going to have to sell Chrome. That's my hunch. That would be like,
you know, to your point, the kind of tangled web of everything that is connected that
would interplay in that move just seems, that seems like a lot. And I know that the DOJ
in a case like this, at least my recent understanding of this is that they shoot for the
moon and often they end up somewhere in between where it was and, you know, where
they're shooting for. And I don't think that that place in between necessarily means
Google has to get rid of Chrome.
Jeff Jarvis: Yeah, it's entirely different DOJ than it was when this case was brought. So
who knows what that will mean?
Jason Howell: Yeah, that's true. They're continuing on the theme that they had of Google bad, but we'll see.
Jeff Jarvis: But, you know, you raised two other points that I think are really interesting, Jason, and one argues with the other. The one is that there's finally
competition in browsers. Right. Perplexity is going to create a browser. They have
reason to do so. And if OpenAI is hungry for browsers, it could make one. It
wouldn't cost them much at all.
Jason Howell: Oh, they're going to. It's trivial.
Jeff Jarvis: Yeah. So on the one hand, Jason, the argument is that this is
anti-competitive and we have to get, we have to pull it away from Google. There's
competition. The other contrary argument is that we talk a lot about whether generative AI replaces search. What if it replaces the browser? What if agentic AI
replaces the browser, in essence? It makes the browser far less important because your
pathway to applications and to information and to functions is going to be otherwise, it's
going to be through command structures, new command structures, voice, and so on
and so forth. Whether that happens or not, we can predict, you know, till the cows come
home. Yeah. But the idea that the browser, it's exactly the same as the Microsoft fight.
The browser was the key to everything. And then it wasn't for Microsoft. They lost the
browser war. There was competition and there's competition still. So, yeah, I want to
agree with you that I don't think they'll be forced to do this. These days, I can't predict
anything.
Jason Howell: Yeah.
Jeff Jarvis: But the other thing that obviously bothers me is I'm a Chromebook guy. Right.
And, um, yeah, what does it mean to have to sell Chrome apart from, uh, is that the
browser alone? Is that the OS? Uh, what does that really mean? Yeah. Good question.
Yeah. Go ahead. Sorry.
Jason Howell: No, no, no, no. I want to hear what you had said.
Jeff Jarvis: Just one other thing is the other day just occurred to me. Um, when, when
Google originally used to go to Google.com and there was the blank on the page and
you type that in, right? And when Google went to the, what's it called, Jason, the one bar or the one whatever? Yeah. The address bar became everything, right?
Mm-hmm. It was actually, it was confusing. It confused me for about a week. What was
that address bar? Is that where I go to put things in? And when I would go to the, the
course to the Google search page, it would go ahead and put it up into there to train me
and say, this is where you're doing everything. The Omnibox. Thank you. Um, well, so the Omnibox, the browser is not just a browser. The Omnibox is the path to all kinds of functionality. Um, so anyway, I just, um, yeah, the browser is a fungible beast
now. When's the last time I went to a Google search page and, you know, clicked on the
search area in the middle of the page and put my search in there. Like it happens, it
happens very rarely and randomly it's, you know, and I couldn't even tell you what is the
circumstance that takes me there. But everything that I do is in that Omni box.
Jason Howell: I mean, and what you're saying also really reminds me of the
conversation that I had at Mobile World Congress with, uh, Google's Android head,
Sameer Samat, when we were talking about a post-app world where agentic AI becomes,
you know, so prevalent, is there a need for apps? Is there a need for applications when
agentic, uh, AI can just kind of go to the places it needs to go to do the things. And I
think Samir's point was, was also, you know, appropriate and spot on, which is that
even, even in that world, there is still a need for companies, for brands, for destinationsto have some sort of a kind of a place to go or a, you know, maybe they've got a brand
that they want to convey. And that's, that's how you do it. Like the agentic AI can do
those things, but it might not necessarily mean that we don't have those other things as
well. Cause they also serve other purposes too. So, so I don't know. Um, I do see if, you
know, obviously this was a, this was a publicity stunt, uh, on OpenAI's, you know, side to gain more of the oxygen from the PR room, which it's very good at doing. But no matter what, OpenAI is guaranteed to do the browser thing. I guarantee you they're working on it behind the scenes already. And it's going to be trivial. In fact, in fact, tell the chatbot to make the browser software. Mm-hmm. It's fairly
trivial, I think.
Jeff Jarvis: Yeah. And I guess what's coming, what's coming to me right now also is
that there's some, um, there's some overlap here between like how necessary is, is it to
AI to have its own browser in the same way that how necessary is it to AI to have its
own piece of hardware, like a Rabbit R1 or whatever, you know, it's everybody's
looking for these different ways to make it, um, I don't know, make it more of an
immediate utility. And it seems to be doing all right in the form that it is right now, but
you know, we should pay tribute to the browser was a paradigm, paradigmatic shift. Uh,
I was working at Delphi for one horrible month, which was Delphi internet, um, before I
got the hell out and they were going to have a GUI because everybody had the GUI,
right? Uh, there was AOL and Prodigy and so on. And you had to have your graphical
user interface. And then along, I was, I remember the day when somebody came in and
said, you got to see this thing, and it was the browser, the first crude blue browser. Uh, but it, it immediately said: that changes everything. That's a pathway to everything. And that's what the browser is. It's not a program in and of itself. It's just a way to get to stuff.
Mm-hmm. Yep.
Jason Howell: Interesting stuff there. Um, and then we were talking a little bit, you know, I mentioned Perplexity. Perplexity comes up a lot these days on the show. Uh, I think this is interesting. Perplexity, uh, is working on some big deals right now. One of
them, we might actually hear more about tomorrow with Motorola. This was a deal that I
guess Motorola has an event tomorrow. It's expected that the event is going to be about
their new Razr phone. And according to Bloomberg, we're probably going to get some information around a deal that Motorola has struck with Perplexity to have its Perplexity, uh, agent preloaded on Motorola devices. Gemini, I'm guessing, would still be present on
the device. I don't think that this is necessarily saying that Motorola would not have
Gemini installed, you know, and Google's Gemini is out and Perplexity is in. But I think
this brings the option of another AI assistant or at least the app onto the device. And
then, and then there's Samsung, which apparently is in early talks with Perplexity as
well. Right now, I think according to the case that we were just talking about, it was
revealed that Samsung has a two year kind of licensing deal with Google. And
Samsung's been working really closely with Google on a lot of things. And one of them
is bringing Gemini into Samsung phones. But it turns out Perplexity is talking and
potentially making deals with Samsung to bring the Perplexity agent onto Samsung
to kind of strip apart the status quo of how Google does its business and strikes as
deals, this could be something that we see more of in the next couple of years. So back
to the prior conversation, this is, I didn't expect this to be tied in, but it, but it is.
Jeff Jarvis: Let me ask you a question, because you're a pro phone user, right? You study phones and how they operate, and your use of them is critical to your research, right? When you think of doing something on your phone, what proportion of the time do you, I think these are three choices: do you go directly to an app, do you go to the browser, do you go to the assistant?
Jason Howell: Oh, that's a really great question. Uh, trying to break it down. I mean, I
probably, whoo, that's a, that's a fantastic question. I don't, okay. I'll start with assistant. I
don't use the kind of baked in shortcut for assistant very often. Right. How do I, um, I
mean, and I, and I, that's really slowed down. I think that was different when Google
Assistant was more, um, leveraged and kind of a little bit more dependable and, and
new, cause it was the new thing. I wanted to get in the habit and I certainly had that
habit for awhile. I don't use it as much and I definitely don't use it as much with Gemini.
It's not something that I go to regularly every once in a while. I do. I'm almost more
inclined to launch the app, honestly, to do that either with Gemini or with Perplexity, which I do launch. Perplexity does have the voice assistant capability inside the app, and so I will sometimes launch that. Am I going to, like, I could probably open up my Pixel, you know, in the settings, and assign the Perplexity voice agent as my main agent, but I choose not to. I couldn't tell you why. I think part of the reason is because Gemini is kind of tied on a little bit deeper level to the Android operating system. Like, Perplexity's voice agent is great for search.
Jeff Jarvis: You could be called into an antitrust trial just for that comment.
Jason Howell: But it's true. You know, this is, this has been Google's big, you know, strategy for
better or for worse. And I think, you know, it's, it's starting to kind of bite them in the butt.
It is the intertangled web at the same time I'm a consumer and I kind of want those
conveniences. So, um, that's part of the reason why I don't kind of remove Gemini from
that placement and put perplexity in its place. And then how often do I go to the
browser? Oh, that's, that's, uh, almost always if I go to the browser, it's because I used
the Google search on the, you know, that's on the home screen to like ask a question of
something or to, you know, if I, if I really need to go to directly to a website, I guess I just
put it in there. Um, I don't know. That's interesting. It's hard to, it's hard to give concrete
answers on that. I don't know how to answer that other than when I need to.
Jeff Jarvis: So, so I'm an old fart. So I tend to go to the browser. Okay, certain apps that I use, obviously, you know, the weather app or whatever. But my reflex, what I'm saying is, I want to get somewhere, I want to look up something, I want to do that: I go to the browser. If I'm
using, uh, the app, it's really more voice search. Um, you know, I'm sitting at the dinner table and, how old is Brooke Shields now? I don't know, I'll ask: you know, hey Google, how old is Brooke Shields? That's really not, um, AI. It's really not the agent. I don't think it's, you know, just, yeah. That's hard. Yeah. Just voice search. Yeah, it really is. Uh,
so I'm not using the agents much at all. Which is the reason I asked that. And the question is, what's the bet? If you're trying to deal with Samsung, what's the bet on what the usage is going to be of these things, of a default agent versus a default browser? Um, I don't know. But again, it goes back to our prior discussion. The browser may not be the, for me, it's still the hinge point of everything. For you?
Jason Howell: No, no, definitely not. The browser is definitely not the hinge point for
everything, but I'm, you know, I've got my feet in all different ponds, I suppose, but, um,
but I do think, and we talked about this a little bit last night on, and on the Android
Faithful podcast as well, about like having Perplexity as the voice assistant tied into my
phone by default does eliminate, like I said, some of those deeper kind of connections
to say Android operations or settings or some of Google's kind of, you know, kind of the,
the special sauce that Google has integrated with their services into Gemini and stuff.
So having perplexity in that spot eliminates or reduces some of that functionality. But as
we know in AI, everybody kind of has their favorites or they have their, their AI models
that they turn to for very specific certain things and, you know, and their go tos. And so
that might not matter as much. It might not matter to every user. The fact that when they
use their, their voice assistant, it can't know what to do with turn the lights on in the
living room or whatever, where Google's can. It might just matter that like 90% of the
time when I want to use voice AI, you know, search, it's because I'm researching or it's
because I'm doing this thing and therefore I want that to be assigned to something that
has a different skill set that's more tailored to what I actually need, not, you know, doing
what Google thinks I need.
Jeff Jarvis: Yeah. Yeah. Yeah. Well, that's a really good point is that the, the, the
assistant, aka agent, will be charged with having more knowledge about you. It will be
more personalized, necessarily. That'll probably be the killer, not killer app, but the characteristic that makes it win. Right. It also has a lot of stickiness to it. OpenAI is
really working hard on this right now. You know, they're, they're really kind of doubling
down and opening things up as far as memory for users. And turns out that becomes
really, really handy and helpful over time. As the model learns what, what you're
constantly looking for when you ask it to do this certain thing, that means as a user,
that's less hoops. I have to jump through to get the answers that I'm looking for. And I
think this is just kind of an interesting fact is, is that the more we give into these modelsand get that, that memory, the more sticky those models become, because why would I
want to pick up, pick, you know, take my toys and go over there. When this one already
knows so much about me, I'd have to start over again, going over there. And I think our
phones are going to get there too. And that'll be really interesting from, from that
perspective.
Jeff Jarvis: Yeah. And I can already hear us replaying some of the, the, the battles of yore. One is obviously privacy. You know, it's too much about you. How do you, you know, what control do you have? And the other is the filter bubble argument will resurface. And the filter bubble argument was made by Eli Pariser, and then Axel Bruns wrote a book called Are Filter Bubbles Real?, in which he had lots of research to show that Google was not in fact personalizing to the level that was presumed, was not putting you in a filter bubble. But an agent will. So what was, what was worried about in a moral panic past will come back, perhaps with some cause.
Jason Howell: Yeah. Yeah. Cool. Well, we've got a whole lot more to talk about. We're going to take a super quick break and then we'll talk a little bit about the Demis Hassabis interview on 60 Minutes. That's coming up in a second. Did you, did you get a chance to see Demis's appearance on 60 Minutes?
Jeff Jarvis: No, I've been writing too much about the head of 60 Minutes quitting.
Jason Howell: Oh, different 60 Minutes story entirely. And I was not aware of that, actually.
Jeff Jarvis: Oh yeah. I was on, I was on CNN last night talking about it, actually.
Jason Howell: Oh, no kidding. Wow. I'll have to look that up. Well, yes, Google, Google's
DeepMind CEO did make an appearance on 60 Minutes over the weekend. And I think
it's interesting because there were a lot of, you know, fresh off of our conversation with
Yann LeCun from Meta. There were definitely a couple of points throughout the
interview where it was like, okay, I've heard that before Yann was talking about this too,
particularly the fact that, that Demis was basically saying, you know, AGI, as people
define it very differently or don't, is at least 10 years down the line. So Demis and Yann
appear to be on the same kind of timetable as far as when they think this randomly
defined concept of artificial general intelligence will actually happen. And it's not
immediate. It's definitely somewhere down the line, but he did make the prediction that
AI could potentially cure all diseases within this next decade. All diseases, that's kind of
crazy to think. I feel like anytime you put all or nothing, there's at least a small amount of
invalidation in my mind to something like that. Cause really all of them, maybe that's just
an easy way to say most, but just the way you say most would be probably where I
would go with that. But yeah, I don't know. What do you think about that prediction?
Jeff Jarvis: Even most is too far. It's a turbocharged view of technological solutionism.
And the argument is that the internet really brought out solutionism thinking it's going to
solve everything and it's going to bring peace and certainly has not. No, that's not. This
is a whole other level where it just, it's part of the AGI ASI presumption that we're going
to get there and it's going to be so amazing. It can do these things and it's being
described to it with no reason, no basis. Will it help with medicine? Yes. Will it help find
different uses of molecules? Yes. Will it do things behind the scenes like protein folding?
Yes. All that's yes. All that's amazing enough. But this just goes overboard in a
ridiculous way in my view. And I think it's harmful in the long run on two sides of it. It
puts a target from his perspective. It's dangerous because I think it puts a target on the
back of the technology he's building. Well, it's going to fail. It's not going to reach the
heights that it's been predicted to do. And on the other hand, it makes it more fearsome.
Oh, it's all powerful. It's kind of, you know, it's God. It's not. Let's take the wonder that
we can have with this on its level. Why does it have to be everything? It's irritating. And I
respect him and I respect his work and he's a genius at this stuff. I'm not taking any of
that away. Just don't oversell it, man.
Jason Howell: Yeah. Hard, hard not to, I suppose, when you're that close to it and you
know, work there.
Jeff Jarvis: Yeah. Maybe, maybe they, well, they believe that they know something that
the rest of the world does not. That's, that's part of the problem. Yeah. I think, I think that sets up a distance: they haven't learned what happened in this period of, of the arc of internet hype. And this, this, you know, the internet hype was hypey enough. This is 10 times hypier. Wouldn't you agree?
Jason Howell: Yeah. Yeah. I mean, I was a lot younger when, when the internet was
first, you know, coming around. And I certainly wasn't as analytical at that time. I was
probably caught up in the hype more than anything because I was very excited by it. But
it, but it feels that way from my point of view now, you know, at the same time, it's really
impressive. You know, some of the, some of the accomplishments that have happened
here, right? Like he discusses deep minds, alpha fold, mapped more than 200 million
protein structures in a single year. If that was equated to the amount of time it takes
traditional researchers to do their work prior to this, that would have been 1 billion years
of traditional research time. And take the sale. That's just amazing. That's absolutely
amazing. And, and that gives the confidence to say, well, if we're doing that now, then
what are we going to accomplish in the next 10 years? It's going to be, you know, a
million fold what, where we are right now.
Jeff Jarvis: Yeah. And presuming the hockey stick is applicable to everything in life
because it presumes the, the, the basis of if we, you know, if we do this much now, thenyou've, you've given a definition of what this is. And then you multiply it by a hundred
and you say, well, that's everything. No, there's a lot of challenges in life. Yeah. And I'm
glad the technology is here. We're both boosters of this to the extent that it does the amazing things, but the booster, the high-end boosterism just drives me nuts. So.
Jason Howell: Well, Perplexity CEO Aravind Srinivas agrees with Demis. He called him
a genius after this interview and said he should be given all the resources he needs to
realize this goal. So that's Perplexity, by the way, entering the conversation, it seems
like, more and more right now. They're brilliant at PR.
Jeff Jarvis: They are brilliant. Yeah. Where OpenAI obviously was brilliant because it
took over the world and has gotten all this money and so on and so forth, Perplexity is
not as hypey, oddly enough, right? I don't hear the AGI stuff quite as much from them.
What I see is: we can do this. Oh, we're going to enter this conversation, enter that
conversation. We're going to buy a browser. We're going to buy TikTok. We're going to
agree with our competitors. They just sneak into the stories. Just brilliant.
Jason Howell: Seems, seems to be the case. Yeah. This next one you put in there and
I did not have this on my radar and I thought this would be a really interesting
conversation. The Trump administration, considering a draft executive order that would
direct federal agencies to integrate AI into K through 12 education here in the US, of
course, it's in a very early form at this point, according to this article in the Washington
Post. It would integrate AI into teaching, also administration tasks, create programs
using AI technologies through partnerships with private companies, nonprofits, and
schools to create and promote foundational AI literacy. And yeah, interesting. I mean,
this just seems to go deep. And obviously I have not read the draft executive order in its
entirety. I've just read this article to kind of get a general sense of what's going on here.
But I find myself a little conflicted because on one hand, I think it's really important to
recognize, you know, this like inflection point that we're in right now with technology and
to, you know, in many ways, embrace it, get ahead, if not ride that wave. On the other
hand, it feels so sudden and drastic to commit so quickly to the level at which, you
know, this article seems to illustrate.
Jeff Jarvis: Well, I love the word they used in the Washington Post story:
"pre-decisional." A word I hadn't heard before. It's a concept of a plan. It's a concept of
a plan. I have to do the joke here because the joke is obvious: it instructs Education
Secretary Linda McMahon to prioritize federal grant funding for training teachers, blah,
blah, blah. So she's going to put A1 sauce in our schools. Did you see the story last
week? She kept on calling AI "A1," and so A1 sauce had a bonanza with that. So we're
all going to pour A1 sauce over our students. Yes, I've done the obvious joke. But the
problem with all these executive orders is the attitude that, with the stroke of my
Sharpie, I can change the world. And Lord knows, in some ways he is doing it. But it's
not that easy to just say we're going to put AI in everything. And the irony here, and I'm
trying not to get overly political, though my views are fairly known, is that they're cutting
into education in every other way possible. Right?
Jason Howell: Well, that's part of what feels so drastic, right? It's like, on one hand,
take an axe to all this stuff. On the other hand, let's replace it with AI.
Jeff Jarvis: Right.
Jason Howell: And so deeply. Based on reading through this, it feels like such a deeply
embedded kind of solution. Obviously, they're chasing down countries like China that
are integrating AI into their education efforts. And there's a big sentiment right now in
U.S. leadership that, well, we can't let China win the AI game. We've got to win, so let's
do it by every means necessary. And it's just, yeah, it would be such a response if it
actually happened.
Jeff Jarvis: Yeah. And the fear, I think, is that if you're a teacher, they're going to
come and say, well, yeah, we just we just gave you 20 more students, but no problem.
You've got AI. Right? Or, yeah, preparation's hard. Curriculum is hard. But you've got AI
now, so this makes your job easy. And of course, it doesn't. Not at all. This morning I
watched something that's still going on right now, William & Mary College. They did
something about education and AI. And my friend Matthew Kirschenbaum, University of
Maryland, and Rita Raley from UC Santa Barbara had done a piece in the Chronicle
of Higher Education about whether AI will kind of ruin universities. And the joke today
was, well, AI doesn't need to. It's happening elsewhere. But not a joke. But there's
concern in the academe at that level, the university level, about the relationship to these
big centralized companies, about the resources that are provided or not provided, about
the freedom that academics will have to do things. They were talking about whether
they could run a model under the desk, which, in a way, maybe you can do with
some of the stuff we're seeing. And so there's big concerns at an educational level
about AI all around. Nobody is saying it's not amazing. Nobody is saying it's not a tool
that we should use. Nobody is saying we shouldn't teach our students. But this
presumption that, OK, I can pour the A1 sauce into a syllabus and I'm done is kind of
ridiculous. But there is a demand out there. So at Stony Brook, I wrote a syllabus for a
course in AI and creativity. And last I knew a week ago, it already had 91 students
signed up. And so there's a popular demand and desire for this stuff. And so I think
that's great all around. Just do it smartly. Don't do it as if you think one signature and it's
done. That's all.
Jason Howell: Yeah, reactively and swiftly. Although that's proving to be kind of a
hallmark of where we are right now: reactively and swiftly. For better or for
worse. So yeah, like I said, I'm a little conflicted on this. What I don't want is for U.S.
education to only see the bad potential of AI: oh, well, students are going to learn to
cheat, blah, blah, blah. I do believe that AI, the current state of LLMs and everything it's
developing into through agentic and beyond, I don't think this goes away, and I don't
think that wishing it away or pretending it doesn't exist does any good. And I don't think
that the younger generations coming up necessarily see it or will see it that way either.
They're going to embrace it in a way that we older people are not going to have as easy
a time doing, because it's not our normal. But it's their normal. And so there is a need to
kind of embrace and lean into that education piece. Just please do it in a responsible
way, one that involves a lot of other goods and involves the community in how it's done.
Jeff Jarvis: Yeah. And not just say: you're not doing enough, we need more, everybody
needs an OpenAI subscription. There we go. Right. Done. Do all your work on OpenAI.
OK, perfect. We've done the AI thing. Yeah. Oh, that's one way to do it. We'll see. Let's
talk a little bit about AI eras, because I thought this article, another one that you put in
here, I appreciated reading through it. I'm having a hard time pulling it up here. But if
you go to archive.today, you can get in. Yeah. I'm not subscribing to Business Insider.
Jason Howell: Yeah. Well, the problem is I try and pull up. I try and pull up the archive
links on Chrome and for whatever reason, it never works for me. Really? I had to load it
in an entirely different browser in order for it to work. Anyway, that's a little behind the
scenes. But you had put in this this article that talks a little bit about AI eras. The fact
that like not too long ago, we were in the simulation era, which is kind of the AlphaGo
era where models were learning through repeated and digital simulations and
reinforcement learning. And there was AlphaGo playing the game, and whoa, can you
believe the model is capable of playing this so quickly and dominating and
everything? That was the beginning. Then there was the human or rather is the human
data era where we are right now dominated by Internet scale data transformer models,
of course, and where we reside right now. And then Google researchers David Silver
and Richard Sutton have proposed, according to this Business Insider article, a major
shift in AI development with a concept called the era of experience. Yeah. What do you
like? Tell me a little bit about the era of experience and what they say.
Jeff Jarvis: So, yeah, I thought this was interesting. And by the way, this paper is going
to be part of a book called Designing an Intelligence from MIT Press. So it's a preprint
from Silver and Sutton. And I agree with where this goes. The funny thing was, it
repeats what Yann LeCun told us. Yeah. So credit is given to Google, and that's nice,
because they don't get as much credit in the world as they want. But this isn't just
Google saying this. Jensen Huang, Yann LeCun, Google: they're all saying that the
next phase has to be experience, to teach AI reality. And that's where you're really
headed. And it's going to happen. Reality world models. Yeah. It's going to happen
through robotics and it's going to happen through digital twins and it's going to happen
through data gathering through glasses and all that kind of stuff. But it's got to have
some sense of cause and effect. And it doesn't have that yet. It doesn't know that. So
that's going to be really interesting. So I think I think that the point of the paper is good.
Business Insider does kind of a simplistic view that Google told the world what for. Yeah.
Right. This is where everybody's going. And I think we're waiting for that. I don't want to
say leap. I think it's just I'm going to use the word paradigm again. You know, when I
worked at Delphi way back when, they had a five-dollar paradigm jar. If you used the
word paradigm, you had to put five dollars in it. Not just a quarter, five dollars. It was
that much of an easy word to lean into. I am so guilty of that. I've had to try and back off
of that word. But there will be, I think, a paradigm shift. Oh, that's 15 bucks
already for this experiential layer. But I don't think we've seen it yet, apart from robots
obviously learning some things, in ways we can't touch, because we don't have the
robot, or in digital-twin factories. But we don't touch that, because we're not seeing
what those alternative futures are or anything like that. I don't think we've seen a
consumer-level version of experience yet. Where, oh, it understands that if the egg
drops, it cracks. Right.
Right. Right. And so I think that's what I'm kind of waiting for is the application layer of
experience learning. And it could be a ways away. And it's not going to be like
generative AI, because I don't think a token-based world gets you there. I've got way
out of my depth here. Way out, folks. But I think this is part of what Yann LeCun told us
in the wonderful interview, which, if you haven't seen it yet, Jason will give you the link
in a second. When you're just dealing with this abstraction of tokens, as I keep on
saying, there's no meaning. Well, reality has meaning insofar as that's an egg, and this
is what its properties are, and this is what can happen to it. And it has to associate it with
that concept of egg. That's not the case in generative AI. It's not the case in machine
learning as it stands now. It will be in robotics, right? The hand has to learn: if it's an
egg, don't push too hard, because it'll break. You push too hard, it breaks. So I won't do
that again; I've just learned that about the egg, or however it abstracts that notion of
egg. You know, spheroid-weight thing. And so this is fascinating to me. I just love this
next part of it, but I don't know when it's going to get to our actual attention, past theory.
Jason Howell: Yeah. Well, yeah. And I think one thing that was kind of
interesting to me that I mean is probably just a different way of explaining what you
were just talking about is that the current era that we are in, you know, we often talk
about data scarcity, about the fact that these models are so hungry and they just need
so much information to get smarter and smarter. But yet at the same time, we've almost
fed it almost everything we can at this point. The only way that they get better, you
know, leaps and bounds better into a kind of a new paradigm, as you put it, is by
learning these skills and these limitations themselves, beyond just the information that
they've been fed, in a kind of lived sense. Though that's probably the wrong way to put
it for a machine.
Jeff Jarvis: Well, right, exactly. Well, even learning is a troublesome word. Yeah. But
this paper, at the end of it, emphasizes mainly not robotics but agents. Right. And it
says that, in everyday life, personalized assistants will leverage continuous streams of
experience to adapt to individuals' health, education,
professional needs, and long-term goals. And that's what I think is the best way to do it.
It was she who said that the creativity is leached out of the models, because they've
modified them down so there's no unpredictability. Because unpredictability is where
you get to problems, hallucinations, all that kind of stuff. So you've got to leave in
mistakes to learn. So you've got to tell it to go off and find the plane ticket, and it doesn't
find the plane ticket, and then it has to, that's part of the process of learning, is failing.
And it's really interesting.
Jason Howell: Absolutely. I mean, absolutely. In the human experience, so much is
learned through failure, even though it's incredibly uncomfortable. But that's part of the
reason why you learn so much from it. It's profound. And yeah, so that's necessary. And
do we, as humans who have created this thing, do we have the patience for failure with
these systems? And it's largely, it seems like people express that they don't because
they continue to harp on AI systems that aren't 100% information accurate 100% of the
time. They're not going to be that way. Same as humans; humans aren't either. We're
patient with humans because we realize it's part of the human condition to be imperfect,
but we aren't with the machine. And maybe we need to give the machine a little bit more
grace than we do right now.
Jeff Jarvis: Well, if it can't fail, it can't learn. If it can't fail, it can't get that experience.
And so do we have that tolerance for that failure? How do we build that in? Because I
think we have this idea that the machine is a machine, so it can't make mistakes.
Mm-hmm. Interesting stuff. Now, this next one, oh, and you put in another link here. Did
you want to talk about it real quick?
Jason Howell: Only parenthetically. Just as Business Insider gave Google credit on this
thing we just spent all that time talking about, similarly, IEEE,
interestingly, came in because Google often is said to be behind, behind OpenAI,
behind others. IEEE came in and said Google succeeds with LLMs while Meta and
OpenAI stumble. That's the first time I've really seen major credit being given by
somebody of as much stature as IEEE, saying that just talking about the model, just
talking about the performance, I don't really want to go into any depth here, but it was
interesting to see a slight vibe shift there. Google's getting some good juice here.
Jeff Jarvis: There you go. You get what you deserve, Google. You go, Google. This
next one, oh, boy. Got thoughts on this one. A 21-year-old former Columbia University
student has raised $5.3 million in seed funding for his startup called Cluely. It's an AI
tool designed to help users secretly "cheat on everything." So exams, interviews, sales
calls, first dates, as shown by the verifiably creepy promotional video that they shared
on X, that I'm pretty sure only incels will find appealing. The app concept was born out
of founder Chungin Lee and co-founder Neel Shanmugam's, I'm sorry if I
mispronounced your names, tool called Interview Coder that they developed while
studying at Columbia University. Did they develop this for their work at Columbia
University or was this on the side? Because they were ultimately suspended from the
university and I couldn't figure out if this was something that -- I'm guessing that's a
connection, but it's not clear.
Jason Howell: Yeah, it's not clear. But anyways, the app was designed to allow users
to cheat undetected. They were embroiled in disciplinary proceedings at Columbia over
the AI tool. Right. And they both, evidently, dropped out. So did they create the tool on
their own outside of the university, or was it something they created as part of their
studies? It began as a tool for developers to cheat on LeetCode, a platform for coding
questions that some in software engineering circles consider
outdated and a waste of time. So maybe it was their way to just say, yeah, but this goes
to the definition, what is cheating?
Jeff Jarvis: Well, yeah. Is it cheating when you use a calculator? Right. And that's kind
of part of what they're saying. Right. The story I tell in my book that no one bought
called Public Parts is that Mark Zuckerberg, when he was still in Harvard there, he had
an art class. And the final in the class would have to be writing things about all of these
pieces of art. And everybody knew that. And so they would do study groups. And so he
organized a study group so that everybody was sharing the best of this. And the
argument in the book that Zuckerberg made was that at the end, everybody did better.
By using social, by not seeing it as competitive, by collaborating, they all learned more.
He said that the grades for everyone in the class went up. So was that cheating? Or
was that a smart use of social, collaborative thinking? Is it cheating to use the
technology, or is it a smart use of technology as an aid to you? I think we have to
re-examine the notion of cheating. What does cheating mean? I think that Cluely raises
a kind of interesting question. I just asked it of myself, but I'll ask you too: Is cheating
being unfair? Is cheating being opaque? What constitutes cheating?
Jason Howell: Yeah, what is cheating? Because when I think of cheating in my older,
kind of school-time paradigm, I think: this is a question that wants to know my
knowledge of something. And instead of sharing my knowledge of something, I'm
sharing what I've written down, or what I'm reciting or regurgitating from this thing, in a
moment where I was expected to know it instead.
Jeff Jarvis: Right. But now leave school. You have a similar task. Right. If you get the
answer you need, is that cheating? Does it matter? If you're tasked with a job and you're
able to do the job, does it matter if you knew the answer or if you sought the answer?
Right. Now, when it gets to dating, that is creepy, because it's Cyrano de Bergerac. Am I
really dating you or am I dating the app? I mean, that just felt like incredibly deceptive.
In that promo video, the guy is sitting at a table with, you know, I don't know if it's a
blind date or a first date with an attractive woman, of course, and she's asking him
questions. And then you see his kind of like Terminator view coming up of the A.I. kind
of coming up with the answers that he can feed to her. So he's essentially cheating on
the questions that she's asking, being untruthful or dishonest about his age when she
asks and says, well, you look kind of young, are you sure you're 29? He's being fed all
this information. And then when she decides to walk out, the A.I. kicks in to, like, win
her back. And
so he recites that from a very heartfelt place and almost gets her to the point to where
she finally realizes, I just need to get out of here and leave. And it was just kind of like, I
don't know. I don't think that does anything to endear me to what you're talking about,
because I do agree with what you're saying. Like there was a time when calculators
were probably seen in the same way.
Jason Howell: Oh, they were. Same perspective. And spell check. I mean, you
know, for my preparation for these shows, often I'm using A.I. tools to research, which I
would have had to do manually and by hand earlier. I would have to like do a Google
search and find the stories and collect them, open them in many windows, read through
pull information. Instead of taking 20 minutes to do that, I can take five minutes or
maybe even less and have it pull back those things. And so you could see that as
cheating for these shows, but it doesn't mean that I don't synthesize the information and
do something with it. I mean, these shows are a prime example. Hopefully you get benefit
and value out of it. And if you do, then it's just an example that it kind of doesn't matter.
For those of you listening or watching, I hope you think: oh, good, Jason and Jeff read some stuff that I don't need to read now. Of course, that pisses off
media hearing it said that way. But it's true. You don't have time to read everything. And
maybe in some cases you say, oh, that's interesting to me. I'm going to look it up. I want
to learn more, but that's our choice. It's the same exact problem we get to with search
and media and social media right now: everything need not be the destination. So
anyway, yes, these students are out. I say more power to them. I
mean, yeah, I bet they've got a pathway here. I think this will be interesting to watch.
Just drop the manipulative kind of aspect, you know, with, like, dating and stuff.
Because, all right, how about the other example they give, the main example: sales
calls. Is that bad?
Jeff Jarvis: I guess, if you get lied to. Yeah, I suppose it's bad if it's dishonest. But not
if it's targeted to what my needs are and sells me what I want. Yeah,
totally. If I'm a sales agent, I'm going to go through training in order to effectively sell and
effectively say the right things and effectively not say the wrong things and recognize
cues and all this kind of stuff. If there's a tool that enables me to do that part of my job
better, I don't see anything wrong. I mean, the key to all sales things. There's a guy
named Jeffrey Gitomer who writes sales books. He liked my first book, and so I watched
how this operates. And same as in what I teach in journalism is listening. It's listening to
people understanding what their needs are, empathizing with those needs and trying to
come up with solutions for those needs. And if your solution is in fact legitimate and
good, you make a sale. There you go. Right. That's okay. In fact, we hear a lot about
how this is going to come to customer service. And phone mail jail and all the hell we go
through. Right. So the agent is reading the script. Get off the damn script. And the fear
is that I will be even worse than that. But it may be far better than that. It may
understand my need better. It may be more responsive to that need. It may be able to
get to a solution faster. I was going to say maybe faster; sometimes it's painful. But only
if the AI is given true agentic power, if it has agency to do so. Yes. Yeah. Very
interesting. Let's take a super quick break. Then we got a few more stories around
things out, including Oscars kind of becoming a little bit more welcoming to AI.
Jason Howell: All right. The Academy of Motion Picture Arts and Sciences officially
updated its rules to allow films that use generative AI to compete for Oscars. So
basically coming out with an official stance to say, hey, you know what? Just because
AI tools were used, which, by the way, if AI hasn't just overtaken, or at least highly
influenced, how these movies are made, it will at a very swift pace. They're basically
saying it's okay as long as there's a human involved. They do emphasize that films
where human creativity and human involvement are central will be more heavily
considered. So not, like, a requirement. And filmmakers do not have to disclose the use
of AI; mandatory disclosure had been considered as one thing, and that's not the case
here. So basically they're saying, at the end of the day, what we've been talking about:
AI is just another tool, and yes, you can use it. Just be responsible. And hopefully
you've got humans also doing things on these films too. You can still win. It's kind of
like: what if you use a typewriter?
Jeff Jarvis: Yeah, right. Hey, that's a great example. Yeah. Yeah. Yeah. When word
processing came in, it wasn't, it wasn't to the level of moral panic, but it was, um, some
fear that somehow this was too easy. Somehow this was, this was going to change
things. And it did change the way I wrote, because I wrote in the old typewriter days.
Mm-hmm. It changed things immensely. It made writing easier. It made it faster. It gave
me more power. It lowers the barrier. It levels the playing field. It lets too many people
in.
Jason Howell: Yeah, right. It doesn't quite gate keep the way we used to have it.
Jeff Jarvis: Bingo. Bingo. Yep. Yep. Yeah. Well, so I'm a Howard Stern fan and, um, uh,
he complained when podcasting started. And I think I had this argument with him once
on the air. Oh, podcasts are nothing. You've got to learn radio. You've got to work your
way up. Just a full old-fart take, right? Right. Now you see the Joe Rogans
of the world are huge and even he has to admit that. Okay. So he's still there. Yeah.
Yeah. Howard Stern. Is he, is he still rocking? I haven't listened to the show in many
years.
Jason Howell: He's on Sirius. You've got to pay. Yeah. He's become an amazing
interviewer.
Jeff Jarvis: Yeah. Cool. I used to really be into his show. I used to love it. I need to
check it out again. And then finally, yeah, okay, we're back to OpenAI, but I thought this
was a good way to lead back.
Jason Howell: Yeah. We begin and end with OpenAI these days, but I thought this was
a good way to kind of round out the show. OpenAI CEO Sam Altman shared that
users say please and thank you to ChatGPT, and, as we know, we've talked about it
before, this results in tens of millions of dollars in operational costs. He says there are
significant energy costs to processing every word typed into a chatbot, and of course
please and thank you are also words that enter in there. He couldn't help
himself in saying it's still a good idea to be nice. You just never know someday the robot
might have mercy on your soul. He couldn't help but get that in. How many useless
words? Oh, you know, there's a paradox of text here, because I'm writing this book
about the Linotype. If you go back to the difficulty of writing in the past, whether it was
by scribal quill or by setting type one letter at a time, all of that was really laborious. Yet
people were very long-winded then. Then we get to the age of the internet, and
especially things like Twitter, where we could go on as long as we want.
And suddenly we come up with new ways to be as economical with our language as we
can be. It's just kind of interesting to me. So, on the one hand, I think we were used to
using the fewest words possible, for both Twitter and Google search. And now AI
comes along and says, no, say more. But every time you say more, it costs money. It
costs energy. I mean, it all costs money. I think people are dumping incredible
quantities of data into their LLMs, and a short one- or two-word nicety is not moving the
needle here. I mean, I guess everything at this scale adds up to some large number,
but it's a large number by comparison to the actual large number that is the overall cost
of all words and everything. It's just a speck. It's a grain of sand.
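The grain-of-sand point can be made concrete with a toy back-of-envelope calculation. Every number below is a made-up placeholder for illustration, not OpenAI's real traffic or pricing:

```python
# Toy back-of-envelope: what do a few polite extra tokens cost relative
# to a day's total usage? All figures are hypothetical assumptions.

COST_PER_MILLION_TOKENS = 2.00   # assumed blended $ per 1M tokens
DAILY_MESSAGES = 1_000_000_000   # assumed messages per day
TOKENS_PER_MESSAGE = 500         # assumed prompt + response size
POLITE_EXTRA_TOKENS = 4          # "please" / "thank you" overhead per message

def daily_cost(tokens_per_message: int) -> float:
    """Dollar cost per day for the given per-message token count."""
    total_tokens = DAILY_MESSAGES * tokens_per_message
    return total_tokens / 1_000_000 * COST_PER_MILLION_TOKENS

total = daily_cost(TOKENS_PER_MESSAGE)
polite = daily_cost(POLITE_EXTRA_TOKENS)

print(f"total usage:   ${total:,.0f}/day")
print(f"polite tokens: ${polite:,.0f}/day ({polite / total:.1%} of total)")
```

Under these assumed numbers the politeness overhead is well under one percent of the total bill: a large absolute dollar figure only because the denominator is enormous.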
Jeff Jarvis: Yeah. And there's new efficiency to be had. I remember when search and
the web came up with caching; that was a big deal. Save effort. Speaking of which,
there was a story that didn't make the rundown, but I want to mention it real quickly,
because we talked about this a few weeks ago: sites are being driven mad by AI
scrapers coming in and costing them a huge amount of bandwidth. And so Wikipedia
and the Wikimedia Foundation finally said, oh, to heck with this, and they've put up
freely accessible datasets, 461,000 of them. Don't scrape us. Go there and take it.
Okay? We've talked about this not too long ago.
Jason Howell: Exactly. And so I was talking about how if, if news and other sites did
this and say, here, just take it. It's okay. Here it is. But stop scraping me every day.
Cause it's costing me money. And I think, as is so often the case, the Wikimedia
Foundation is ahead of the rest and thinking smart about this technology. Don't scrape
me, bro. So you, too, can go get that data on Kaggle. Is it "KAG-gle" or "KAY-gle"? Two
Gs, so I would say probably "KAG-gle," but who the heck knows? Interesting. Cool.
Well, we have
reached the end of this episode of AI inside Jeff Jarvis. Thank you so much for being
with me for another hour of, of getting smarter on artificial intelligence and everything.
The Web We Weave is a wonderful book that everybody should read. You can go to
JeffJarvis.com to find that, plus The Gutenberg Parenthesis and Magazine.
Jeff Jarvis: Yes. And Magazine. You cannot find Public Parts there, though. You said
nobody read that. You can probably find it on eBay, I don't know. Let's see if you go to
Amazon. Public Parts was, of course, my Howard Stern joke, because he wrote Private
Parts. Yes. Indeed. Okay. You can still get the audiobook. Yeah. Let's see. I got it. You
got it. There you go. Hardcover $6, paperback $12. And you can get it on audiobook, of
course. There you go. You can go deep into the catacombs of Jeff's work. Cool. And
this is from 2011. So, yeah. Hey, you've been writing a lot of books for a long time. It's
worth mentioning your whole catalog from time to time. Thank you, Jeff. So much fun.
Jeff Jarvis: Thank you, Jason. Always a big time.
Jason Howell: Thank you to everybody for
visiting the site, of course, uh, where you can go to, you know, find all the ways to
subscribe to the show: aiinside.show. And then of course there is the Patreon,
patreon.com/aiinsideshow. And I will just go ahead and throw that up on the screen
along with our amazing executive producers: Dr. Doo, Jeffrey Marraccini, WPVM 103.7
in Asheville, North Carolina, Dante St. James, Bono de Rick, Jason Neiffer (by the way,
he corrected me on how to say his name), and Jason Brady are amazing,
amazing patrons that, uh, that, you know, support us on a, on a deeper level as
executive producers. So, patreon.com/aiinsideshow. But I think that's about it, y'all.
Thank you so much. Thank you again, Jeff. It's a lot of fun. We'll see everybody next
time on another episode of AI inside. Bye everybody.
Jeff Jarvis: Bye everybody.