Out With Agents, In With Superintelligence
January 07, 2025 · 00:52:20


[00:00:00] This is AI Inside, Episode 50, recorded Tuesday, January 7th, 2025. Out With Agents, In With Superintelligence.

[00:00:11] This episode of AI Inside is made possible by our wonderful patrons at patreon.com slash AI Inside Show.

[00:00:17] If you like what you hear, head on over and support us directly. And thank you for making independent podcasting possible.

[00:00:30] Hello, everybody, and Happy New Year. It is me, Jason Howell, for another episode of AI Inside, the show where we take a look at the AI that is layered inside everything.

[00:00:40] Last year was the year of the agent. This year, who knows? Maybe my co-host knows, Jeff Jarvis. Welcome back to the show and Happy New Year, sir.

[00:00:50] Good to see you. Maybe agents will actually arrive this year.

[00:00:53] Maybe, yeah. Last year was talking about how cool agents are.

[00:00:56] Yeah, exactly.

[00:00:57] Yeah, this year it'll be that, or it'll be, what does Sam Altman say that we'll talk about in the future? He says it's not AGI, it's...

[00:01:08] Superintelligence?

[00:01:09] Superintelligence. That was the word. It was on the tip of my tongue. So last year was the agent's hype. This year is the superintelligence hype.

[00:01:17] I just finished watching Jensen Huang's keynote, which we'll talk about in a second, but it was agent-heavy. Lots of agent stuff that they're delivering, I think.

[00:01:27] Yeah, actually delivering. Yeah, I saw part of the keynote as well last night. Some really interesting stuff. I'll make it super quick. Thank you to everybody on our Patreon for supporting us to do this show, patreon.com slash AI Inside Show.

[00:01:40] And actually, one of our patrons, the one I was going to call out today, Daniel Croft, is here. And I would dare say, Daniel, you are here, I think, every time we do a live show.

[00:01:50] Yeah, thank you, Daniel. You are a loyal fan and friend and supporter. Thank you.

[00:01:54] See? Proof. I'm putting it up on the screen. Often, Daniel is like the first to connect to the live stream as well. So, Daniel, you're awesome. Thank you for your support.

[00:02:03] Thank you to everybody who catches us live, who downloads the episode, and, of course, who supports us on Patreon at patreon.com slash AI Inside Show.

[00:02:12] And I think that's about it. Why don't we dive right in? Because we've got a limited amount of time.

[00:02:16] Because here at the top of the show, I thought we could talk about the Consumer Electronics Show. It is effectively happening.

[00:02:24] I think today is like the official first day, but media has been there the last couple of days at different events.

[00:02:29] I'm on my way as soon as this episode ends, and as soon as I upload the audio podcast, I take off to the airport, and I'm going to be there for just a few short days.

[00:02:39] But so who knows what I'll see that we'll talk about.

[00:02:43] I think you're going to see AI everywhere, AI inside and everything.

[00:02:47] Yeah, it's everywhere. I mean, I've been getting so many pitches for the last month about all things AI.

[00:02:54] I mean, it's numbing. At a certain point, it's just like, oh, I don't even know which direction to focus.

[00:03:00] So today, let's focus on the real obvious ones.

[00:03:04] And I think probably the biggest indicator of the direction of AI is NVIDIA.

[00:03:08] Like you said, you were watching the keynote. It sounds like you probably saw more of it than I did.

[00:03:14] I watched the whole thing.

[00:03:15] You watched the whole thing.

[00:03:16] Two hours.

[00:03:18] Yeah, and there was a lot in there.

[00:03:20] The part that I saw was about the RTX Blackwell GPUs.

[00:03:25] And I'm not a huge GPU or CPU guy.

[00:03:30] I can't talk about these things to the depth that a lot of people can.

[00:03:33] But I do think that what's interesting is these are not inexpensive GPUs.

[00:03:39] $1,999 for the 5090 GPU.

[00:03:41] And what they were showing off was they had this demo video of AI-driven neural rendering that's integrated here to do things like improve and enhance the graphical fidelity, facial animations, and more, which is neat on its surface.

[00:03:58] But all of this in real time, what they were showing off was just really impressive.

[00:04:02] And that seems to me to be, if not this or that, you know, that one of these future years, the focus is going to be, I think, largely on AI processing happening in real time and, you know, potentially improving the things that we see and we hear and all that stuff.

[00:04:20] Yeah, what struck me, because, yeah, he introduced a couple of new chips.

[00:04:25] He introduced, I think very importantly, Project Digits, which is a supercomputer the size of a Mac Mini on your desk for $3,000 that has the entire stack of what NVIDIA does.

[00:04:39] And Jason, my big takeaway, I think, is that I keep – I've asked this on the show before.

[00:04:45] I don't understand.

[00:04:46] I thought NVIDIA was a hardware company.

[00:04:48] I keep underestimating how much they're a software company and how much their models and their work with models really matters in this and how much they're delivering to – everywhere that their chips are out there in the cloud, their software, their models are along with it.

[00:05:05] So they talked about delivering agentic libraries, vision, language, animation, biology, and next physical AI, which we'll talk about in a second.

[00:05:14] He talked about how agents are your employees.

[00:05:19] He said that every software developer in the world will have an agentic buddy, a coworker.

[00:05:24] And at Schibsted, which we've talked about many times, they have an editing buddy for people.

[00:05:29] I think that's another way to look at it.

[00:05:31] So he said that the IT department will be HR for digital employees, onboarding them.

[00:05:37] And before with ChatGPT and such, you asked a question.

[00:05:40] It did its thing.

[00:05:41] Its, air quotes, "thinking."

[00:05:43] It came back with an answer.

[00:05:44] Now it's going to be much more about going to a different agent, then a different agent, then a different agent, which we've demonstrated on the show before, how that might work.

[00:05:52] He said –

[00:05:53] Which is great if it gives you the information you're actually looking for.

[00:05:55] In the end, yes, if it knows where to go, and if the answer each agent passes to the next is correct.

[00:06:02] Is the same, yeah.
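For anyone curious what that agent-to-agent chain looks like mechanically, here is a minimal sketch. Every name in it (planner_agent, flights_agent, hotels_agent) is hypothetical; a real system would route requests through model API calls rather than plain functions, and each hop is a place where a wrong answer can propagate, which is exactly the worry raised above.

```python
# Minimal sketch of agent-to-agent handoff; all names are hypothetical.
# Each "agent" handles what it can and delegates the rest to a specialist.

def flights_agent(request: str) -> str:
    # Stand-in for a model call that only knows about flights.
    return f"flight options for: {request}"

def hotels_agent(request: str) -> str:
    # Stand-in for a model call that only knows about hotels.
    return f"hotel options for: {request}"

def planner_agent(request: str) -> str:
    # The planner decomposes the task, delegates, and merges the answers.
    # If any hop routes badly or answers wrongly, the merged result is
    # wrong too.
    return "; ".join([flights_agent(request), hotels_agent(request)])

print(planner_agent("three days in Las Vegas during CES"))
```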

[00:06:03] He said that Windows WSL 2 – and I'm not a Windows guy, so I don't know anything.

[00:06:08] He said it's just built for this.

[00:06:10] It's perfect for this.

[00:06:11] Somebody in the audience screamed out something about Linux, and he said, well, Linux is good.

[00:06:18] Let's just not talk about Linux today.

[00:06:19] Yeah, not today.

[00:06:20] Not right now.

[00:06:22] Interestingly, then he went to a lot about physical things, about robotics, including cars.

[00:06:27] And he said that there was – we've gone from language models to what he called world models.

[00:06:33] And Cosmos is a new thing that they're openly licensing.

[00:06:37] It was trained on 20 million hours of video.

[00:06:40] It's RAG for physical ground truth.

[00:06:44] So, just as we know that language models don't really have any sense of meaning, don't know what words mean, RAG helps them get toward a ground truth.

[00:06:56] The example he gave is: when a ball goes off the table, we know that it still exists.

[00:07:02] AI doesn't.

[00:07:03] So AI has to be trained that it still exists.

[00:07:05] And so they've done tons of training on this so that it can have some sense of real.
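For anyone unfamiliar with the RAG idea being invoked here, a toy sketch of retrieval-augmented generation: stored "ground truth" facts are retrieved and prepended to the model's prompt. The fact store and word-overlap scoring below are illustrative assumptions only; real systems use vector embeddings, and this is not how Cosmos itself works.

```python
# Toy retrieval-augmented generation: ground a prompt with stored facts.
# The fact store and word-overlap scoring are illustrative assumptions.

FACTS = [
    "Objects continue to exist when they leave the field of view.",
    "Unsupported objects fall under gravity.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Score facts by crude word overlap; real systems embed and compare vectors.
    words = set(query.lower().split())
    scored = sorted(FACTS, key=lambda f: -len(words & set(f.lower().split())))
    return scored[:k]

def grounded_prompt(question: str) -> str:
    # Prepend retrieved ground truth so the model's answer is anchored to it.
    context = "\n".join(retrieve(question))
    return f"Ground truth:\n{context}\n\nQuestion: {question}"

print(grounded_prompt("Does the ball still exist after it rolls off the table?"))
```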

[00:07:10] So then where this goes next is that he shows factories and how the factories do all these things, right?

[00:07:18] And then he shows cars and how they do all these things.

[00:07:20] And there's multiple computers involved, obviously.

[00:07:23] So I'm getting to the point here.

[00:07:25] He said that there's three computers for a car.

[00:07:27] One is training, right?

[00:07:29] One is implementation, the chip in the car.

[00:07:33] But the other one that's so important is the digital twin.

[00:07:37] Every factory has a digital twin where it's going through every possible simulation of what might happen next and how it arrives at those tokens to come to a decision to say this is what's going to happen next.

[00:07:51] And the same with a car, that there's a digital twin that's happening where it's trying to figure out all this stuff based on its training, based on what's going on.

[00:07:59] And so my somewhat joke here is we always ask whether we're in the matrix.

[00:08:04] No, we're the real world, but there's a huge matrix out there that's building where all these simulations are going on, where the digital twins are going on, where it's unlimited parallel universes with parallel action.

[00:08:17] It was blowing my brain, as you can tell.

[00:08:21] So he said the cars for NVIDIA now are a $4 to $5 billion market already.

[00:08:26] He predicted it would become a multi-trillion dollar industry.

[00:08:30] They put out a new car chip called Thor, which is really a universal robotics computer.

[00:08:36] It can go in a car, it can go in a robot, it can go into anything.

[00:08:40] He talked, and this is what we've talked about before, I'm almost done, about synthetic data.

[00:08:46] And he showed how they manufacture synthetic data for driving training.

[00:08:51] Yeah, okay.

[00:08:52] Which was interesting, right?

[00:08:54] So they'll have a street scene, and then suddenly it's a snow scene, or suddenly there's a construction scene there.

[00:09:00] Or suddenly there's a deer in front.

[00:09:02] Right.

[00:09:03] But I'm still concerned about, well, how do we know that that has ground truth, that that's real, that it's not training the car that a deer looks like a child and you can run it over.

[00:09:13] I don't know.

[00:09:16] He said the ChatGPT moment for general robotics is just around the corner.

[00:09:20] Well, what isn't around the corner?

[00:09:21] AGI, super intelligence.

[00:09:23] It's all around the corner.

[00:09:24] All this stuff.

[00:09:25] The future is around the corner, depending on how you define the corner.

[00:09:28] Right.

[00:09:29] So those are the notes that I took.

[00:09:32] The beginning was the hardware.

[00:09:33] Then all of that was software.

[00:09:35] All of that was the structure and the stack that they give you.

[00:09:40] So I found it fascinating.

[00:09:41] I also think that Jensen Huang is the keynote king forever.

[00:09:47] Yeah, he's great.

[00:09:48] Everybody else brings out other people.

[00:09:50] He did a solid two hours.

[00:09:53] Yeah.

[00:09:53] Informative, funny, clearly stated.

[00:09:57] I mean, I didn't understand everything, but he's pretty amazing.

[00:10:02] And I thought it was funny, too, that when they had an image of agents, they were all variations of his head.

[00:10:08] So the guy's got an ego.

[00:10:10] You know, he's got an expensive jacket on.

[00:10:13] Yeah.

[00:10:13] I mean, how do you not have an ego wearing that jacket on stage?

[00:10:16] Although he did kind of poke fun at himself wearing that jacket at the very beginning of the presentation.

[00:10:21] Right before the presentation, NVIDIA stock hit a new all-time high.

[00:10:28] I think it's down a little bit today.

[00:10:30] Yeah, it's down 5% today because I think it's profit-taking.

[00:10:34] But it was impressive.

[00:10:36] And I think that it really said this is the future of – I'm glad you're going to CES.

[00:10:42] Because CES was always, oh, we have a new record player.

[00:10:45] Right.

[00:10:46] And now it's going to try to be at the heart of this.

[00:10:49] And putting Jensen Huang in that opening position, I think, was an important effort by them to say we're part of the club.

[00:10:55] So that's my report from having watched.

[00:10:58] I hope that I get – my time there, I'm there 48 hours, a little more than 48 hours.

[00:11:05] And I was talking in pre-show that there is almost not an hour where I don't have something booked.

[00:11:13] Really?

[00:11:13] I've probably overcommitted myself is what I'm thinking.

[00:11:16] But it's going to be interesting.

[00:11:18] And I hope that I'm able to swing by wherever NVIDIA is positioned and check out – I don't know.

[00:11:26] Check out Digits, I guess.

[00:11:28] It's that $3,000 AI supercomputer.

[00:11:31] It's like the size of a Mac Mini.

[00:11:33] And yet on your desktop, you can run some crazy 200-billion-parameter model, with one petaflop of AI performance.

[00:11:42] So Grace comes to the desktop.

[00:11:44] Yeah, a cloud computing platform that sits on your desk.

[00:11:47] Yes, yes, exactly.

[00:11:48] And again, it comes with their stack.

[00:11:51] And so that's what's pretty amazing about it and what people can do.
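A quick back-of-the-envelope on why a 200-billion-parameter model plausibly fits on the box: at the 4-bit (FP4) precision behind NVIDIA's one-petaflop figure, the weights alone come to about 100 GB, against the roughly 128 GB of unified memory reported for Project Digits. Both figures are taken from NVIDIA's announcement, not verified on hardware.

```python
# Back-of-the-envelope: weight memory for a 200B-parameter model at FP4.
params = 200e9            # 200 billion parameters
bits_per_param = 4        # FP4 quantization, per NVIDIA's petaflop-at-FP4 figure
weight_gb = params * bits_per_param / 8 / 1e9

print(f"{weight_gb:.0f} GB of weights")  # -> 100 GB
# Against a reported 128 GB of unified memory, 4-bit weights fit
# with some headroom left for activations and the KV cache.
```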

[00:11:54] And as a teacher now at a STEM university, Stony Brook, imagine if students had open access to this.

[00:12:03] It's phenomenal.

[00:12:04] It's just phenomenal.

[00:12:05] Oh, well, they'd get that thing to write all of their reports is what they'd do, Jeff.

[00:12:09] We don't want that.

[00:12:14] They are going to be changing the world.

[00:12:16] So yeah, it was really, really interesting.

[00:12:17] And I've got to say that every Jensen Huang keynote blows my mind in some way or another.

[00:12:23] Yeah, he's great.

[00:12:24] He really is.

[00:12:26] And some, you know, some, like I don't want to, what comes to mind is Sundar Pichai.

[00:12:32] Love the guy.

[00:12:33] But sometimes when he's on stage, there's a lack of energy or excitement.

[00:12:40] And I feel like Jensen Huang is kind of the opposite in his stage presence.

[00:12:45] Whether you understand what he's saying or not, he does a really great job of selling it.

[00:12:50] Oh, he does.

[00:12:50] And he's obviously committed and passionate.

[00:12:52] And you feel that coming in.

[00:12:55] You want someone like him, you know, championing his products in the way that he does.

[00:13:00] Yeah.

[00:13:01] And there's a lot going on right now.

[00:13:03] We're not going to get political here.

[00:13:04] But there's a lot going on with what the various technology moguls are doing and who they're giving money to and who they're paying obeisance to and so on and so forth.

[00:13:12] And I haven't seen anything about him in politics.

[00:13:15] I hope I don't.

[00:13:16] Because he strikes me as somebody, and I've been fooled in the past by other technology moguls in Silicon Valley.

[00:13:22] But so far, I like him.

[00:13:24] Mm-hmm.

[00:13:25] Yeah.

[00:13:25] So far, I kind of trust him that he's being smart with us.

[00:13:29] I think he and Yann LeCun are the two people I trust most in this AI world to just be straight shooters.

[00:13:36] Yeah.

[00:13:37] We'll see.

[00:13:37] Curious to know what everybody watching and listening thinks.

[00:13:41] Contact at AIinside.show.

[00:13:43] Send us an email.

[00:13:44] Let us know what you think.

[00:13:45] Like, is Jensen Huang?

[00:13:47] Like, do you agree with what Jeff is saying there?

[00:13:50] Because I totally agree, too.

[00:13:51] And part of it is just like, well, I just like the guy.

[00:13:55] It seems like things are right.

[00:13:56] But if we're missing something, let us know.

[00:13:58] Contact me.

[00:13:59] And you can go to the NVIDIA site and watch the keynote or fast forward through.

[00:14:02] And it's entertaining.

[00:14:04] Yeah, for sure it is.

[00:14:05] To us nerds, it is.

[00:14:06] But not to real people.

[00:14:07] It's impressive.

[00:14:08] Samsung also held its own event and unveiled its AI for All,

[00:14:14] which, you know, Samsung is really all about its ecosystem and has been.

[00:14:22] I often say Samsung really wants to be Apple when it comes to ecosystem and continually tries.

[00:14:28] And so this is really bringing its AI into more of its product ecosystem,

[00:14:33] not just its smartphones, which we're already used to seeing, but a lot of its other places.

[00:14:38] So they have, you know, Vision AI-enabled TVs with real-time translation.

[00:14:45] What is it?

[00:14:46] AI upscaling for better picture quality.

[00:14:49] Content summaries instantly.

[00:14:51] Copilot built into the TV.

[00:14:53] That sort of stuff.

[00:14:54] You've got a new Galaxy Book series.

[00:14:56] Galaxy Book 5 AI PC lineup.

[00:14:59] So you've got your integrated AI search functionality.

[00:15:02] Photo editing capabilities driven by their AI.

[00:15:05] And then home appliances, of course, because this is the Consumer Electronics Show,

[00:15:10] as well as smart home devices.

[00:15:12] So laundry appliances with AI features.

[00:15:16] AI-enabled art frames.

[00:15:18] Home security equipment with AI capabilities.

[00:15:21] So the whole gamut, Samsung just basically doing what I think they're all working towards.

[00:15:26] But Samsung just has this broad ecosystem into which to bring its AI,

[00:15:32] whether it makes sense or not.

[00:15:34] You had to see that coming.

[00:15:36] You know, as you're talking, Jason, what occurs to me is,

[00:15:39] one of the other things they did was a Samsung refrigerator

[00:15:41] that would allow you to order groceries direct from Instacart.

[00:15:44] We've heard it before: do you really need connectivity and AI?

[00:15:47] I feel like this has been a CES recurring theme.

[00:15:49] Exactly.

[00:15:50] It's always stuff like that.

[00:15:51] It's in your dishwasher.

[00:15:53] It's in your toaster oven.

[00:15:54] It's in your refrigerator.

[00:15:56] And that's all kind of shruggy and silly.

[00:15:58] But in the TV, it starts to make sense.

[00:16:03] Yeah.

[00:16:03] Right?

[00:16:04] That you could interact with the content that you're watching in different ways.

[00:16:08] Oh, for sure.

[00:16:10] It's really interesting to me.

[00:16:11] We just bought my new wife.

[00:16:12] My new wife.

[00:16:13] Bought my wife a new...

[00:16:14] She's not my new wife.

[00:16:15] She's been my forever wife.

[00:16:16] We bought my wife a new monitor.

[00:16:19] And I've not bought one in so many years.

[00:16:21] I didn't realize they're basically smart TVs now.

[00:16:24] Yeah.

[00:16:24] And they come to streaming, right?

[00:16:26] So the connection of computer and television and AI

[00:16:29] and interacting with the content that you're interacting with as well,

[00:16:37] that starts to get really interesting.

[00:16:40] And I'm also looking for an excuse to buy a new TV.

[00:16:43] There you go.

[00:16:44] Always a good excuse.

[00:16:45] So tell me what you find.

[00:16:46] Yeah.

[00:16:47] I don't know.

[00:16:48] Yeah.

[00:16:49] I wonder how much...

[00:16:50] Well, actually, now that I think about it,

[00:16:52] this leads right into our next story,

[00:16:54] next announcement really well.

[00:16:56] Because maybe you don't need to buy a new TV.

[00:16:58] Maybe you just need to buy a new Google TV device,

[00:17:02] set-top box, whatever.

[00:17:04] Because Google's kind of doing this also.

[00:17:06] They're essentially bringing Gemini to Google TV,

[00:17:10] which up until now,

[00:17:12] Google TV had Gemini kind of sprinkled in there, I would say.

[00:17:17] You know, some AI-generated content summaries

[00:17:20] and like a wallpaper or a screensaver AI generator image thing.

[00:17:27] But it didn't have Gemini, the large language model, integrated.

[00:17:31] It still offered the Assistant, the Google Assistant.

[00:17:36] And in my recent reviews of this with the Google TV streamer

[00:17:40] and a few other devices,

[00:17:41] I've noted kind of a little bit of that confusion there

[00:17:45] is that when you actually want to interact with the device,

[00:17:48] it's still Assistant.

[00:17:49] But they tout, you know, Gemini integrated.

[00:17:53] And it's kind of like, well, that's a half-truth.

[00:17:55] Like, yeah, there's pieces,

[00:17:56] but you're not really working with Gemini.

[00:17:58] Well, now Google is basically saying coming,

[00:18:00] you know, later this year,

[00:18:03] Gemini will replace the Google Assistant.

[00:18:05] But what does that mean?

[00:18:07] In just the TVs or in everything?

[00:18:10] No, I believe the platform itself.

[00:18:12] Google TV, the platform.

[00:18:14] Yeah, so if you've got a TV and it gets an update,

[00:18:18] technically that would lead me to believe

[00:18:20] that you would get Gemini, you know,

[00:18:21] because I'm guessing that there's no real hardware necessity,

[00:18:25] like an AI chip, in order to run the large language model.

[00:18:28] Well, there's the Google TV.

[00:18:29] That's if your TV is a Google TV.

[00:18:32] But if you're not and you buy the Google TV streamer,

[00:18:35] the old Comcast.

[00:18:36] Yeah.

[00:18:37] Chromecast.

[00:18:38] Chromecast, yeah, yeah.

[00:18:38] Right, right.

[00:18:39] I guess you would need that.

[00:18:41] For sure.

[00:18:42] Right.

[00:18:42] You need some sort of device running Google TV.

[00:18:44] Is that only the new,

[00:18:46] formerly known as Chromecast,

[00:18:48] Google and its frigging branding.

[00:18:49] Yeah.

[00:18:49] Now, what is it, TV streamer is it called?

[00:18:52] Is that what it's called?

[00:18:52] Yeah, it's the Google TV streamer.

[00:18:54] Right.

[00:18:55] I'll be curious to know what you need for that

[00:18:57] and when you're going to get it.

[00:18:58] Is it,

[00:18:58] because I have not bought the new streamer

[00:19:01] and it's making me think that if I need to get Gemini there,

[00:19:04] I may have to buy that.

[00:19:06] Yeah.

[00:19:06] It's like a hundred bucks

[00:19:08] and we've got it that we use.

[00:19:10] Google had sent that to me to test out and everything.

[00:19:13] So we've been using it for a while.

[00:19:15] And yeah, it's fine.

[00:19:16] It's great.

[00:19:17] But I do think that integrating Gemini

[00:19:20] for the voice interaction,

[00:19:22] you know, it's kind of like you knew it was going to happen eventually

[00:19:25] and now Google's basically saying it's going to happen.

[00:19:27] And that will give you the ability to talk to it

[00:19:30] with more complex queries,

[00:19:32] without having to follow the syntax.

[00:19:33] Assistant, time and time again,

[00:19:35] you run up against the,

[00:19:37] oh, it doesn't know how to do that sort of thing.

[00:19:39] And Gemini seems to be a lot less,

[00:19:41] you know, likely to fall into that trap for the most part.

[00:19:44] And I wonder how well integrated it'll be

[00:19:46] with things like programming grids

[00:19:48] and searching for programs

[00:19:50] and answering questions about what programs there are.

[00:19:53] This is when it becomes an agent.

[00:19:55] I love,

[00:19:56] if it can interact with that.

[00:19:57] I love this TV show.

[00:19:58] Can you find all the different channels

[00:20:00] that this TV show plays

[00:20:02] and tell me which,

[00:20:04] or yeah, yeah, something along those lines.

[00:20:06] Although when I say that,

[00:20:07] then, you know, DVRs for many years,

[00:20:11] when DVR was a thing

[00:20:13] that we all had to have,

[00:20:14] you know, had the ability of scouring all the channels

[00:20:17] and just recording anytime Friends was on.

[00:20:19] So I don't know how that makes it any better or easier.

[00:20:21] But doing it all with your voice,

[00:20:23] I guess that's a part of the difference

[00:20:24] is it eliminates the friction

[00:20:27] of having to go in and manually do things.

[00:20:30] You can just kind of speak in normal plain,

[00:20:33] your plain language

[00:20:34] and have it do the thing you want it to do.

[00:20:38] So there's that.

[00:20:39] And I'm sure Samsung, you know,

[00:20:40] kind of similar touching back

[00:20:42] on what we were talking about.

[00:20:43] I think it's going to go a very similar direction.

[00:20:45] And then LG had an announcement

[00:20:51] also along the TV front,

[00:20:53] newest OLED TVs

[00:20:55] integrating the new Alpha 11 Gen 2 processor for AI.

[00:21:00] So again, you're going to get that

[00:21:02] improved image processing,

[00:21:04] upscaling, AI sound optimization.

[00:21:07] LG is opting to bring Copilot in as well,

[00:21:12] which I think we were talking about with Samsung,

[00:21:15] as well as the LLM chatbot on device.

[00:21:18] So they're all moving in this direction.

[00:21:20] We're, you know,

[00:21:21] we've used our voice to interact with our TVs before.

[00:21:25] The interaction quality

[00:21:26] is possibly going to improve a little bit

[00:21:29] as more of this kind of like LLM technology

[00:21:32] gets integrated into it.

[00:21:33] Yeah, this separate standalone talk-to-me device,

[00:21:36] Madame A and such, probably goes away.

[00:21:39] And is the TV that device then?

[00:21:41] It'll be interesting.

[00:21:43] Yeah.

[00:21:44] And then LG,

[00:21:45] just lastly,

[00:21:46] lastly real quick,

[00:21:48] affectionate intelligence.

[00:21:50] Doing what Apple did with Apple intelligence

[00:21:52] instead of artificial intelligence,

[00:21:54] let's make it our own.

[00:21:55] So their take is affectionate intelligence.

[00:21:58] And this is all about FURON.

[00:22:01] FURON? FURON? FURON, probably.

[00:22:04] It's an AI platform that combines an LLM,

[00:22:06] spatial sensing and customer lifestyle data

[00:22:09] in real time to, quote,

[00:22:12] understand and empathize with you.

[00:22:14] So...

[00:22:15] I don't know that I want that.

[00:22:18] I don't know.

[00:22:19] I don't want to hug my AI, no.

[00:22:22] Yeah, you'll be in your room,

[00:22:23] you'll cough,

[00:22:24] and it will sense that you coughed,

[00:22:26] and it will adjust your thermostat for you.

[00:22:29] Are you okay?

[00:22:30] Are you feeling okay?

[00:22:31] Here, let me just take care of this.

[00:22:33] Yes, I know,

[00:22:33] you're going to be coddled by your AI.

[00:22:35] Yeah, no.

[00:22:36] No, it's not.

[00:22:37] You're not my mother.

[00:22:38] It's not.

[00:22:38] Or suggest errands for you to run

[00:22:40] when you have free time.

[00:22:41] It sounds like it's just, like,

[00:22:43] creating work for me.

[00:22:45] It does sound like my mother, yeah.

[00:22:47] Yeah, it's like your nagging AI.

[00:22:49] It's not affectionate.

[00:22:51] It's nagging AI.

[00:22:54] All right.

[00:22:55] Well, so, and then who knows what else?

[00:22:57] It is the first official day of CES.

[00:23:00] I'm sure in my time there,

[00:23:02] I mean, no question,

[00:23:02] I'm going to be meeting with some people

[00:23:04] about some AI products,

[00:23:05] so I'll have some stuff to talk about

[00:23:06] next week on the show.

[00:23:08] You might want to look up, while you're there, an automated opening doggy door.

[00:23:13] Oh, definitely.

[00:23:14] That'll be useful,

[00:23:16] so you can see that.

[00:23:17] And we'll see what other kind of funny stuff

[00:23:19] you find at CES.

[00:23:21] Always funny stuff.

[00:23:22] That can be kind of the challenge,

[00:23:23] is how do you filter through the funny stuff

[00:23:25] to get to the real kind of, like,

[00:23:27] groundbreaking revolutionary stuff,

[00:23:28] you know, if it exists.

[00:23:30] So that's the challenge.

[00:23:32] We'll see.

[00:23:33] All right, we're going to take a quick break,

[00:23:34] come back, talk about non-CES-related stuff,

[00:23:38] starting, of course, with OpenAI.

[00:23:39] That's coming up in a second.

[00:23:44] Sam Altman started the year with a tweet

[00:23:47] that seems to suggest that he believes

[00:23:50] the singularity is near.

[00:23:53] And the reason I know this is because

[00:23:55] his tweet literally said,

[00:23:56] "near the singularity;

[00:23:58] unclear which side."

[00:24:01] And, you know,

[00:24:02] that just makes me feel like

[00:24:04] Sam really knows how to get people talking.

[00:24:06] Like, he knows how to say the thing

[00:24:07] that's, like, going to spur...

[00:24:10] Give him the venture money.

[00:24:11] Yeah.

[00:24:12] Yeah.

[00:24:13] Yeah.

[00:24:13] Make sure, remind everybody

[00:24:15] just how noticed he is

[00:24:16] when he says stuff like this.

[00:24:18] But as Gary Marcus always points out,

[00:24:19] there's no definition to these things.

[00:24:23] Singularity, superintelligence, AGI,

[00:24:24] they all equal BS

[00:24:26] unless you have a definition

[00:24:28] and we can demonstrate where it is.

[00:24:31] Yeah.

[00:24:33] And we're not there at all.

[00:24:37] Did I put up the Hank Green thing?

[00:24:39] So Hank Green put up a new test for AI.

[00:24:42] We love Hank Green.

[00:24:44] And after further interrogations and suggestions,

[00:24:49] he says,

[00:24:50] in the spirit of the Turing test,

[00:24:52] I propose the Cripslock test.

[00:24:54] No system can be called AGI

[00:24:56] if that system cannot reliably determine

[00:24:58] whether or not it is spewing absolute bullshit.

[00:25:03] Are you real?

[00:25:06] Fair.

[00:25:07] I think that's totally fair.

[00:25:08] I like it.

[00:25:09] I like it.

[00:25:09] I like it.

[00:25:12] Yeah, I wish I had written it down.

[00:25:15] I came across something that was talking about how,

[00:25:19] God, I'm going to do it absolutely no justice

[00:25:22] because it's broken in pieces in my mind.

[00:25:24] But essentially stating that we,

[00:25:27] you know, often AI is spoken about

[00:25:30] as if it's, you know, of supreme intelligence.

[00:25:34] And it's almost like AI is good at the things

[00:25:36] that can be difficult for humans,

[00:25:38] but is horrible at the things that come easy to humans.

[00:25:44] Like, you know, like the assumptions

[00:25:46] that we as humans make

[00:25:48] when the ball falls off the table

[00:25:49] and that we know that it still exists.

[00:25:50] That's easy for a human to know that.

[00:25:53] Right, and very hard for them, yes.

[00:25:54] Very hard for an AI.

[00:25:56] You know, and then on the flip side,

[00:25:58] can AI solve a complex math problem

[00:26:00] faster than I can?

[00:26:01] Yeah, probably, or whatever the example is.

[00:26:04] And I think that's pretty appropriate

[00:26:05] when you're talking about super intelligence and AGI.

[00:26:08] It's like, okay, well,

[00:26:09] you're talking about it completing everything,

[00:26:12] you know, that a human can,

[00:26:14] and that's just simply not true.

[00:26:16] Like, it can do some really amazing things,

[00:26:18] don't get me wrong, but it's not.

[00:26:21] It's the wrong scale.

[00:26:22] It's the wrong effort.

[00:26:26] The cultural ego of wanting to replicate ourselves

[00:26:30] and then prove you're the one

[00:26:31] that can outdo even ourselves

[00:26:34] is meaningless

[00:26:36] and doesn't really get us anywhere, I think.

[00:26:38] And I think that we then lose

[00:26:39] the opportunities that come from

[00:26:41] the cool stuff it can do.

[00:26:43] Benedict Evans, who I quote constantly,

[00:26:45] who I think is a wonderful analyst,

[00:26:47] said on Bluesky,

[00:26:48] a gentle suggestion,

[00:26:50] any creative or intellectual work

[00:26:52] that can be replaced by a pattern-matching machine

[00:26:54] producing the average

[00:26:56] of what people would probably say

[00:26:57] might not have been very creative

[00:26:59] or intellectual to begin with.

[00:27:02] Which I think is another good rule here.

[00:27:04] Yeah.

[00:27:05] And some people who get scared

[00:27:06] are saying, oh my God,

[00:27:07] it can do what I do.

[00:27:08] Well, maybe it should.

[00:27:10] Yeah.

[00:27:13] That's a statement

[00:27:14] that will get some people not very happy.

[00:27:17] Yeah, a few people complained.

[00:27:19] Yeah.

[00:27:20] Yeah, very true.

[00:27:22] All of this with Altman

[00:27:25] followed the announcement

[00:27:26] of an improved version of the o1 model.

[00:27:29] This was the last gift,

[00:27:31] put that in air quotes,

[00:27:33] of the 12 Days of Shipmas

[00:27:35] bonanza.

[00:27:36] Yeah, we missed that, yes.

[00:27:38] which is not public yet.

[00:27:41] Notably, they skipped o2

[00:27:43] because it's a mobile carrier in the UK

[00:27:45] and they didn't want to get confused.

[00:27:47] So they jumped from o1 to o3

[00:27:51] and, you know,

[00:27:52] and actually,

[00:27:54] this announcement came a day

[00:27:56] after Google announced

[00:27:57] its own thoughtful model

[00:27:59] called Gemini 2.0 Flash Thinking.

[00:28:02] OpenAI says o3 is 20% better than o1.

[00:28:06] So, you know,

[00:28:07] a little bit more better.

[00:28:08] Yeah, and those measurements

[00:28:09] always throw me too.

[00:28:10] It's against,

[00:28:11] it's against a benchmark test

[00:28:12] and that's fine,

[00:28:13] but what does it mean

[00:28:13] to be 20% better?

[00:28:14] Yeah, better.

[00:28:15] Better how.

[00:28:15] What does it mean to be faster

[00:28:16] doing what it does?

[00:28:17] I don't know.

[00:28:18] Better how, yeah.
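The ambiguity is easy to make concrete. "Twenty percent better" can describe at least two very different outcomes, as in this sketch; the 75% starting accuracy is a made-up number, not a real o1 benchmark score.

```python
# Two readings of "20% better" on a hypothetical benchmark accuracy.
o1_accuracy = 0.75                              # made-up starting score

relative_gain = min(o1_accuracy * 1.20, 1.0)    # 20% higher accuracy -> 0.90
fewer_errors = 1 - (1 - o1_accuracy) * 0.80     # 20% fewer errors    -> 0.80

print(relative_gain, fewer_errors)  # same phrase, very different results
```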

[00:28:20] And then finally,

[00:28:22] Altman followed all of this up

[00:28:23] with a blog post

[00:28:24] titled Reflections

[00:28:27] and here we go.

[00:28:29] He says,

[00:28:30] we are now confident

[00:28:31] we know how to build AGI

[00:28:33] as we have traditionally understood it.

[00:28:35] We believe that-

[00:28:36] Wait, wait, wait.

[00:28:36] Stop there.

[00:28:37] Stop there.

[00:28:38] Okay.

[00:29:38] AGI as we have traditionally understood it.

[00:28:41] There is no tradition for AGI.

[00:28:42] You're making it up as you go

[00:28:45] and it's not defined

[00:28:46] as he does.

[00:28:48] Yes.

[00:28:48] Yeah, it's so true.

[00:28:49] So true.

[00:28:50] He says,

[00:28:50] we believe that in 2025

[00:28:51] we may see the first AI agents

[00:28:54] join the workforce

[00:28:55] in quotes

[00:28:57] and materially change

[00:28:58] the output of companies.

[00:29:01] I mean,

[00:29:02] we're seeing that a little bit.

[00:29:04] We're seeing a little bit.

[00:29:04] A little bit now.

[00:29:05] Yeah, a little bit.

[00:29:06] So I guess you're saying

[00:29:07] in 2025

[00:29:08] the year of the agent,

[00:29:09] the actualized agent maybe,

[00:29:12] that that might become more true.

[00:29:14] And I wouldn't say

[00:29:15] that that's false.

[00:29:16] I mean,

[00:29:17] I do think that

[00:29:17] there is going to be

[00:29:18] work of AI agents

[00:29:20] that will join the workforce

[00:29:23] or replace jobs.

[00:29:25] I think that's inevitable.

[00:29:26] Or again,

[00:29:27] become an aide to the job

[00:29:28] that we certainly see.

[00:29:29] Become an aide to.

[00:29:30] Yeah,

[00:29:30] that's a better way to put it.

[00:29:31] Yeah.

[00:29:32] For sure.

[00:29:33] He also says,

[00:29:34] we are beginning to turn

[00:29:35] our aim beyond that

[00:29:36] to superintelligence.

[00:29:37] There we are again

[00:29:38] in the true sense

[00:29:39] of that word.

[00:29:41] Okay?

[00:29:41] There is no true sense

[00:29:42] of that word.

[00:29:43] I don't know.

[00:29:44] I don't know.

[00:29:45] What is the true sense

[00:29:46] of that word?

[00:29:47] Tell me what the word means

[00:29:47] and I'll tell you

[00:29:48] whether there's a true sense

[00:29:49] to it.

[00:29:50] So true.

[00:29:52] Yep.

[00:29:53] So,

[00:29:54] and yeah,

[00:29:54] and the separation

[00:29:55] between AGI

[00:29:57] and superintelligence,

[00:29:59] what exactly

[00:30:00] is that?

[00:30:01] I don't even know

[00:30:02] because again,

[00:30:03] there is no real firm

[00:30:05] definition of these things.

[00:30:06] Exactly.

[00:30:06] It's a moving goalpost.

[00:30:08] Exactly.

[00:30:10] Yeah,

[00:30:11] and he goes on

[00:30:12] and in the end

[00:30:13] basically says

[00:30:14] that we're special.

[00:30:16] So,

[00:30:16] so part of the

[00:30:17] OpenAI news

[00:30:18] is that they are

[00:30:19] in fact starting

[00:30:20] this shift

[00:30:21] from not-for-profit

[00:30:22] to for-profit

[00:30:23] and that's going to be

[00:30:23] a lot of court cases,

[00:30:25] a lot of McGills

[00:30:26] going around.

[00:30:26] That's part of what

[00:30:27] he's doing here

[00:30:28] is kind of setting that up.

[00:30:29] But he's also,

[00:30:31] however,

[00:30:31] trying to argue

[00:30:32] that he's a special company

[00:30:33] and he shouldn't be

[00:30:34] like other companies.

[00:30:36] And

[00:30:38] why?

[00:30:39] Yeah.

[00:30:40] Because now

[00:30:41] is the perfect time

[00:30:42] to do that

[00:30:43] with the incoming

[00:30:44] administration

[00:30:44] here in the U.S.

[00:30:46] being seemingly

[00:30:47] pretty friendly

[00:30:48] to AI development

[00:30:50] and friendly

[00:30:51] to,

[00:30:52] you know,

[00:30:53] just technology companies

[00:30:54] in general

[00:30:55] as you talked about earlier,

[00:30:57] you know,

[00:30:57] some of them

[00:30:58] being friendly

[00:30:59] in the opposite direction

[00:31:00] as well.

[00:31:01] So it's going to be

[00:31:02] really interesting

[00:31:03] to see

[00:31:04] how that impacts

[00:31:05] the development

[00:31:06] of these companies,

[00:31:07] you know.

[00:31:08] Yeah.

[00:31:09] And what kind of,

[00:31:10] what kind of,

[00:31:12] I don't know,

[00:31:13] what kind of freedom

[00:31:14] they have as a result

[00:31:15] to do what they want

[00:31:16] or to make this sort of case

[00:31:18] so that they can get

[00:31:19] what they want.

[00:31:20] So if you were going

[00:31:21] to rate,

[00:31:22] this is absolutely unfair,

[00:31:23] I'm popping this on you.

[00:31:24] If you were going

[00:31:25] to take,

[00:31:26] let's say,

[00:31:26] OpenAI,

[00:31:28] Perplexity,

[00:31:28] Anthropic,

[00:31:29] Google,

[00:31:30] Microsoft.

[00:31:32] Let's just take

[00:31:32] those five companies.

[00:31:33] Okay.

[00:31:34] Rank them for me

[00:31:35] in terms of

[00:31:36] the companies

[00:31:37] you think

[00:31:38] are less BS,

[00:31:40] I'm sorry,

[00:31:41] Meta,

[00:31:41] number six, Meta.

[00:31:43] Oh boy.

[00:31:44] So,

[00:31:45] OpenAI,

[00:31:47] Perplexity,

[00:31:48] Anthropic,

[00:31:50] Microsoft,

[00:31:51] Google,

[00:31:51] Meta.

[00:31:53] Which ones

[00:31:53] do you trust

[00:31:54] to be doing

[00:31:55] real

[00:31:56] quality work?

[00:31:59] Oh my goodness.

[00:32:01] That's such a hard question.

[00:32:02] It's so unfair.

[00:32:03] I don't know that I have

[00:32:04] an answer either,

[00:32:04] but I'm trying to get

[00:32:05] my head around,

[00:32:06] well,

[00:32:06] I guess you have to

[00:32:07] include Jensen Huang

[00:32:08] too, to an extent,

[00:32:09] right?

[00:32:09] Who seems to be

[00:32:11] BSing us?

[00:32:12] And I think

[00:32:12] Altman is trying

[00:32:13] to BS us.

[00:32:14] I mean,

[00:32:15] I feel the BS meter on,

[00:32:17] yeah,

[00:32:17] I don't know if I can

[00:32:18] rank them necessarily,

[00:32:19] but I can say

[00:32:20] when I look at these,

[00:32:21] I feel BS.

[00:32:24] Man,

[00:32:24] to a certain degree,

[00:32:25] there's just a lot of,

[00:32:26] there's a lot of

[00:32:27] seeming snake oil

[00:32:28] in AI

[00:32:29] as far as how

[00:32:30] any of these companies

[00:32:31] really talk about it.

[00:32:33] They all want to

[00:32:33] talk about it

[00:32:34] to a certain degree

[00:32:35] that,

[00:32:36] you know,

[00:32:36] they want to

[00:32:37] overpromise,

[00:32:37] yeah.

[00:32:38] I was going to say

[00:32:38] it can't possibly

[00:32:39] live up to,

[00:32:41] but they're really

[00:32:42] determined to make

[00:32:43] it seem

[00:32:43] to be that.

[00:32:44] I get that,

[00:32:45] I get that

[00:32:46] from OpenAI,

[00:32:47] I get that

[00:32:47] from Anthropic,

[00:32:49] I get that

[00:32:50] from Google.

[00:32:53] Yeah,

[00:32:53] I mean,

[00:32:54] I don't know

[00:32:54] that there's a

[00:32:55] company here,

[00:32:56] you know,

[00:32:56] that I'm looking at

[00:32:57] that I look at

[00:32:58] at least to a

[00:32:59] certain degree.

[00:32:59] To a certain degree,

[00:33:00] yeah,

[00:33:00] it's my unfair

[00:33:01] question.

[00:33:01] I think,

[00:33:03] as I said earlier,

[00:33:04] I think Yann LeCun

[00:33:04] and believe it or not

[00:33:05] Meta,

[00:33:05] and Meta's doing

[00:33:06] all kinds of

[00:33:06] other crazy stuff

[00:33:07] right now,

[00:33:08] putting,

[00:33:09] you know,

[00:33:10] UFC's Dana White

[00:33:11] on its board

[00:33:11] of directors

[00:33:12] and all kinds

[00:33:13] of other stuff.

[00:33:13] But that aside,

[00:33:15] by the way,

[00:33:16] Jensen Huang

[00:33:17] also gave a major

[00:33:18] callout to

[00:33:18] Llama and Meta

[00:33:19] and that they

[00:33:20] built much of

[00:33:21] what they did

[00:33:22] on Llama

[00:33:23] and they think

[00:33:23] that openness

[00:33:24] is a model

[00:33:25] that's going to

[00:33:26] make this grow

[00:33:26] immensely.

[00:33:27] So,

[00:33:28] I think

[00:33:28] LeCun's being

[00:33:29] fairly sober

[00:33:30] and sane.

[00:33:35] High on the

[00:33:36] BS meter,

[00:33:36] I would put

[00:33:37] OpenAI

[00:33:38] and Anthropic

[00:33:39] because they're

[00:33:40] on the BS

[00:33:41] safety thing.

[00:33:43] Perplexity,

[00:33:43] among the new

[00:33:44] guys,

[00:33:44] Perplexity is

[00:33:45] interesting to me.

[00:33:46] I don't know

[00:33:46] how to judge

[00:33:46] them well enough

[00:33:47] yet.

[00:33:48] Yeah,

[00:33:49] I kind of feel

[00:33:49] Perplexity.

[00:33:50] I mean,

[00:33:51] yes,

[00:33:51] I use them a lot

[00:33:52] but I kind of

[00:33:53] feel like they're

[00:33:54] on a different

[00:33:54] level and not

[00:33:55] necessarily playing

[00:33:56] in the same

[00:33:57] game.

[00:33:58] I agree.

[00:33:58] They're more

[00:33:58] application layer

[00:33:59] kind of

[00:34:01] than

[00:34:02] foundational.

[00:34:02] Which isn't to say

[00:34:03] they're so great

[00:34:03] that they exist

[00:34:04] in a different realm.

[00:34:05] No.

[00:34:05] I just don't think

[00:34:06] they're playing

[00:34:08] softball and the

[00:34:09] other companies

[00:34:10] are playing baseball.

[00:34:10] Yeah,

[00:34:11] I agree.

[00:34:11] Somebody's going

[00:34:12] to hate that comment

[00:34:13] but yeah.

[00:34:15] All right,

[00:34:15] thanks for that.

[00:34:16] Yeah,

[00:34:17] no,

[00:34:17] it's an interesting

[00:34:18] question.

[00:34:18] I'm happy you

[00:34:19] asked it because

[00:34:19] it really got my

[00:34:20] mind going.

[00:34:21] Who of these

[00:34:22] companies do I

[00:34:23] trust when they're

[00:34:24] talking about this

[00:34:25] stuff?

[00:34:25] Yeah.

[00:34:26] it's kind of hard

[00:34:27] to find a level

[00:34:29] of trust in

[00:34:30] any of it really

[00:34:31] because it really

[00:34:32] does often seem

[00:34:33] like you're promising

[00:34:34] way more than you

[00:34:35] can deliver or

[00:34:36] you're making it

[00:34:37] way more altruistic

[00:34:39] than it actually

[00:34:39] is.

[00:34:40] I think we see a

[00:34:40] lot of that.

[00:34:41] Yes,

[00:34:42] yes.

[00:34:43] Yeah,

[00:34:43] we went through

[00:34:43] that and I fell

[00:34:45] for too much of

[00:34:45] it I think especially

[00:34:46] the social media

[00:34:47] companies.

[00:34:49] They're companies.

[00:34:50] They're just

[00:34:51] companies.

[00:34:51] Well,

[00:34:52] for sure.

[00:34:52] Yeah.

[00:34:53] They want to make

[00:34:53] them some money.

[00:34:54] Yep.

[00:34:56] And then I want

[00:34:57] to make sure that

[00:34:58] we talk a little

[00:34:58] bit about DeepSeek

[00:34:59] because you had

[00:35:01] an experience

[00:35:02] with DeepSeek

[00:35:03] and I'm super

[00:35:03] curious.

[00:35:04] You sent me a

[00:35:04] bunch of output

[00:35:06] from your

[00:35:06] noodlings with it

[00:35:07] and I'm just

[00:35:08] kind of curious

[00:35:09] to hear you set

[00:35:10] the stage on

[00:35:11] this and explain

[00:35:11] what caught

[00:35:12] your imagination

[00:35:13] with this.

[00:35:14] Once again,

[00:35:14] quoting Ben

[00:35:15] Evans,

[00:35:15] I saw a Bluesky post,

[00:35:17] whatever we call

[00:35:17] it,

[00:35:18] going across

[00:35:19] saying it was

[00:35:20] the end of

[00:35:20] the year and

[00:35:20] I said,

[00:35:21] whoa,

[00:35:21] they kind of

[00:35:21] snuck in with

[00:35:22] a big AI

[00:35:22] story just at

[00:35:23] the end of

[00:35:23] the year.

[00:35:24] Well,

[00:35:24] what's this?

[00:35:25] Because Ben's

[00:35:26] a smart

[00:35:26] analyst.

[00:35:27] And the deal

[00:35:28] is,

[00:35:28] it's a Chinese

[00:35:29] startup and

[00:35:31] what they say

[00:35:32] is that they

[00:35:32] managed to

[00:35:33] build their

[00:35:35] large language

[00:35:36] model with

[00:35:37] tremendously

[00:35:38] less compute

[00:35:41] and thus

[00:35:42] energy and

[00:35:43] expense than

[00:35:44] all the big

[00:35:45] guys.

[00:35:51] DeepSeek

[00:35:52] required 2.78

[00:35:54] million GPU

[00:35:55] hours.

[00:35:58] And you'd

[00:35:58] think that's

[00:35:58] a lot,

[00:35:59] but they're

[00:35:59] saying that

[00:35:59] the total

[00:36:00] amount of

[00:36:00] time used

[00:36:01] to train

[00:36:01] the LLM,

[00:36:03] I'm trying

[00:36:03] to find

[00:36:04] the comparison

[00:36:04] here.

[00:36:06] That's

[00:36:06] substantially

[00:36:07] 2.78

[00:36:08] million GPU

[00:36:09] hours versus

[00:36:11] 30.8

[00:36:12] million GPU

[00:36:13] hours for

[00:36:14] Llama 3.1.

[00:36:16] So,

[00:36:17] basically a

[00:36:17] tenth of

[00:36:19] the compute

[00:36:19] to build a

[00:36:21] model that

[00:36:21] is competitive,

[00:36:23] supposedly,

[00:36:24] on all these

[00:36:25] cases.
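The "a tenth of the compute" framing roughly checks out against the two GPU-hour figures just quoted:

```python
# Ratio of DeepSeek's reported training compute to Llama 3.1's.
deepseek_gpu_hours = 2.78e6   # 2.78 million GPU hours
llama_gpu_hours = 30.8e6      # 30.8 million GPU hours

print(f"{deepseek_gpu_hours / llama_gpu_hours:.1%}")  # -> 9.0%, roughly a tenth
```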

[00:36:25] And Andrej

[00:36:26] Karpathy,

[00:36:27] founding team

[00:36:28] member at

[00:36:28] OpenAI,

[00:36:29] said it looks

[00:36:30] to be a

[00:36:31] stronger model

[00:36:31] at only

[00:36:33] that training time.

[00:36:34] So,

[00:36:35] that's a

[00:36:35] big deal.

[00:36:36] If we

[00:36:37] think that,

[00:36:38] A,

[00:36:39] you can

[00:36:39] build a

[00:36:40] model for

[00:36:40] much less,

[00:36:41] that's

[00:36:41] good news

[00:36:41] for the

[00:36:42] environment

[00:36:42] and expense,

[00:36:44] B,

[00:36:45] it's China

[00:36:45] at a

[00:36:45] very highly

[00:36:46] competitive

[00:36:48] position.

[00:36:49] So,

[00:36:49] I went

[00:36:50] into DeepSeek,

[00:36:50] I thought,

[00:36:51] let me try

[00:36:51] it out.

[00:36:52] And it

[00:36:54] wasn't hard,

[00:36:54] I asked

[00:36:54] it to tell

[00:36:57] me about

[00:36:57] itself and

[00:36:58] it wouldn't.

[00:36:58] It said,

[00:36:58] go to the

[00:37:01] documentation

[00:37:04] but then I

[00:37:05] asked it,

[00:37:05] you have

[00:37:06] two choices,

[00:37:07] you can ask

[00:37:07] it to think

[00:37:08] deeply or

[00:37:09] to search.

[00:37:11] So,

[00:37:11] if you ask

[00:37:12] it to think

[00:37:12] deeply,

[00:37:13] it comes

[00:37:13] back with

[00:37:13] this really

[00:37:14] weird thing

[00:37:15] where it

[00:37:15] says,

[00:37:16] well,

[00:37:16] I wonder

[00:37:17] about this,

[00:37:17] it could

[00:37:18] be that,

[00:37:18] it could

[00:37:18] be this,

[00:37:19] it prevaricates

[00:37:21] and then out

[00:37:22] of that,

[00:37:23] and it gives

[00:37:23] you that text,

[00:37:24] text which by

[00:37:25] the way,

[00:37:25] you can't

[00:37:26] cut and paste.

[00:37:28] It is just

[00:37:29] an image.

[00:37:30] Oh,

[00:37:30] yeah,

[00:37:31] right,

[00:37:31] because you sent

[00:37:32] me.

[00:37:32] I had to

[00:37:32] send you

[00:37:33] screenshots of

[00:37:33] it because

[00:37:34] I couldn't

[00:37:35] get the

[00:37:35] text,

[00:37:35] but then

[00:37:36] it gives

[00:37:36] you the

[00:37:37] answer.

[00:37:38] And it

[00:37:39] leads you

[00:37:39] to think

[00:37:40] that that's

[00:37:40] the answer

[00:37:41] that resulted

[00:37:41] from some

[00:37:42] process of

[00:37:43] air quotes

[00:37:43] thinking.

[00:37:44] I have no

[00:37:44] idea whether

[00:37:45] that's the

[00:37:45] case or

[00:37:46] not.

[00:37:46] And then

[00:37:46] I turned

[00:37:47] around and

[00:37:47] I asked

[00:37:47] similar

[00:37:48] questions

[00:37:48] from the

[00:37:49] search

[00:37:49] piece,

[00:37:50] and it

[00:37:51] did a

[00:37:51] pretty good

[00:37:51] job.

[00:37:52] It was

[00:37:52] okay,

[00:37:53] but it

[00:37:53] was just

[00:37:53] a very

[00:37:54] different

[00:37:54] way to

[00:37:55] try to

[00:37:55] interact

[00:37:55] with

[00:37:56] us,

[00:37:56] and it

[00:37:56] was

[00:37:57] trying to

[00:37:59] communicate

[00:37:59] some AI

[00:38:00] humility,

[00:38:01] which I

[00:38:02] found

[00:38:02] amusing.

[00:38:05] Yeah,

[00:38:06] so some

[00:38:06] people who

[00:38:07] were playing

[00:38:07] around with

[00:38:07] this were

[00:38:09] able to

[00:38:09] get it to

[00:38:10] say,

[00:38:10] I thought

[00:38:10] this was an

[00:38:11] interesting

[00:38:11] kind of

[00:38:12] side note

[00:38:12] on it,

[00:38:13] getting it

[00:38:14] to claim

[00:38:15] to be

[00:38:15] running

[00:38:16] ChatGPT.

[00:38:18] who are

[00:38:19] you,

[00:38:20] what are

[00:38:20] you,

[00:38:20] and

[00:38:22] repeatedly

[00:38:22] enough that

[00:38:23] it was

[00:38:24] notable,

[00:38:24] it would

[00:38:25] say that

[00:38:25] it was

[00:38:25] ChatGPT.

[00:38:26] I can't

[00:38:26] remember

[00:38:26] which version

[00:38:27] of ChatGPT.

[00:38:30] And in

[00:38:30] looking at

[00:38:31] that,

[00:38:31] obviously,

[00:38:32] it might

[00:38:32] not be

[00:38:33] ChatGPT,

[00:38:34] but it's

[00:38:35] at least

[00:38:36] likely to

[00:38:36] a certain

[00:38:37] degree that

[00:38:37] it was

[00:38:38] partially

[00:38:38] trained on

[00:38:39] output

[00:38:39] from

[00:38:40] ChatGPT,

[00:38:41] which is

[00:38:41] maybe

[00:38:43] because

[00:38:43] then you

[00:38:44] can get

[00:38:44] some of

[00:38:45] the

[00:38:45] capabilities

[00:38:45] of these

[00:38:46] other

[00:38:46] models if

[00:38:46] you train

[00:38:47] it on

[00:38:47] that.

[00:38:48] But what

[00:38:48] you're

[00:38:48] playing

[00:38:49] there is

[00:38:49] this game

[00:38:50] of telephone,

[00:38:51] right?

[00:38:51] It's a copy

[00:38:52] of a copy

[00:38:52] of a copy

[00:38:53] and how

[00:38:53] does that

[00:38:54] dilute over

[00:38:54] time and

[00:38:55] how does

[00:38:55] that impact

[00:38:55] the quality

[00:38:56] of the

[00:38:57] output?
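The "copy of a copy" worry maps onto what practitioners call distillation: fine-tuning a student model on a teacher model's outputs. Below is a minimal sketch of the data-generation half, with a stand-in teacher; the function names are hypothetical, and a real pipeline would call a hosted model API.

```python
# Sketch of distillation data generation; all names are hypothetical.
# A student trained on these pairs inherits the teacher's mistakes,
# and each generation of copying can compound them.

def teacher_model(prompt: str) -> str:
    # Stand-in for a call to a stronger hosted model.
    return f"teacher answer to: {prompt}"

def build_distillation_set(prompts: list[str]) -> list[dict]:
    # Each (prompt, teacher completion) pair becomes a fine-tuning example.
    return [{"prompt": p, "completion": teacher_model(p)} for p in prompts]

print(build_distillation_set(["Who are you?", "What model are you?"]))
```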

[00:38:57] Well,

[00:38:57] what's

[00:38:57] fascinating

[00:38:58] there too,

[00:38:58] Jason,

[00:38:59] would

[00:39:00] ChatGPT,

[00:39:00] would

[00:39:01] OpenAI

[00:39:01] scream

[00:39:02] copyright

[00:39:03] violation

[00:39:03] the way

[00:39:04] that publishers

[00:39:05] scream it

[00:39:06] to OpenAI?

[00:39:07] You can't

[00:39:07] train on

[00:39:08] me,

[00:39:08] but I

[00:39:08] can train

[00:39:08] on

[00:39:09] everybody

[00:39:09] else?

[00:39:09] Or did they not say that?

[00:39:10] I

[00:39:16] see.

[00:39:17] I wonder

[00:39:17] if they

[00:39:18] will come

[00:39:18] back and

[00:39:19] say that,

[00:39:20] and if so,

[00:39:22] people will be

[00:39:23] screaming hypocrite,

[00:39:24] I guess.

[00:39:25] And maybe that's

[00:39:26] as far as it

[00:39:26] goes because

[00:39:27] they still

[00:39:27] don't care.

[00:39:28] Yeah.

[00:39:29] It is what it is.

[00:39:31] We are going

[00:39:31] to take a

[00:39:32] super quick

[00:39:32] break,

[00:39:33] and then when

[00:39:33] we come back,

[00:39:34] we have a

[00:39:35] little feedback

[00:39:35] that I thought

[00:39:36] was really

[00:39:36] interesting,

[00:39:37] and then we'll

[00:39:37] round things

[00:39:38] out,

[00:39:38] and then I'm

[00:39:38] going to

[00:39:38] head to

[00:39:39] Las Vegas

[00:39:40] for the

[00:39:40] Consumer

[00:39:41] Electronics

[00:39:41] show.

[00:39:42] So hang

[00:39:42] tight,

[00:39:43] we'll be

[00:39:43] back in a

[00:39:43] second.

[00:39:47] All right,

[00:39:48] rounding

[00:39:48] things out

[00:39:48] with not

[00:39:49] an email,

[00:39:49] this was

[00:39:50] actually a

[00:39:51] comment on

[00:39:52] I think

[00:39:52] our last

[00:39:53] holiday

[00:39:54] episode,

[00:39:55] got a

[00:39:57] comment from

[00:39:57] Carrie

[00:39:58] Littleford

[00:39:59] 8980

[00:40:00] on YouTube,

[00:40:01] so you

[00:40:01] can find

[00:40:01] it if

[00:40:02] you go

[00:40:02] to

[00:40:03] youtube.com

[00:40:04] slash

[00:40:04] at

[00:40:05] Jason

[00:40:05] Howell,

[00:40:05] you can

[00:40:05] find the

[00:40:06] AI Inside

[00:40:06] episode.

[00:40:07] I am

[00:40:07] working,

[00:40:08] by the

[00:40:08] way,

[00:40:08] on moving

[00:40:09] the AI

[00:40:10] Inside

[00:40:10] podcast

[00:40:11] to its

[00:40:11] own

[00:40:11] YouTube

[00:40:12] channel.

[00:40:12] I'm in

[00:40:13] the middle

[00:40:13] of that

[00:40:14] process.

[00:40:14] It's quite

[00:40:15] an endeavor

[00:40:15] to move

[00:40:16] things over.

[00:40:17] So in

[00:40:17] the near

[00:40:18] future,

[00:40:18] I'll let

[00:40:19] you guys

[00:40:19] know where

[00:40:20] you can

[00:40:20] find the

[00:40:20] show on

[00:40:21] YouTube

[00:40:21] going

[00:40:22] forward.

[00:40:22] It just

[00:40:23] makes more

[00:40:23] sense for it

[00:40:24] to have its

[00:40:24] own channel.

[00:40:25] Anyways,

[00:40:26] Carrie

[00:40:27] says,

[00:40:27] at the end

[00:40:28] of the

[00:40:28] year,

[00:40:29] Rabbit Tech

[00:40:29] delivered what

[00:40:30] they promised

[00:40:31] and more.

[00:40:31] And what

[00:40:32] Carrie's talking

[00:40:32] about,

[00:40:33] of course,

[00:40:33] is the

[00:40:33] Rabbit R1.

[00:40:35] Which we've

[00:40:35] mocked a bit,

[00:40:36] yes.

[00:40:37] I mean,

[00:40:37] we're not

[00:40:38] alone.

[00:40:38] Everyone

[00:40:39] mocked this

[00:40:39] thing because

[00:40:41] it promised

[00:40:42] this large

[00:40:43] action model

[00:40:43] and it didn't

[00:40:44] deliver on it,

[00:40:45] especially at

[00:40:45] launch.

[00:40:46] And Carrie

[00:40:47] wanted to

[00:40:48] kind of follow

[00:40:48] up and give

[00:40:49] it its

[00:40:49] credit and

[00:40:51] say,

[00:40:51] for millions

[00:40:52] of the 1

[00:40:52] billion disabled

[00:40:53] people in

[00:40:54] the world,

[00:40:54] the smartphone

[00:40:55] is a terribly

[00:40:57] designed device.

[00:40:58] I use the

[00:40:59] R1 in my

[00:41:00] arts work,

[00:41:02] with folks

[00:41:02] with disabilities

[00:41:03] and it's been

[00:41:05] very welcomed.

[00:41:05] Where using a

[00:41:06] smartphone creates

[00:41:07] suspicion,

[00:41:08] the R1

[00:41:09] creates

[00:41:10] fascination.

[00:41:10] Very good

[00:41:11] results in

[00:41:12] educational

[00:41:12] settings.

[00:41:13] I'm not

[00:41:14] going to

[00:41:14] give my

[00:41:15] phone to

[00:41:15] a bunch of youths to use,

[00:41:17] she puts

[00:41:17] that in

[00:41:17] quotes,

[00:41:18] but the

[00:41:19] R1 gets

[00:41:19] them working

[00:41:20] and learning.

[00:41:21] I do find

[00:41:22] I use

[00:41:23] it a lot

[00:41:23] as pulling

[00:41:25] out my

[00:41:25] smartphone

[00:41:25] to do

[00:41:26] anything

[00:41:26] is just

[00:41:27] so boring.

[00:41:28] I've been

[00:41:29] in tech

[00:41:29] my whole

[00:41:30] life and

[00:41:30] smartphones

[00:41:30] are just

[00:41:31] dull,

[00:41:31] unimaginative

[00:41:32] tech now.

[00:41:34] So I think Carrie delivered a couple of really important and valuable points as far as the device goes.

[00:41:42] I think it also points to the fact that when we think of the R1, we think of the disastrous launch. I think it's pretty safe to call it a disastrous launch.

[00:41:52] There were worse. Humane was worse.

[00:41:56] But, you know, that's where it's sharing space with Humane. You probably don't want to be on that list at all.

[00:42:02] But I do want to give Rabbit some credit. In late November, Rabbit launched the Teach Mode beta, which allows users to create and instruct AI agents on digital interfaces like websites with the R1's large action model.

[00:42:16] So, essentially, with that update, they're delivering kind of on the initial promise of the device, giving users the ability to set up these large action models with the device, and they're still developing it, still building it out. I have not used the LAM, and I have the device. I need to pull it out and use it to see, you know, is it any good?

[00:42:40] Well, I think the great lesson here for us, and I thank Carrie for this, is that we shouldn't judge devices and their use and their utility only on the basis of our own use, that there are other use cases, other people, other needs, and other perspectives. So trying to keep an open mind to imagine what those other uses might be is really important. I don't think the Rabbit set out to be the device for disabled people, but if it succeeds at that, well, hallelujah.

[00:43:13] Yeah, I mean, anytime you're creating a new form factor, a new technology, whatever, there are the ways that you intended for it to be used, and then inevitably people find other ways to use it, and sometimes those ways become the selling point. As a creator, that's got to be a really interesting position to be in. Like, I created it for this thing, but actually people are using it for that, and I never thought about that, and then that becomes your business. That's so fascinating to me.

[00:43:39] I always argue, as I said when I wrote the book What Would Google Do?, which is what brought me into all this, that you're not really a platform unless people find ways to use what you've built in a way you could not have imagined, because then it becomes theirs, right? And so I think that's the case here. So thanks for finding that, Jason. That was illuminating.

[00:43:58] No, I thought it was a wonderful comment on the episode, so Carrie, thank you for dropping that on the YouTube video. And we're looking in all the different directions: if you're emailing us, if you're dropping comments on our videos, you know, if it makes sense, we'll pull it into the episode. I thought that was a really great opportunity to kind of do a check-in on the Rabbit R1.

[00:44:22] All right, I'm looking at the clock. I probably need to round this out, because I need to do some post-work on this before I can drive to the airport and start my 48 hours of no sleep.

[00:44:32] And very sore feet.

[00:44:35] Yeah, I know. I'm still kind of waffling on which shoes to wear, and, you know, I gotta go with the most comfortable.

[00:44:42] Absolutely.

[00:44:43] They aren't the most stylish. So, JeffJarvis.com is where folks can go, where everybody should go, to check out all of your work: The Web We Weave, of course, The Gutenberg Parenthesis, and Magazine, all found on your website. And y'all should do that for sure. Thank you, Jeff.

[00:45:01] Thank you, boss.

[00:45:03] Always a pleasure being able to do this show with you each and every week. I learn so much. That's jeffjarvis.com, and also aiinside.show, where you can subscribe to the podcast.

[00:45:15] Oh, I need to fix that thumbnail down there. See, I learn so many things when I go to my own website.

[00:45:19] But aiinside.show has all the ways to subscribe, all the ways to follow the show, information for watching live, and also a link to our Patreon. So if you go to patreon.com slash AI Inside Show, you can support this show, and we really appreciate it if you do that.

[00:45:37] You get ad-free versions of the show, you get a Discord community, we do some Hangouts, and you get an AI Inside t-shirt if you become an executive producer. And I'm happy to say that we have a new executive producer. We've got the ones you're familiar with at this point: Dr. Dude, Jeffrey Maricini, WPVM 103.7 in Asheville, North Carolina, and Paul Lang. And our newest executive producer: Dante St. James.

[00:46:03] Dante, thank you so much.

[00:46:04] Thank you, Dante.

[00:46:06] Great to have you on board. We love having executive producers. It really does help this show out. So thank you, everybody: patreon.com slash AI Inside Show. And really, I think that's all there is to it.

[00:46:18] I'm going to go to Vegas now, hopefully not get sick, see lots of things, and I'll have much to report on next week's episode of AI Inside. Thank you again, Jeff. Appreciate you. And we'll see y'all later.

[00:46:30] Bye, everybody.

[00:46:31] Bye.