Jason Howell and Jeff Jarvis discuss the winners and losers in AI for 2024, the persistence of AI hallucinations, the most useful AI tools for creators, and their expectations for AI development in 2025.
🔔 Support the show on Patreon!
CHAPTERS:
0:00:00 - Show begins
0:02:21 - The winners
0:02:35 - Nvidia
0:05:05 - OpenAI GPT-4o
0:08:17 - Hopfield and Hinton (Nobel Prize)
0:09:08 - AlphaFold (Nobel Prize)
0:11:22 - xAI Colossus Data Center
0:13:27 - Open Source AI
0:17:25 - Anthropic
0:18:10 - OpenAI's ongoing soap opera
0:21:43 - The losers
0:21:59 - Intel funding has slowed
0:23:33 - Google/Gemini (woke, "glued pizza")
0:25:58 - Persistence of hallucinations
0:26:42 - Scarlett Johansson and OpenAI
0:27:48 - Microsoft Recall
0:29:05 - Big Tech stock decline regarding AI
0:29:49 - Stability losing Mostaque, Inflection AI losing Suleyman
0:30:28 - Devices of the year
0:31:10 - Rabbit R1
0:32:57 - Humane AI Pin
0:34:20 - IYO One
0:37:12 - Ive/Altman hardware rumor
0:38:24 - Cool tools
0:38:41 - NotebookLM
0:41:42 - RunwayML
0:42:00 - Udio and Suno
0:42:33 - Perplexity Pro
0:43:18 - GPT-4o, Sora, Ideogram
Learn more about your ad choices. Visit megaphone.fm/adchoices
[00:00:00] This is AI Inside episode 49, recorded November 22nd for January 1st, 2025. Year in Review, part 2.
[00:00:11] This episode of AI Inside is made possible by our amazing patrons at patreon.com slash AI Inside show.
[00:00:17] If you like what you hear, head on over and support us directly. And thank you for making independent podcasting possible.
[00:00:27] Hello, everybody, and welcome to yet another episode of AI Inside, the show where we take a look at the AI that is layered throughout so much of the world.
[00:00:34] I'm one of your hosts, Jason Howell, joined by Jeff Jarvis. How you doing, Jeff?
[00:00:39] I gotta figure out how do I point to Jason that way.
[00:00:42] Yeah, you got it.
[00:00:42] It just makes no sense.
[00:00:43] Yeah, okay.
[00:00:45] You got it. Good to see you. Of course, we did, you know, pulling back the curtain, we did just record part one, which y'all heard and watched last week.
[00:00:54] This week, it's a new part of our kind of retrospective look at the year in artificial intelligence of 2024 to see kind of where we've come.
[00:01:06] And kind of along the way, we're also kind of looking at where things are headed. It's just kind of part of the conversations that we're having.
[00:01:13] And it's a year for the show, and it's the end of the year, and it surprised us both. We thought, maybe we can stretch out one episode to create this, and we thought, no, it's two, because there's so much to talk about.
[00:01:26] Could you imagine if we had tried to do all of this in one episode, though, Jeff? We probably would have said like five things for each story. It would have just been like, and that happened.
[00:01:34] Next, next.
[00:01:35] Oh, and that happened. Oh, and that happened, you know?
[00:01:38] So this actually-
[00:01:38] I'm at a distinct advantage because I'm a fast-talking New Yorker. I can get more words in than you can. You're a slow-moving Californian.
[00:01:46] Yeah, it's true. A little too relaxed sometimes when it comes to that.
[00:01:50] Real quick, before we get started, thank you to everybody who supports us on Patreon, patreon.com slash AIinsideshow.
[00:01:57] BV Muir is one of our patrons, and we appreciate you. So thank you for being with us and giving us a little support.
[00:02:04] And with that being said, why don't we get into this week's kind of aspect or episode looking at the year of 2024 and AI?
[00:02:15] And I think we'll start by looking at the winners and losers. Maybe focus a little bit on the winners, you know, the ones who really have benefited this year, because there are definitely some standouts.
[00:02:27] And I just listed a couple, but I'm sure there's more. But I think probably has to be at the top of the conversation is NVIDIA.
[00:02:37] Man, NVIDIA, just when we talk about the backbone of so much of what is happening in the world of AI, NVIDIA time and time again throughout this year has been just kind of the recurring story of, oh, yeah, and of course NVIDIA was there.
[00:02:52] It's almost like NVIDIA is always in the room where it happened.
[00:02:57] NVIDIA just doubled its profit, and the market was disappointed, and it lost a percent.
[00:03:03] I don't understand how that works, man.
[00:03:05] I don't either. And it's amazing.
[00:03:09] And they had a speed bump. Their new chip was overheating. They got to figure that out.
[00:03:14] But even the market didn't go crazy with that. They figured they'd figure it out.
[00:03:18] Yeah.
[00:03:19] It's supply and demand. They can't make the chips fast enough.
[00:03:24] I think one of the big questions here is how much competition will come in, because everybody else is trying to build equal chips.
[00:03:32] Google's trying it. Amazon's trying it. Apple's trying it. So we'll see what happens.
[00:03:36] But NVIDIA is still, right now, the clear winner.
[00:03:43] Yeah, yeah. Really interesting. I do think that NVIDIA, I mean, I say this, not having a full understanding of how the chips market works.
[00:03:53] You know, I know some people really follow it closely.
[00:03:55] But it just makes a lot of sense to me that companies like Google would have a lot of interest in owning the stack, as it were, and being able to do it on their own terms.
[00:04:09] And I think some companies are just more equipped to do that than others.
[00:04:13] But I think I'm curious to know how long NVIDIA's kind of tie into all of this lasts before companies are, you know, do decide that they want to kind of command it entirely and be, you know, be in command of their fate, as it were.
[00:04:33] But maybe that won't happen. Maybe that's just not how the chips market works.
[00:04:36] You know, NVIDIA has their scale. They have their partnerships. They have all of their development that powers, you know, and enables them to do what they do so well.
[00:04:46] Maybe that's good enough for other companies.
[00:04:48] Yeah, that's fascinating that they're also, it's a B2C, B2B kind of conundrum here where they're creating competition for their own customers and their models they create.
[00:04:58] But they can do it because everybody needs them.
[00:05:00] Yep. Everybody wants you.
[00:05:03] Ooh. OpenAI GPT-4o.
[00:05:08] I think this was one of those releases that really, I mean, there were so many pieces of news coming from OpenAI.
[00:05:17] And actually, we kind of have a little separate section, which we'll get to in a moment, about OpenAI.
[00:05:22] But I think GPT-4o was one of the many pieces of news coming from the company that really captured people's imaginations around what these are capable of.
[00:05:34] Of course, there was the voice mode, which was demonstrated at one of their events that had a lot of controversy, which we'll definitely talk about.
[00:05:42] But also, again, captured imagination really made people go, whoa, I didn't know that we were going to see that so soon.
[00:05:49] That kind of interactive quality really caught people, I think, a little bit off guard as far as the quality of what they were able to put out.
[00:05:58] Yes, but I think they touted that it could do reasoning.
[00:06:02] And I don't think I've heard an outcry of people saying, wow, it really can.
[00:06:06] I think that was oversold.
[00:06:09] And then the other thing that we've talked about a couple times is that all of them are now admitting that they're hitting the wall, bit of a ceiling, in terms of the speed of progress.
[00:06:22] They're making progress.
[00:06:23] 4o is neat in all kinds of ways.
[00:06:26] It can do lots of new things.
[00:06:27] I take nothing away from them for that.
[00:06:30] But it's not, once again, it's not AI or AGI around the corner, folks.
[00:06:35] It's not as if this is a hockey stick toward false humanity.
[00:06:42] So they won.
[00:06:43] They did well.
[00:06:45] But I think there's a metering of expectations now.
[00:06:50] Yeah, yeah.
[00:06:51] That is true.
[00:06:54] Probably from a user standpoint, people who use all of these systems, the kind of general purpose, GPTs and LLMs and all that kind of stuff.
[00:07:08] 4o is something that the users of these systems are probably pretty impressed by, I guess, based on what I've seen anyways.
[00:07:20] And I've used it a little bit.
[00:07:22] But they all kind of start to blend together a little bit, though, for me.
[00:07:26] And maybe part of the reason for that is that I use Perplexity, which actually pulls in all these different systems.
[00:07:32] So sometimes it's kind of difficult for me to really know what is what and what is good versus not because it's kind of a big soup.
[00:07:41] But I know there's been a lot of positive kind of reaction to what 4o is capable of.
[00:07:47] The reasoning aspect of it, I haven't had a whole lot of interaction with that, so I can't speak to that personally.
[00:07:53] Well, and to the sameness, let's remember, it all comes back to the parent here, which is Google's own research that opened this up.
[00:08:01] And when you think about it, the fact that that research was made open and that everybody could learn from it and build stuff around it is a pretty impressive thing.
[00:08:10] For sure.
[00:08:12] It's beyond Bell Labs, I think, in a way.
[00:08:15] Yeah, yeah.
[00:08:17] I put this in here because if we're talking about winners, I mean, they legitimately won the Nobel Prize in physics.
[00:08:24] Prize winners, yeah.
[00:08:25] John Hopfield, Jeffrey Hinton, both won the 2024 Nobel Prize in physics for work that they were doing in artificial intelligence.
[00:08:37] Hopfield created a structure that can store and reconstruct information.
[00:08:42] Hinton invented a method that can independently discover properties in data.
[00:08:48] And, you know, is this the first?
[00:08:51] No, it's probably not the first Nobel Prize related to AI.
[00:08:55] It's just a random question, but I don't know.
[00:08:59] I don't know the answer to that.
[00:09:00] I don't know.
[00:09:00] It's probably a good question.
[00:09:02] But, you know, they're winners.
[00:09:04] Also a winner of Nobel Prize, AlphaFold 3, which we, at least at the time of this recording, talked about not too long ago.
[00:09:12] I believe that's Google, right?
[00:09:14] Google AlphaFold.
[00:09:17] Am I right on that?
[00:09:18] Yeah.
[00:09:19] Part of DeepMind.
[00:09:20] Yeah.
[00:09:20] Yep.
[00:09:21] And that's all about, you know, DNA, RNA, predicting the structure of proteins, some big advancements there.
[00:09:32] I mean, in health in general, I just, that continues to be something that's really interesting to me.
[00:09:38] And I could just see how AI really improves or potentially improves kind of the development of knowledge around health.
[00:09:48] Yeah.
[00:09:49] When I went to that first World Economic Forum event in San Francisco, somebody put forward a framework that I quite liked, which I mentioned on the show during the year.
[00:09:56] But again, it's a year in review, so I'll mention it again.
[00:09:59] You can.
[00:09:59] Yeah.
[00:10:00] It's that this kind of technology can – I've got to remember it.
[00:10:08] It raises the floor.
[00:10:11] It makes it easier for people to do things that they otherwise couldn't have done.
[00:10:15] It enables scale.
[00:10:16] We can do more of the same thing than we could before.
[00:10:19] Or it raises the ceiling.
[00:10:21] It lets us do things that we otherwise could not have done at all.
[00:10:28] And so that's AlphaFold.
[00:10:30] That's protein folding.
[00:10:32] What all of the scientists I saw at that event, they just said, there's no way we could have done this at all without this machine.
[00:10:39] And I think that's the more interesting part of all this.
[00:10:42] We're all fascinated by the stuff that helps us do things that we couldn't do.
[00:10:46] Gee, I couldn't draw.
[00:10:47] Now I can draw as an individual.
[00:10:49] Or gee, I can write 10 more things.
[00:10:52] With the time it took me to write one thing, I can scale.
[00:10:54] But what's more interesting than anything else is I think it's that highest level where it helps us do things that mankind, humanity, couldn't yet do or couldn't yet do easily.
[00:11:06] That's where I think the real power of this is.
[00:11:08] And it's so theoretical and so beyond my ken, I'll never fully understand the power of that accomplishment.
[00:11:17] But those are winners.
[00:11:18] Yeah.
[00:11:19] Yeah.
[00:11:19] Indeed.
[00:11:20] Indeed.
[00:11:20] Indeed.
[00:11:21] How do you feel about – I put this in here knowing there might be some contention on it, but the xAI Colossus Data Center built in 122 days.
[00:11:31] Do we trust that?
[00:11:34] And if so, I mean, that seems like a big accomplishment if it's actually true.
[00:11:40] Well, I don't know, Jason.
[00:11:41] I'm going to get in my Hyperloop tube and I'll see you in about an hour and a half.
[00:11:45] And then we can have a real discussion about this in your den there.
[00:11:48] All right.
[00:11:49] Cool.
[00:11:50] Yeah.
[00:11:50] Yeah.
[00:11:52] No.
[00:11:53] But I have to admit that just this moment, I realized – because I got a blue check I didn't ask for on Twitter.
[00:12:00] So thus, I have access to Grok, which I forgot that I had.
[00:12:03] I had never used it.
[00:12:04] So for the very first time while we're on the show, I went into Grok, which is his AI, his LLM.
[00:12:11] And I asked the normal first question, who am I?
[00:12:14] Who's Jeff Jarvis?
[00:12:16] And it's good.
[00:12:18] It's pretty good.
[00:12:20] Okay.
[00:12:21] So I've got to play with Grok and see because my reflex is to make fun of it and to say, no, he just BSs us.
[00:12:30] But he does send rockets into space.
[00:12:32] You know, you got to give him that.
[00:12:33] Yeah.
[00:12:35] And so I'm going to play with this more to see what he's really doing with it.
[00:12:40] I don't know.
[00:12:41] Where the real value is going to come from?
[00:12:43] I don't know.
[00:12:44] But he can get money to do anything he wants.
[00:12:46] And so that's a case where he has.
[00:12:48] Indeed.
[00:12:49] I have not used Grok.
[00:12:51] I do not have access to it.
[00:12:53] So I'm curious as you interact with it.
[00:12:57] You know, this kind of touches a little bit of what we were just talking about, the sameness between these things, though.
[00:13:03] You know, sometimes it can be really – like they all do very similar things.
[00:13:07] It really takes a lot of interaction with them to really understand which ones are good for which certain things versus others and stuff.
[00:13:16] True.
[00:13:16] But as you play around with it, I'll be curious to hear kind of what you think of the Grok.
[00:13:21] Yep.
[00:13:21] It's not even on my radar because I'm not – you know, I don't pay for Twitter or anything like that.
[00:13:26] I don't either.
[00:13:27] Yeah.
[00:13:27] The big winner-loser question that I think is yet to be determined is open source or not.
[00:13:33] Mm-hmm.
[00:13:34] And I think that'll be fascinating to watch, whereas I've been arguing that open source is vital.
[00:13:44] It's important.
[00:13:45] Yeah.
[00:13:46] And that Meta's strategy is like an Android strategy, and you know Android better than anybody I know, where they can scale and take over much of the world and compete with and devalue competitors by making things open.
[00:14:01] And again, it's not really open source.
[00:14:03] What is it we call it, Jason?
[00:14:04] We call it –
[00:14:05] Open-ish.
[00:14:06] Open-ish.
[00:14:06] Open-ish AI.
[00:14:08] Because it doesn't meet all the requirements of open source, but it is openly available.
[00:14:12] Right.
[00:14:12] And people can use it.
[00:14:13] And I was talking to somebody at my university where they – you know, it's a real benefit to a university where you can take a model and put it in to computer – to compute power at a university and computer scientists can play with it.
[00:14:23] That matters greatly.
[00:14:25] Also companies, all kinds of other things are built on top of it.
[00:14:27] And so I think Meta – I think I've long said on the show – is a dark horse here in the AI wars.
[00:14:36] What do you think?
[00:14:37] Did open source make a difference there or not?
[00:14:39] Well, I think it's really interesting from the perspective of when – you know, often when I've thought of open source in AI, I've often thought of it as behind.
[00:14:52] Behind when it comes – you know, behind some of the frontier kind of major models that are touted by everyone else that everybody knows much better because they're easier to use.
[00:15:03] They're more accessible.
[00:15:05] All that kind of stuff.
[00:15:06] And not to mention they've got the financial – you know, this massive financial backing, OpenAI, that sort of stuff.
[00:15:11] And so often when I think of open source AI, I've thought like, okay, it's wider – it's less specific to those who have the money to use them, but it's not as powerful.
[00:15:27] And I think what's interesting to me is that that gap is narrowing.
[00:15:32] And, you know, and I'm probably wrong.
[00:15:35] It's probably just an assumption that I have about it that is misinformed.
[00:15:39] So some of you out there are screaming like, what the hell is he talking about?
[00:15:44] That's just kind of the perception that I've had of open source AI until recently.
[00:15:49] And I feel like more and more based on especially the work that Meta is doing, these open-ish AI systems are getting – are closing that gap.
[00:16:01] And so, yeah, I think there's a lot to be gained there.
[00:16:05] I'm super curious to see how that develops over the course of the next couple of years and what that means for Meta as a business really kind of leaning into that aspect of things.
[00:16:20] Does that end up being a real winner for their approach to their place in the world of AI?
[00:16:26] I would guess, like you, that it probably is going to be a really big benefit to them using – you know, just using Android as kind of the model.
[00:16:36] I realize they're different, but in some ways it's kind of a similar story, and I'm super curious to see what that means.
[00:16:42] Yeah, and also I think the other thing is that Meta – Google has the most uses for AI.
[00:16:48] It has been using AI for a very long time.
[00:16:51] It's been unsung in that and accused of being behind when I actually think it's way ahead.
[00:16:58] Obviously, we use it in maps.
[00:17:00] We use it in translation like crazy.
[00:17:02] That was really one of the first breakthroughs was to realize a different model for translation and so much of – and ad placement and so much of what it does.
[00:17:11] Because Meta has its own uses, you know, we'll get to the Ray-Ban glasses and AR, VR, and we'll see where that goes.
[00:17:21] Yeah, yeah, indeed.
[00:17:26] Anthropic?
[00:17:27] I don't know.
[00:17:28] Do you think they're kind of in a winner position?
[00:17:32] You know, Claude's been doing well.
[00:17:36] Their agents, you know, got some positive attention.
[00:17:39] It's certainly from a revenue growth perspective.
[00:17:42] Anthropic seems to be doing okay.
[00:17:44] I mean, I don't know.
[00:17:46] I don't know.
[00:17:46] I think that they're trying to be the – they act like they're the safety company, but again, that's the doomer safety definition.
[00:17:54] They have the open constitution to determine how they work, but that's kind of mushy.
[00:18:01] Yeah.
[00:18:01] I'm not sure what I think of Anthropic.
[00:18:04] I think we'll have to wait and see.
[00:18:05] I wasn't sure exactly because we have this whole little block of OpenAI, and I think OpenAI's year has been a little bit in the winner category, a little bit in the loser category.
[00:18:20] I think net overall, I think they're doing okay for themselves, for their business.
[00:18:25] But, man, it's just been a whole lot of drama and, you know –
[00:18:32] The soap opera.
[00:18:32] It's a bit of soap opera.
[00:18:33] It's been a soap opera.
[00:18:35] If you just really quickly go through what happened in this year with OpenAI, right?
[00:18:38] Sam gets fired, disappears, launches a palace coup.
[00:18:44] So there's a palace coup against him.
[00:18:45] He launches a palace coup in return and returns.
[00:18:49] All kinds of big money people got involved in all that.
[00:18:52] Lots of discussion about – I'm using air quotes now – safety and what it means and people being afraid of what he was doing with safety and not.
[00:19:00] There was all of that.
[00:19:02] Then there's the Musk fights, and we learned more about that recently with emails coming out from 2018, right?
[00:19:09] Yeah, like '15 to '18.
[00:19:12] Yeah, exactly.
[00:19:13] But that continues with Musk suing both OpenAI and Microsoft.
[00:19:18] So that's more drama.
[00:19:20] Then we have – as part of that, the fundamental structure of the company, is it not-for-profit or for-profit?
[00:19:28] And it's going to go be for-profit, which I think is more honest.
[00:19:32] But that's going to cause all kinds of issues in how it negotiates its cap table with the likes of Microsoft.
[00:19:37] And so there's all kinds of stuff going on there.
[00:19:40] Then, as you point out here, there's the disbanding of the internal safety research group and all those folks left.
[00:19:47] So we go back once again to this question of what is safety and who is enforcing it and what do we believe them or not.
[00:19:54] And meanwhile, they've gotten tons and tons of money.
[00:19:56] And they've come up with new models.
[00:19:58] And they're still the most PR-benefited AI company out there, I think.
[00:20:05] Absolutely.
[00:20:05] Absolutely.
[00:20:06] As I said before, Google, I think, does a lot that it doesn't get credit for.
[00:20:09] Meta is doing a lot with that.
[00:20:12] There's – Anthropic is doing neat stuff.
[00:20:14] Perplexity is doing neat stuff.
[00:20:16] There's other companies out there.
[00:20:17] But just as Google became synonymous with the internet, OpenAI is becoming synonymous with AI.
[00:20:24] 100%.
[00:20:25] And you're seeing – yeah, you're seeing the Altman name attached to so many different things, obviously.
[00:20:33] Of course they want to switch to a for-profit structure because they're already making – on the precipice of making a lot of money.
[00:20:44] And once they switch over there, yeah, it's going to be really interesting to see how – I'm curious to see how OpenAI and the new incoming administration here in the U.S., like how these things start to kind of converge and come together.
[00:21:01] There was something that I was listening to on the radio, and I wish I paid a little bit closer attention to it.
[00:21:05] But here on a local level – and maybe this is in San Francisco – but then suddenly Sam Altman is thrown out as an advisor on that as well because in SF they're saying, you know, we need to embrace AI on a deeper level.
[00:21:21] And so we've invited Sam Altman to be part of the board of this, you know, kind of local effort.
[00:21:27] And yes, exactly.
[00:21:28] And it's just like, man, this is –
[00:21:30] He's everywhere.
[00:21:31] He's such the celebrity, you know, in the world of AI and tech.
[00:21:34] And he's having a baby.
[00:21:36] Oh.
[00:21:38] Well, there you go.
[00:21:38] That too.
[00:21:39] I had no idea.
[00:21:40] Yeah.
[00:21:41] I guess I had missed that.
[00:21:44] Losers, or I guess we're kind of intermixing them.
[00:21:47] These are just a bunch of random things I put in here.
[00:21:49] Tough moments.
[00:21:51] Yeah, tough moments.
[00:21:54] Intel definitely had kind of a difficult year when it comes to AI.
[00:21:59] Definitely the forgotten company, I think, to a certain degree.
[00:22:03] Their funding has slowed when it comes to AI, and that's hurt them a little bit.
[00:22:10] They just can't possibly compete with NVIDIA.
[00:22:14] They lost out in the whole graphic chip world.
[00:22:18] They just didn't get fast enough.
[00:22:20] Management is not where it should be.
[00:22:22] They're thinking of selling off some pieces.
[00:22:24] There's questions – I don't think there's questions of survival per se, but I think survival in its present form is questionable.
[00:22:31] And so, yeah, Intel owned the world, and what happens?
[00:22:35] It's been interesting how fast things change.
[00:22:37] In that world, I guess it's not surprising because technology changes so fast.
[00:22:40] We know that.
[00:22:41] That's acceptable.
[00:22:41] You have a company like IBM that did manage to keep up and change itself radically through time from making mainframes to making minis to making PCs to becoming a service company and a consulting company, and it's managed to still be IBM.
[00:23:00] Whereas, you know, Hewlett-Packard is just a company we all curse when we try to get a printer installed.
[00:23:05] And, you know, it was a huge company.
[00:23:07] And so we could go on and on, and I'm afraid that Intel is going to be a Honeywell, Hewlett-Packard name in the future.
[00:23:15] Yeah, that's kind of the vibe I'm getting to, for sure.
[00:23:17] Which is sad, in a way, because they did some amazing stuff for the day.
[00:23:21] But, yeah.
[00:23:22] Nope, loser.
[00:23:23] That's how the technology world moves.
[00:23:28] Someday we might be saying the same about Google, let's just say.
[00:23:32] Google Gemini, of course, you know, in the wake of the woke Gemini, the glued pizza.
[00:23:41] Yeah, there were the black founders of the nation.
[00:23:44] Though I don't think that was bad.
[00:23:47] I think that was what we should have imagined the nation to be.
[00:23:52] And it wasn't.
[00:23:53] Yeah.
[00:23:53] I think it showed what it could be.
[00:23:54] Because by telling the machine that you can't be biased, ergo, when it presented people who were founders of the nation, it said, okay, we're going to have some equity here.
[00:24:06] If only the nation had, we wouldn't be at the trouble we're in right now.
[00:24:09] So I don't think that's so bad.
[00:24:11] I don't think that's so bad.
[00:24:12] That's true.
[00:24:13] But was it trying to represent as fact that?
[00:24:16] No.
[00:24:16] Like that?
[00:24:17] It was trying to assert an alternative reality.
[00:24:23] Got it.
[00:24:24] However, on the other hand, there's the glued pizza.
[00:24:29] Which, you know, I've never tried glued pizza, so I don't know.
[00:24:34] Maybe it's good.
[00:24:35] I think if you were like a first grader, you might like it because you're used to the taste of glue.
[00:24:41] Can I have pepperoni and glue on that?
[00:24:44] But that was a good illustration, at least, of the issue of sourcing.
[00:24:49] Everybody said, how could it make up something stupid like that?
[00:24:51] Because obviously, we figured out very quickly that it had read documentation of how food set dressers operate.
[00:25:00] Right.
[00:25:01] Yeah.
[00:25:01] And if the question was how to keep the cheese on the pizza, well, you glue it.
[00:25:06] Yeah.
[00:25:06] Because that's what they do.
[00:25:08] And it makes perfect sense.
[00:25:09] But to tie – because again, AI has no sense of reality that we don't eat glue unless we're six years old.
[00:25:14] Yeah.
[00:25:14] It doesn't know, apparently, in that context.
[00:25:17] But, you know, it knows that it found the information out there and it has something to do with it.
[00:25:21] It did.
[00:25:21] Yep.
[00:25:22] Yeah.
[00:25:23] Yeah.
[00:25:23] That's interesting.
[00:25:24] And that just reminds me – not related to AI, but it just reminds me of, you know, when I was a young kid and my world kind of colliding.
[00:25:34] Or not my world's colliding, but the harsh reality of the fact that what you see on TV isn't real, realizing that when you see a cereal commercial and it's got this, like, gloopy milk that it's resting in and everything.
[00:25:48] Yeah.
[00:25:48] That's not actually milk.
[00:25:50] That's glue.
[00:25:51] It's like so.
[00:25:52] Right.
[00:25:52] Right.
[00:25:52] Yep.
[00:25:53] It's true.
[00:25:53] It does happen.
[00:25:54] It's just not the kind of pizza that you'd want to eat.
[00:25:57] The persistence of hallucinations.
[00:25:59] I put this in here just because I think it's always going to be in there, which is –
[00:26:03] It's an ongoing problem.
[00:26:04] Yep.
[00:26:05] It's an ongoing problem, and I think it's a problem that people are hopeful will get solved, and I'm just not convinced that it'll ever get solved.
[00:26:16] Maybe it'll get reduced to a point to where it matters far less, but there will always be the need for humans to make sure that they're checking in on these things and make sure that they're given the information that you actually want
[00:26:31] and that you can actually trust.
[00:26:34] So I think hallucinations, that's an ongoing loser situation when it comes to AI.
[00:26:40] Not getting rid of them.
[00:26:41] Yep.
[00:26:42] Nope.
[00:26:44] Scarlett Johansson and OpenAI.
[00:26:45] I realize we did the OpenAI thing, but we didn't mention that.
[00:26:48] That was kind of a big moment.
[00:26:52] It forced them to change their product.
[00:26:56] I think all signs seemed to point that there was intention to get Scarlett Johansson on board.
[00:27:02] Yeah.
[00:27:03] There was a liking by Altman of the movie that she was in.
[00:27:08] What was it?
[00:27:09] Her?
[00:27:10] Yep.
[00:27:10] She was the voice of the chatbot in her.
[00:27:12] So there were all these signs that pointed to, all right, there was intention to replicate.
[00:27:16] So they ended up removing the voice that was tied to her personality and I don't know.
[00:27:24] Well, it's about the hubris of these technology companies.
[00:27:28] For sure.
[00:27:29] And I think a lot of people cheer that Scarlett – ScarJo – Scarlett Johansson won over big, huge, powerful OpenAI.
[00:27:43] And I think that makes everybody feel good.
[00:27:44] Yeah.
[00:27:45] Yeah.
[00:27:45] Yeah.
[00:27:46] Indeed.
[00:27:46] Indeed.
[00:27:48] Microsoft Recall, which was an interesting concept.
[00:27:51] Did they end up rolling that out?
[00:27:54] I know that they had stopped it, stopped with the plans.
[00:27:59] I think it might have been a soft rollout, but yeah, it ended pretty quickly.
[00:28:04] Preview will initially be available for Windows 11 laptops.
[00:28:08] Okay.
[00:28:08] So Microsoft begins rolling out Recall feature to developers.
[00:28:12] This was as of November 22nd, which is today, the time of the recording.
[00:28:17] So Recall isn't completely dead in the water.
[00:28:21] But when it was announced by Microsoft and touted as this amazing thing where the AI will essentially kind of monitor how you're using your Windows PC and make it all searchable and interactive and everything like that.
[00:28:36] And people freaked out about that.
[00:28:38] It turns out people don't like things tracking how they use their computer.
[00:28:42] Sure.
[00:28:42] And we have the others who are coming out with things that are going to Google has one where it's going to know what is on your screen and do things on it.
[00:28:50] And this is this goes back to the agent question.
[00:28:52] The trust isn't only whether it'll get it right.
[00:28:55] The trust is whether you think that it will do right by your data.
[00:28:58] Yeah.
[00:29:00] And I think there's going to be a lot of worry there.
[00:29:04] Yeah, for sure.
[00:29:06] And just a few others real quick.
[00:29:08] You know, Amazon, Microsoft, Alphabet, stock declining, even though they're AI, you know, their systems in different ways doing well.
[00:29:18] I can't really speak very closely to that.
[00:29:21] I think the basic question there is people are waking up to realize that we're not really certain what the revenue model is and the business model is for AI.
[00:29:30] Yeah, okay.
[00:29:31] That's how you wrap it.
[00:29:32] And so people are getting cautious, I think.
[00:29:35] They're doing amazing things.
[00:29:38] But the only one you know is making money right now is NVIDIA because people have to buy their chips.
[00:29:43] Right.
[00:29:44] Right, right.
[00:29:47] Mostaque's departure from Stability.
[00:29:50] And then, of course, also along those lines, Inflection AI losing Suleyman, kind of leaving both companies to figure their way, I suppose, without their marquee CEOs.
[00:30:03] Yeah, it's kind of sad, but I'm surprised we haven't seen more failures, as is the wont in the venture world.
[00:30:11] Indeed.
[00:30:12] Indeed.
[00:30:12] Indeed.
[00:30:13] We are going to take a quick break.
[00:30:15] And then when we come back, we're going to talk about some devices, some tools and picks and that sort of stuff.
[00:30:21] And that's coming up here in a moment.
[00:30:25] All right, let's talk devices because this was an interesting year for devices, although I think at the end of the year, I think my sentiment on AI devices hasn't changed very much, which is I still don't know why I need one of these when I've got one of these.
[00:30:45] For those of you listening, Jason just held up his rabbit and his phone.
[00:30:51] Yes. And I think that at the end of the day, like I think that'll get figured out.
[00:30:56] Like I have to imagine there will be a device that is like, OK, now it makes sense.
[00:31:01] Maybe it's the meta Ray-Bans, you know, as as like an example of what it could be, what a successful device like that could be.
[00:31:08] But the Rabbit R1 came out to spectacular failure.
[00:31:14] And they continue to iterate on this, by the way, they do.
[00:31:17] They have continued to develop around the large action model and they have the large action model playground and stuff like that.
[00:31:24] They're they're making announcements and everything.
[00:31:26] But I just don't know how much they're, you know, kind of recovering any of the initial interest and excitement that it had before it was actually released.
[00:31:39] So I'm looking... and get this, you can still buy them.
[00:31:43] One hundred ninety nine dollars.
[00:31:46] Free shipping.
[00:31:47] Black Friday deal.
[00:31:49] OK.
[00:31:50] I mean, it's a it's a neat looking device.
[00:31:54] When was the last time you turned it on?
[00:31:57] I couldn't even tell you.
[00:32:00] Yeah, I mean, literally at this point, you know, and I've said it many times on the show, like when I bought it, it was really more about the Perplexity inclusion, because I... sorry.
[00:32:13] Excuse me.
[00:32:13] I had already decided that I wanted to check out Perplexity.
[00:32:17] And then when this came out, it had like a buy one and you get a full year of Perplexity included.
[00:32:22] So I was like, oh, OK, so then I'll get Perplexity and I'll get this as a bonus.
[00:32:26] So I almost got it as a kind of the opposite of what they intended.
[00:32:30] And any of the times that I've used it, it just you know, the utility just isn't there for me.
[00:32:35] Now, granted, I have not fired it up since they've started doing the large action model updates around the playground and and that sort of stuff.
[00:32:44] So maybe then it becomes a little bit more useful.
[00:32:48] But I don't know from a distance.
[00:32:50] I'm not entirely convinced.
[00:32:51] But even worse, even worse, ladies and gentlemen, was the poor Humane.
[00:32:57] Oh, poor Humane.
[00:32:59] And then I don't even know what's going on with Humane at this point.
[00:33:02] I think they're trying to market themselves as a platform company as opposed to a hardware company, which was totally their plan from the beginning.
[00:33:13] But, yeah, the AI pin just got completely skewered.
[00:33:17] And I don't know, it's just like looking at the hardware itself.
[00:33:21] Like, I guess it's kind of neat that you could, you know, that you could project the control surface onto your hand and everything.
[00:33:28] But it just looks big and chunky.
[00:33:31] And it's not what I think of.
[00:33:33] It has something neat, right?
[00:33:35] It could project the stuff.
[00:33:36] That's cool.
[00:33:36] It's different.
[00:33:37] OK, it's a different user interface.
[00:33:39] You wear it so it knows stuff.
[00:33:40] OK, that that's an interesting thing to consider.
[00:33:42] So the hype level was high.
[00:33:45] Thus, the cliff to go off was was much steeper.
[00:33:49] You can still buy it.
[00:33:50] You can buy one for four hundred ninety nine dollars.
[00:33:53] You get the first month.
[00:33:54] They did a two hundred dollar discount a couple of weeks ago at the time of the recording.
[00:33:58] Right.
[00:33:58] You get a first month free.
[00:33:59] But the Humane plan is twenty-four dollars a month.
[00:34:03] For what?
[00:34:05] For what?
[00:34:05] I would love to know how many people are actually using that on a regular basis.
[00:34:09] Well, they got tons of returns out in the wild.
[00:34:12] Yeah, I have neither.
[00:34:13] Yeah.
[00:34:13] Big time.
[00:34:14] Big time.
[00:34:15] So Humane, loser.
[00:34:18] Yeah.
[00:34:19] Spectacular failure there.
[00:34:21] Curious to see what that leads to.
[00:34:23] If their platform play gets any sort of pickup.
[00:34:27] Yep.
[00:34:28] IYO One.
[00:34:29] I remember we talked about this at one point.
[00:34:32] Which one is this?
[00:34:33] This is that was the earbuds.
[00:34:36] Yes.
[00:34:37] They were not cheap.
[00:34:38] This was they're not out yet.
[00:34:41] They're still a preorder deposit.
[00:34:43] OK, still "reserve yours now."
[00:34:46] Yep.
[00:34:46] No screen.
[00:34:47] Just a big puck that you place into your ear.
[00:34:52] And the, the Pro version of it is six hundred and
[00:34:57] fifty dollars.
[00:35:00] Wow.
[00:35:00] The IYO One preorder deposit is fifty-nine dollars.
[00:35:04] I don't know how much it costs.
[00:35:05] They should tell you.
[00:35:07] Dang.
[00:35:09] With Wi-Fi, that's going to be five ninety-nine.
[00:35:14] With LTE, that's going to be six ninety-nine.
[00:35:19] So, and is it... are you really going to get... Leo still had on... Leo Laporte still had on order
[00:35:24] those glasses.
[00:35:26] I forget what they were called.
[00:35:27] But the two round glasses with the round bit here.
[00:35:30] And he ordered them right away.
[00:35:32] And they have yet to come.
[00:35:34] Yeah.
[00:35:35] You're taking a chance.
[00:35:36] I had ordered.
[00:35:39] Let's see here.
[00:35:40] Is it this one?
[00:35:42] I or.
[00:35:43] Yeah.
[00:35:44] OK.
[00:35:44] So I think we talked about it at one point.
[00:35:47] The Limitless AI.
[00:35:49] It's the.
[00:35:50] Oh, yeah.
[00:35:51] You ordered that?
[00:35:52] Wearable pendant that kind of, you know, you just keep clipped on to you.
[00:35:56] And it's just all about like a microphone that takes in your ambient noise.
[00:36:02] Right.
[00:36:02] And allows you to kind of go back in time later and, you know, summarize and document and stuff
[00:36:09] like that.
[00:36:09] And I mean, I ordered that.
[00:36:10] I want to say it was it was not that expensive, but it was months ago at this point and still
[00:36:15] on preorder.
[00:36:17] And.
[00:36:19] Yeah.
[00:36:20] OK.
[00:36:20] Free access to your data for.
[00:36:21] OK.
[00:36:22] I didn't realize there was even a yearly cost to it, but that's...
[00:36:26] Oh, $19 per month.
[00:36:28] Per month.
[00:36:28] Oh, ouch.
[00:36:29] Jeez.
[00:36:30] I just don't know.
[00:36:31] Like this stuff's going to add up.
[00:36:32] So anyways, I've got the Limitless coming at some point.
[00:36:35] I think, all in all, you know, we've seen, time and time again, these AI pieces of hardware.
[00:36:42] Really, the only one that exists right now where I feel like there's been any consensus
[00:36:48] of a positive outlook has been the Meta Ray-Ban.
[00:36:51] Which, again, is a Meta surprise, right?
[00:36:55] And for Meta, you know, it's wasted all kinds of money, I think, on VR.
[00:36:59] Yeah.
[00:37:00] Everyone I've known, talked to, everything I've seen about the Ray-Ban is positive.
[00:37:06] It's pretty positive.
[00:37:07] Yeah, totally.
[00:37:08] And then we've had the news about the Johnny Ive, Sam Altman device.
[00:37:13] Of course, that didn't come out this year.
[00:37:15] At least at the time of this recording, anything could happen in the next month before this is
[00:37:19] released.
[00:37:19] But I highly doubt it.
[00:37:21] Yeah.
[00:37:21] But I am curious, you know, through partnerships like that and just the kind of ongoing expansion
[00:37:30] and development of AI as a technology, like I have to imagine at some point someone gets
[00:37:37] it right or someone comes up or is able to visualize or figure out the thing that an AI
[00:37:43] hardware can do that you can't already do with your phone or how it does it better or
[00:37:48] whatever the case may be.
[00:37:50] And I don't think we're there.
[00:37:51] Maybe it is glasses.
[00:37:52] I don't know.
[00:37:52] Because AI is still amorphous.
[00:37:54] It's what does it mean to have access to AI?
[00:37:58] And again, you do it every time you do translation.
[00:38:00] You do it every time you do spell check.
[00:38:03] You do it every time you use maps.
[00:38:04] Yes, you get to do AI.
[00:38:05] But this kind of subset that's undefined, if I were to sit down tonight and try to design
[00:38:13] a device around it, I don't know what I'd design.
[00:38:16] I don't know what I'd want.
[00:38:16] Yeah.
[00:38:17] Right.
[00:38:17] What the heck would that even be?
[00:38:18] Yeah.
[00:38:19] Totally agree.
[00:38:20] Totally agree.
[00:38:22] And then what about some tools?
[00:38:26] Have you been using any tools on the regular that you're like, you know what, these are
[00:38:29] the ones that really have impressed me?
[00:38:31] Or maybe you aren't using them, but there are particular tools that you've seen that have
[00:38:36] come out or shown developments that continue to wow you?
[00:38:41] You use them, I think, a lot more than I do because, again, you're actually creative and
[00:38:44] you're a musician and you do neat things.
[00:38:45] For me, the tool, my winner of the year is NotebookLM.
[00:38:52] NotebookLM.
[00:38:53] And Steven Johnson, as an author, helped design it around the needs of people like us, people
[00:39:03] who create.
[00:39:05] It was already really impressive.
[00:39:07] Then, of course, they came up with the podcast thing and I put my entire manuscript in and
[00:39:12] came back with a podcast, which we played part of here.
[00:39:15] And everybody's played with that.
[00:39:16] I don't think they're using it much anymore, but it was a great proof of concept.
[00:39:22] But then I was working with some people on a proposal for some money for a new program.
[00:39:29] I'll just say that.
[00:39:30] And I had a whole bunch of notes and mission statement and all that.
[00:39:34] Another professor had a whole bunch of notes and mission statement.
[00:39:38] One dean came in and put it all through ChatGPT and came out with something.
[00:39:43] Another dean did the same thing, put out a bunch of stuff.
[00:39:46] And then the first dean wrote a version.
[00:39:48] So I took all of that and I put it into NotebookLM.
[00:39:53] And with one instruction, I wanted it to emphasize one aspect more.
[00:39:58] It was good.
[00:40:00] The first sentence, I couldn't beat.
[00:40:03] I didn't necessarily use it all, but it helped me see the organization of the whole thing.
[00:40:09] It helped me write some stuff that would have been more difficult.
[00:40:12] I ended up writing it, but it was very impressive.
[00:40:16] So as an aide, I think NotebookLM to me is just a clear winner.
[00:40:21] What about you?
[00:40:22] You've used various of these tools and demonstrated all of them.
[00:40:25] So I've used some of them, but before I get there, I will just say NotebookLM I've used
[00:40:30] a little bit.
[00:40:31] And I think I understand a little bit more about what it could be useful for me or how
[00:40:38] it could be useful for me in kind of hearing what you were just saying about it.
[00:40:43] But also when I had Mike Elgan on, when you were out just a handful of weeks ago, he talked
[00:40:49] a lot about using NotebookLM as his kind of fact-check source.
[00:40:54] So he might use another LLM to kind of help create and research and whatever.
[00:41:02] And then he takes the output of that, pulls it into NotebookLM with kind of some of the
[00:41:07] data sources and says, all right, now compare this to the facts that we have over here.
[00:41:15] And he says it's really useful for that, for basically taking data from one LLM and then
[00:41:22] fact checking it on your own to make sure that it got it all right.
[00:41:26] So interesting stuff there.
[00:41:28] I think I could probably find some really great uses in what I'm doing out of NotebookLM.
[00:41:33] Yeah, but you could.
[00:41:33] I need to give it more time and kind of integrate it a little bit in order to do that.
[00:41:39] Um, I've been impressed by RunwayML and I think I talked about this already, but consistently
[00:41:45] like when they put out updates and you know, this is again, this is video generation, but
[00:41:49] they're just some of the, some of the stuff that they're doing.
[00:41:52] It's just, it's just whatever, for whatever reason really captures my imagination.
[00:41:57] It's really remarkable to look at.
[00:41:58] Um, Udio and Suno are two music generation services that I've, I've toyed around with
[00:42:05] and played around with from like a collaborative standpoint as a musician, not in creating
[00:42:11] a song top to bottom, but in using it to help me kind of, uh, come up with new ideas
[00:42:17] for a song I might be working on.
[00:42:19] And I found, at least Udio anyway, Suno maybe less so, but Udio gives you a lot of
[00:42:25] controls, a lot of extra controls, and I found it to be a really useful tool for that.
[00:42:30] Um, Perplexity, of course, you've heard us talk about it a million times on
[00:42:35] this show.
[00:42:36] Um, but it's just the more I've used it and the more I've committed myself to how it works
[00:42:43] and, you know, started to really kind of create these spaces within Perplexity that
[00:42:49] are very specific to certain tasks that I have.
[00:42:52] It just, I'm, I'm getting a workflow now that is really, uh, much easier for me and, um,
[00:42:59] and, and getting more comfortable with what it's good at versus what it makes more sense
[00:43:05] for me to, you know, run through once and then really scrutinize versus, um, you know,
[00:43:11] other uses for it.
[00:43:12] It's just, it's, it's been really making me pretty happy.
[00:43:16] Um, the more I use it.
[00:43:17] And then, I mean, I, you know, there were a few others in here, which are just, you know,
[00:43:22] uh, I think impressive in their own right.
[00:43:24] GPT-4o.
[00:43:25] I've had some really good, um, good interactions with that.
[00:43:29] I haven't used Sora as much, and Sora, again, is another video generation tool, um,
[00:43:36] but nonetheless, it, it produces some really cool output.
[00:43:38] And then one, one thing that I didn't put on here, which is, um, let's see here.
[00:43:44] It's called Ideogram, which is an image generation platform that, uh, I don't know, it's just
[00:43:51] really easy to use and they've really done some really great iteration of it over time that
[00:43:56] I find for like creating thumbnails for some of the stuff that I do.
[00:44:00] Sometimes what I need is just a texture background for a thumbnail.
[00:44:05] And so I might go into Ideogram and say, you know what I mean?
[00:44:09] Like, I don't, I don't need you to create an image of a bear doing, you know, climbing
[00:44:13] a tree or blah, blah, blah.
[00:44:15] I don't need that.
[00:44:16] But I do have these, you know, I've, I've, I've done a cutout of you and me and we've got
[00:44:21] the text and I don't want it to be a black background or a plain color.
[00:44:25] I want to have some texture back there.
[00:44:26] So I might ask it to do like concentric, you know, laser shot lines going into the center
[00:44:33] of the image and it gives me four different examples of that.
[00:44:36] And I'll iterate a couple of times and pull that out.
[00:44:38] And that ends up being the background, you know?
[00:44:41] So things like that.
[00:44:42] The one other note of the tools, uh, is that, um, I would say about nine months ago, I think
[00:44:47] I was playing with, we were all playing with all of the visual image tools and playing
[00:44:52] them off against each other.
[00:44:53] I'd give them the same cue and see what they produced.
[00:44:55] And I've used that.
[00:44:56] I've used it to illustrate blog posts and that kind of stuff.
[00:44:58] I still like it.
[00:44:59] I just haven't been playing as much with it.
[00:45:02] Uh, but I think we'll see a lot of development there too.
[00:45:05] For sure.
[00:45:06] For sure.
[00:45:08] So yeah.
[00:45:09] All right.
[00:45:10] I think that's it.
[00:45:11] We've talked about anything and everything from the year.
[00:45:14] It's been a hell of a year.
[00:45:17] Jeez.
[00:45:18] It's, it's been a hell of a morning.
[00:45:19] And this has been like two hours of solid podcasting, which actually I should say, I say
[00:45:24] that as if that's a weird thing, but I was just thinking that you do this every Wednesday.
[00:45:28] It's more exhausting.
[00:45:30] You know, on, on TWiG, we can just blather about anything that comes along here.
[00:45:34] We had a lot to impart.
[00:45:35] We had a lot to get across.
[00:45:37] And that was just a stone skipping across the water of everything we've done in this podcast through
[00:45:41] the whole year.
[00:45:42] And so Jason, my friend, once again, I am really grateful that, that you included me in this,
[00:45:47] that you're doing this.
[00:45:48] It's a great tool for me to keep up on things and learn.
[00:45:52] And I'm really grateful to the audience.
[00:45:54] Um, uh, we didn't announce that we were going to record this episode.
[00:45:58] So dear Daniel Kraft has been here the entire time.
[00:46:01] He's really tired now too.
[00:46:03] Um, but it just shows how much we've gone through in this year.
[00:46:06] And, uh, I think we're going to have the same pace if not faster next year.
[00:46:10] So thank you, my friend.
[00:46:11] Yep.
[00:46:11] Yeah.
[00:46:12] Well, thank you.
[00:46:13] I mean, I, I really, I just so appreciate being able to do this podcast with you.
[00:46:17] I learned so much about how you think about this stuff and, uh, yeah, it's just, it's,
[00:46:24] it's an honor to be able to get to sit down with you for an hour and talk about artificial
[00:46:28] intelligence and grow my own knowledge set.
[00:46:30] And hopefully, you know, for the people who are watching and listening, I think that my
[00:46:35] thought with this show and our thought, I think with this show is we're learning about
[00:46:39] it, you know, maybe you are too.
[00:46:42] And if so, let's all learn about it together and ask the, you know, sometimes the stupid
[00:46:46] questions or whatever, because we truly are building up from, from, you know, a certain
[00:46:51] level of knowledge and we want to grow that, uh, to be more.
[00:46:55] And so, um, I certainly feel the effects of that after a full year of doing this podcast,
[00:47:00] I feel now a whole lot more knowledgeable about the industry than I was when I started.
[00:47:06] And that has me excited for the next year.
[00:47:08] So, amen.
[00:47:09] Amen.
[00:47:10] Uh, jeffjarvis.com is where folks can go to pick up his new book, The Web We Weave,
[00:47:16] but you can also get The Gutenberg Parenthesis, now in paperback, and Magazine.
[00:47:21] It's all there.
[00:47:22] Jeffjarvis.com.
[00:47:23] Easy peasy.
[00:47:26] And thank you, Jeff.
[00:47:28] Yeah, of course.
[00:47:29] Uh, aiinside.show is where you can go to subscribe, uh, to find all of the episodes that we do.
[00:47:36] Of course, some of these episodes that you're seeing, if you're watching the video version
[00:47:38] are a little outdated, but rest assured, if you go there right now, you're going to see
[00:47:42] up to the day, you know, up to the week episodes.
[00:47:45] You can subscribe, you can watch them, you can hit our socials.
[00:47:49] Everything is there.
[00:47:50] You can also link out to our Patreon at patreon.com slash AI inside show, where you can find a
[00:47:56] membership level that works for you.
[00:47:58] This is basically a way for you to, uh, contribute, uh, monetarily to the show on a weekly or monthly
[00:48:04] basis so that we can continue doing it.
[00:48:06] We really appreciate the support of all of you.
[00:48:09] And really this being the end of the year, thank you patrons for everything that you have
[00:48:14] done.
[00:48:14] You've really enabled us and given us the power to do what we are doing.
[00:48:19] And we really hope to kind of grow this.
[00:48:22] And I've got some really interesting plans for 2025 to make this even better as we go
[00:48:29] into the new year.
[00:48:30] So patreon.com slash AI inside show, sign up, uh, at whatever tier makes sense for you.
[00:48:36] Um, if you want, you can be an executive producer and you know, it's a little pricey,
[00:48:42] but you get a sticker, you get a t-shirt and you get to join the amazing company of Dr.
[00:48:47] Dew, Jeffrey Maricini, WPVM 103.7 in Asheville, North Carolina, Paul Lang and Ryan Newell.
[00:48:54] Essentially you get your name read out each and every episode.
[00:48:58] With, with great gratitude.
[00:49:00] That's right.
[00:49:01] With deep, deep gratitude.
[00:49:02] So thank you everybody for your ongoing support.
[00:49:05] Thank you for a wonderful year of AI and, uh, we'll see you in 2025 on another episode
[00:49:11] of AI inside.
[00:49:13] Take care, everybody.
[00:49:14] Bye.
[00:49:17] Bye.