Close Enough For Jazz... And AI
February 19, 2025 · 01:09:55

Close Enough For Jazz... And AI

Jason Howell and Jeff Jarvis discuss HP acquiring Humane, xAI releasing Grok 3, Mira Murati's new AI startup, the New York Times integrating AI in the newsroom, and more!

Support the show on Patreon! http://patreon.com/aiinsideshow

Subscribe to the new YouTube channel! http://www.youtube.com/@aiinsideshow

Note: Time codes subject to change depending on dynamic ad insertion by the distributor.

NEWS

0:02:36 - HP to Acquire Parts of Humane, Ai Pin Startup From Ex-Apple Managers, for $116 Million

All of Humane's AI pins will stop working in 10 days

0:09:56 - Elon Musk’s xAI releases its latest flagship model, Grok 3

Benedict Evans on Grok

Elon Musk’s terrifying vision for AI

0:19:30 - Mira Murati debuts Thinking Machines Lab, her AI startup

0:22:18 - New York Times goes all-in on internal AI tools

0:29:40 - Google’s AI Efforts Marred by Turf Disputes

0:34:19 - Introducing Perplexity Deep Research

0:45:23 - On Probabilism and Determinism in AI

AI Agents are Everywhere and Nowhere

0:57:14 - Jeff rant: Dumbest AI memo you will ever read

Learn more about your ad choices. Visit megaphone.fm/adchoices

This is AI Inside episode 56 recorded Wednesday, 02/19/2025. Close enough for jazz and AI. This episode of AI Inside is made possible by our wonderful patrons at patreon.com/aiinsideshow. If you like what you hear, head on over and support us directly, and thank you for making independent podcasting possible. What's going on, everybody?

Welcome to another episode of AI Inside, the podcast where we take a look at the AI that is layered throughout so much of the world of technology at a rapid pace. I feel like every single week I'm looking at the development of artificial intelligence and its integration into tech, its integration into the discussion around technology, and it's not letting up. So that's why I'm really happy we do this show every single week. I'm Jason Howell, joined as always by my friend and cohost, Jeff Jarvis. How are you doing, Jeff?

Hey, boss. How are you? How's it going with the skiing, snowboarding? What's your risk of choice? All of the above.

I am in the mountains and, snowboarding and skiing. All all of the above. It just depends on who in the house is doing what. I am, keeping locked into the snowboard. But, yeah, we've got kind of an even mix of of all things.

So it's been a lot of fun. It's Winter Park, Colorado, which is apparently the coldest resort in Colorado as far as skiing and snowboarding is concerned. So, you know, we're staying there. What's your Fahrenheit right now? What is it outside right now?

Let me check my watch. It is 19 degrees. That's not too bad. I mean, with all the snow, it kinda insulates a little bit, you know, as they would say in Arizona, it's a dry cold. Yes.

That's right. It is a very dry cold. You have no idea. We've got humidifiers to try and combat it, but it's kinda crazy. Thank you everybody for, you know, supporting us each and every week.

We do have a wonderful support, crew on Patreon, patreon.com/aiinsideshow. Our newest patron, Tim Epperson. Thank you, Tim. Thank you, Tim. Here as of a few days ago.

So we appreciate your continued support. Also, if you're watching this live, we have a lot of you watching via x, actually, as we're streaming live on on Jeff's x account. You should subscribe to the show just to be sure that you don't miss it, when you can't catch us live. That's aiinside.show. But why don't we jump right to it?

Because I don't know how long my Internet is gonna keep stable. So we might as well just pretend like things are are going according to plan and jump right in, starting, you know, the humane story just gets, more and more interesting. And, honestly, you had to see something like this coming. Right? Like, it was never it was never looking very good for humane ever since they released their hardware, last year.

And HP Incorporated is going to acquire Humane's assets, $116 million for that. By the way, a year ago, Humane wanted $750 million to $1 billion for a buyout. That was last June. Oh. Not even a year ago.

So they're getting a fraction of that, a tenth, if you go by the $1 billion. So, oof. There was a great old site back in the days of the dot-com bust called, I'll clean it up, F'd Company — Mhmm — that listed all the companies that were going bad. There should be a separate subset site of all of the hubristic, a, wishes for huge buyouts, and b, big buyouts they wish they had taken.

Oh, yeah. That yeah. Which is the other side of that. Yeah. Yeah.

That's on the other side. Like, oh, man. Why did we just do what we could then? Yeah. Well, I mean, you know, at the same time, I guess at least you got something.

Could have been worse, I suppose. HP gets Humane's software platform, their IP, their patents — how many patents do they have? Somewhere around 300, I think. Most Humane employees will be brought into this new team at HP called HP IQ. So they were at Humane working on a cutting-edge device that was supposed to be the future of AI in hardware. Now they're going to integrate AI into connected conference rooms, also printers and HP's computers.

So I don't know how that feels, but, hey, you know, at least you got a job, I suppose. So there's that. Yeah. I saw somebody on one of the socials, said, you know, don't don't don't dance on their grave. Every company that starts, you know, it takes a lot of courage and fine.

I agree with that. But then somebody answered back and said, well, they kinda danced on the grave with the phone. That's true. They said they're they're we're gonna eliminate the phone. There's gonna be no need for a phone.

So that level of hubris comes back around to bite you in the butt. Well, and we've talked a lot in the past about — or at least I believe this. I can't remember what your stance is on this in particular, but I think we both agree that there will be a time where there is a hardware device of some sort that really shows the benefit of having something that is designed around artificial intelligence. That's what I think, at some point. Like, I have to imagine at some point on this wide, wide, you know, planet Earth that someone's going to come up with an idea that actually resonates, actually works, and makes sense — and isn't completely duplicative of what we already have on our smartphones.

But I could be totally wrong. Like, maybe that's, like, not a pursuit worth exploring because it doesn't need to. Yeah. I keep on thinking it's probably not a function of the AI, except for, like, glasses. Fine.

That's we're already there. Yeah. Right. You know, that is already doing that. And Mhmm.

Google, god bless them, is gonna try again. And and and, you know, on on on the old, This Week in Google, we we talked a long time ago about about carrying around your blob, you know, your computing connected device that could connect with any screen and do all kinds of things. That's right. And I think that's kind of what your phone is now in a sense. Totally.

Because it connects with your car and you're using your phone as the brains of your car to an extent. Right? Mhmm. But, yeah, I don't know if it if it really demands a device. Yeah.

Maybe it doesn't. Trying to trying to figure what it does. It was interesting to think about, but but so far, all the devices have failed. And and I don't think AI is the thing. It's AI inside AI inside.

Inside. Right? Like, it's just kinda there told that. Yeah. Everywhere.

Yeah. I think so. And so maybe yeah. That's right. Maybe the destination isn't the AI.

It's kinda like the conversation that we've been having about about, you know, all these companies go stumbling over themselves to make sure that you know that this feature is AI, and that feature is the AI. And that hopefully, eventually, there will be a point where the AI kind of fades off into the background and it just becomes this thing does this thing. And the selling point isn't the AI, it's the function. It's the out you know, it's the it's the destiny of the destination is the output that you get out of it. And not just the fact that it's got AI, that buzzword integrated to it.

And it's kind of the same thing here. Like may, you know, maybe that doesn't matter as much as just like what it does for you. If you happen to have a humane AI pin, that device is gonna stop working, like in a week. That's the, that's the really bad part. They're gonna offer some, refunds.

I don't know — I wonder what the sum is. Yeah. No. It's like, if you bought within the past ninety days, you have to submit by February 27. Meanwhile, the Pin is, you know, basically worthless on February 28.

That's when the functionality goes away. So you've got, I don't know, seven or eight days to tell them, hey, I want a refund because I bought it within the past ninety days. I think there's another avenue. There was an issue with some of the hardware, you know, having the potential of, like, catching on fire or something like that.

I can't remember exactly what that was. And if you had filed for that, then you get a portion of your money back. But nowhere in any of this, like like, here's what I wonder. How many of these damn things were sold? And Yeah.

If you're HP, does it behoove you to be like, you know what? Let's just give them their damn money back, because this was never worth anything to begin with. And, you know, that buys you a little bit of positive press. I don't know. But that's not what's happening.

You're kinda screwed. You know? Yeah. And it was a gamble. Yeah.

For sure. So, you know, I I bought some Kickstarter things that didn't really work back there. So fine. Yeah. It happens, I suppose.

It's just this one happened very spectacularly. Yeah. It was a real flameout. Because it was because of their hubris, because they went overboard with what they promised.

Mhmm. And I guess I guess you kinda have to if you're gonna charge that much for something, but it was it was a little too much. Yeah. Yeah. Also, real quick before we move on, this is also a really good good example of how hardware that's so completely reliant on, like, cloud compute the way this device was.

Like, pretty much everything that this device did relied on the cloud for the most part. And so the second they turn this off, if you have that hardware — I think the only thing they said, and I don't even know why you'd even say this about a device that's completely worthless or useless, is that it'll still be able to monitor the battery health of the device. It will be able to do that. You just won't be able to do any of the useful things with the hardware that you want to do, because it was all relying on the cloud, and the cloud goes away. And, yeah.

So, anyways, pour one out for the humane AI pin. Yeah. I feel like we poured many out at this point for the humane. May it rest in HP. Yes.

May it rest in HP. This Monday, xAI released Grok 3 with a livestream starring the world's most powerful unelected billionaire, Elon Musk. This includes a Think mode for reasoning applications, a Big Brain mode for when you wanna solve more difficult problems. It has Deep Search, because everyone has a research tool now that uses the Internet, and, of course, it also taps into X as a data source. They announced that soon it'll have a voice mode, probably not very long from now.

Weeks, I think, they were saying. Eventually, an enterprise API. And finally, they also announced, or Musk also announced, that the plan going forward is when there's a new major version of Grok, the previous version will be open sourced. And so that's the plan, anyways. I don't know if they'll stick to that, but that's what's happening with Grok 2.

It's open sourced. And you get a price hike. Premium plus goes from $22 a month to $40 a month, so almost double. That's that's a big increase. So yeah.

So I have a lot, by the way. I just went in for my amusement, and I just went to Grok. I'm an involuntary blue check mark. Yes.

I was given the check mark without paying. I do not pay for it. So I hide it. But thus I get access to Grok. So I have Grok as a perk, I suppose.

So I asked just to be a smart ass. I asked, how do we get to peace in Ukraine considering what's being said? Oh, my goodness. Okay. Now interestingly, step one is ceasefire.

De-escalation. What's interesting too is the inclusion of key players: talks must include both Ukraine and Russia. Oh, that's interesting. Disagreeing with the boss there, are we? Yeah.

They need to connect with each other and get on the same page. I've seen some other examples where it has come out with the Musk-Trump party line out of Grok. So, color me dubious about everything that he does. Yeah. And color Benedict Evans dubious too.

So I liked his post on Grok: congratulations to Grok on spending a lot of money to be yet another company on the leaderboards for a commodity technology with no defensibility that we know of. And I think that's true. I think that we see a whole bunch of companies — we'll have another new one in a minute — trying to basically replicate each other's technology and then incrementally say, I'm faster. I'm better.

I'm this. I'm that. Oh, and they get that they get that moment at the top. Right? They get that moment, and you had it, and then the next one comes along totally.

I mean, I think you're absolutely spot on. I think Benedict is, absolutely spot on with that. It really seems like that top leaderboard shifts with every single new release by every major AI company. They they always come out with their new one and, hey, we're at the top. Now we're the big deal until the next one two weeks later.

Yeah. And, you know, it does make sense for inventors to leapfrog each other and build on a base, build on the shoulders of those who came before. I'm fine with that. But gee, as somebody said responding to Benedict, there's something slightly immoral about spending nearly a quarter of a trillion dollars on this while doing layoffs every quarter — and, you know, is the return on investment really there just to leapfrog in a commodity use? I don't think it is.

I think it's I think it's really so much now about ego. Yeah. I think you're right. There is a lot of ego tied into it. And I mean, case in point, this is the quote, unquote, truth seeking, AI.

I put that in hard quotes, by the way. Musk calls it quote, maximally truth seeking AI, even if that truth is sometimes at odds with what is politically correct. This was interesting because I saw a, a post by Gary. This is where I thought I saw something. I realized it's what you put on the rundown that I saw.

Yes. Yeah. Gary Marcus. And in there, you know, there is a little snapshot, and someone asks xAI for its opinion on The Information, the online news publication called The Information.

And the result, the output, goes on a tirade, you know, saying, quote, legacy media is garbage, and it goes on to say, quote, X, on the other hand, is where you find raw, unfiltered news. X is the only place for real trustworthy news. And Gary Marcus — you know, I'm happy you wrote this, because it was an interesting look. He says everyone, and not just The Information, should be genuinely terrified that the richest man in the world has built a large language model that spouts propaganda in his image. And whether you believe that statement or not, I think what Gary says there is important to consider, which is that more and more people are using these AI systems kind of as a way to understand the world, you know, in their minds better, I suppose.

And how do we know how these systems have been created to venture in one direction or the other? And I don't know. That's that's something to consider with all of these systems, not just XAI. You know, this is why I love doing the show because it it inspires so much further thought. Because I hadn't thought of this before this moment, in in what both you and, and Gary bring up.

Our view of of of messaging at scale, right, was media. I own a newspaper. You don't. Yeah. Yeah.

Yeah. I got to say what was printed. Right? Then we get to the Internet and social media, where I can say one thing and it hits scale when it hits virality, when enough people choose to spread it on.

Right? And Musk buys Twitter, so he spreads his own stuff whether or not anyone chooses to see it. Right? And that's that's the next version of scale. What I hadn't thought of here is what gee, I have my own LLM, and I'm gonna tell it what to say in every possible way even in topics that I haven't necessarily said anything about.

It's going to repeat my propaganda, my worldview — Mhmm — my message on every possible interaction, and then it's gonna fill the socials and fill the Internet with that. That's a whole other level of scale. Now, when people worry about AI and disinformation — yes, it can make disinformation. It's easy to lie now.

And we've been able to lie since we were able to talk. So I'm not really concerned about that notion. But it is interesting that you can create your own lie machine here. The key will be: don't believe it. It's a crock.

Mhmm. And and and and by doing this, he's cutting out any, any possible, credibility that, his system could have. That's such a choice. Well and do the people that are using it care? Yeah.

No. I don't think so. You know what I mean? Because a lot of this is kind of choosing sides, to a certain degree. And there are a huge number of people that are gonna use Grok specifically for the reason that Musk is, you know, putting it out there that it's truth seeking — you know, whether or not it's actually speaking truth.

This actually kind of ties into a discussion that we're going to have a little bit later, around how these systems are now versus where they're going. And yeah, it's just a real interesting time for how they're all developing. And while we're in this moment — truth matters a whole lot less than it used to, even though it matters a lot. It's my truth. Yeah.

But the other thing, too, that Gary does that's really good is he keeps testing these things. And Gary has a real attitude, and I like that. For sure. So he makes fun of it, and he asks Grok to draw a picture of five different basic geometric shapes and label each one. So it labels the triangle a "tickle."

A circle is a square. The square is a rectangle. A circle is a rectangle. And a — one, two, three, four, five, six — a hexagon is a "pentacle." It screws up in every possible way, and makes up entire new words. It has no sense.

Then he tries to have it draw a picture of an electric guitar and label the parts. It screws that up. Finally, he has it draw a calendar. Just make a picture of a calendar. It does very nice pictures of calendars, and he tells it to circle today's date, February 16.

It fails miserably at that. So weird. That's so funny. It's like, you know, what do they call it when you were a kid and you try to pin the tail on the donkey while you're blindfolded? It didn't get anywhere near the donkey.

Yeah. So on the one hand, Gary's just ridiculing Grok because it's not, you know — yeah. Yeah. AGI is around the corner.

Yeah. But on the other hand, I think there is something that is insidious here in the wrong hands and to the wrong viewers. For sure. So he's right. Yeah.

Yeah. Interesting. Well, we are going to, take a quick break. And then when we come back, we're gonna talk about a new entrant into the world of AI. A new company has been formed.

And we don't know a whole lot about it, but we do know a little. We'll talk about that here in a second. All right. Mira Murati, who resigned as OpenAI's CTO six months ago, not that long ago, has a new startup called Thinking Machines Lab. First thing I thought, Jeff, is I wonder if they considered Intelligent Machines.

I wonder too. Possibly on the list somewhere. Focused on interactions between humans and machines, quote, we are excited to build multimodal systems that work with people collaboratively. I mean, I feel like we've kinda heard that a million times, but around 30 employees so far, two thirds of which are former OpenAI colleagues, but also some people from Meta, some from Mistral. And Murati is really focusing on AI alignment, which is essentially human values for safer, more reliable systems, that sort of stuff.

And, yeah. What do you we I mean, we really don't know much about this other than what you're probably thinking. Right. Right. And you and you've got others out there, at the same time, by the way.

Ilya Sutskever's Safe Superintelligence is similarly trying to say, we're the safe ones. He's raising a billion dollars on, I think, a $70 billion valuation. It's just wonderfully absurd. Yeah. We don't know much, and we'll see what this turns into.

She's obviously very smart. She was, embroiled in the in the Peyton Place soap opera that was OpenAI. And it occurs to me, Jason, when I saw this story, I thought a few years from now, I think we're gonna talk about the OpenAI mafia the way we talk about the PayPal mafia. Mhmm. That people came through the company on their way to something else.

Mhmm. And some of whom I don't think are very likable. Some of whom are likable, are okay. I like Reid Hoffman, but I don't like some of the other people who came through PayPal. And I think the same thing is gonna happen here, where people like her are going out.

They they they built a reputation. They gathered a team and they took it out and went out on their own. And I think that's not a bad thing in Silicon Valley. We'll see what they what they, if the if the if the gene pool here is good. Yeah.

Is it for good or is it for evil? We'll see. Find out at some point. Here, at least around this company, she has plans to share the code, share the datasets, the model specs, all in an effort to broaden research into AI alignment. So it sounds like they're contributing a lot of the work that they're doing into the field of research, I suppose.

So that's — Right. Positive sounding. A big move towards an open source, open-ish source idea. You know, which is not a bad thing, I don't think. I mean, we've been pretty positive on the open approach for AI so far. So it'll be interesting to see what she comes up with.

She and her team. Yep. New York times. Yes, indeed. Indeed.

The New York Times is transitioning towards AI, bringing more AI tools into the newsroom and integrating them into news work tasks like SEO headlines, editing, summarization, product development. They are prohibiting certain things, though, like drafting articles, generating images, and other editorial-focused tasks. Don't bring the AI into that. But, you know, things that maybe people don't want to do, like the SEO crafting and stuff like that, I guess, get a thumbs up. Yeah.

What do you do about this? A friend of mine, Zach Seward, is a brilliant journalist. He was the cofounder — well, lastly, he was the owner of Quartz before it was sold to G/O Media. Before that, he was a Wall Street Journal journalist. Really smart, responsible, good journalist. And so he's in charge of AI development in the newsroom at the Times.

So I have faith that they're going to use it well. I think they'll use it smartly. And, you know, all the time on the show I quote admiringly what Schibsted is doing in the Nordics. And they're using it similarly there, as an aid to the journalist. Yeah.

That's exactly what I thought of immediately when I read this. I was like, okay, so we're seeing more of a, kind of a, merging into that mentality at least a little bit closer. Yeah. And I think I saw it. There was a story that I put in really late, that we don't need to talk about.

But Google has built an AI co-scientist to speed up research. And what it made me think of is, the way that's gonna work is: I have this data. I have these thoughts. Can it suggest any other angles for investigation? Mhmm.

And I think that's a decent way to think about how a journalist could use AI is, here's a whole bunch of stuff. I'm still responsible for reporting it. I'm responsible for what I say in the end, but maybe there's something I missed. Maybe there's a correlation. Maybe there's an angle.

You know, as long as it's not misused, I think that's fine. I'm a little surprised they're not using it for illustration. I'm gonna guess that's more of a union issue than anything else. Oh, yeah. Good point.

Because I think illustration is one of the best uses of it now that online, that we talked about this in the show in the past. I think in the old days, a fraction of newspaper stories were illustrated, unlike magazines. And now, everything has to be illustrated online. And it's a cost. And I know that illustrators don't like the idea of being replaced, but in the sense they're not being replaced, there's an extra resource needed now to make illustrations.

So I'm kind of surprised they don't use it to make illustrations and label it properly as such. I don't think there's anything wrong with it. It's just it's it's no different from somebody making up a picture of, a a geek on skis in Colorado, than a machine making up that picture. Right? And, but they're choosing not to, and I think that's probably the reason why.

Choosing not to for now. I mean, if we'd gone back a year ago and said, hey, New York Times, you know, you're kinda opening up the newsroom to all the AI tools and everything, it would have seemed like, oh, hell no. Like, that's not happening. You know, places like the New York Times are rapidly getting more — maybe friendly is the wrong word, but finding a way with the technology as opposed to being immediately apprehensive. You know?

And so maybe a year from now they do open it up further, because — Yeah — you know, the mentality and temperature and all that around it lessens and becomes a little bit more amenable to that. So — I'm writing a book right now about the history of the Linotype, which was the machine that replaced, yes, replaced setting type one letter at a time. And somewhere here I have a letter to show you.

Let's see here. Yeah. So those of you on video, this is a letter, a piece of type. Just one little letter. For five hundred years, type everywhere was set in pieces like this.

So then the Linotype came out, and instead type was set a line at a time. This is a line of type. Alright. Anyway, the International Typographical Union was responsible for setting type — a letter at a time, for five hundred years, and in the US for a few hundred years, in the end. But you could have thought that they would have acted like the Luddites or like Captain Swing and said, burn it down.

No. Kill the machine. We don't want any machines. Instead, they quite wisely said, uh-uh. It's inevitable.

We gotta be in charge of it. We are the ones who are gonna run these machines. We are the ones who know it. And there's a, I think, a great lesson for the adaptation to technology here. They lost staff.

They lost, by one account, 36,000 typesetters. But within five years, they had all that back and more, because the industry grew tremendously because of this efficiency and because of this automation. And then the end of the story is that they didn't remember their own lesson. In the nineteen sixties, they opposed the technology, killed five of eight newspapers in New York, and eventually killed the union. So when it comes to things like AI and writers and others, I think that the best path is not to resist it, but to insist on being in charge of it, to be the agent of it.

And so I think in a way that's what the Times is doing here, and what Schibsted has been doing: it's saying to the newsroom, this is yours. It's your tool. You use it. You figure it out.

You find the good benefit of it. And you're the ones who know best. And as long as that's the attitude, I don't think there's any there's not much to fear. There's always things to fear, but I don't think there's much. So Sure.

We'll see how the we'll see how the times comes up. But you I I guarantee you, something stupid will come out. Mhmm. And it will get blamed on the machine. For sure.

Guaranteed. Going to happen. Yeah. I also I also am writing about sorry. I'm going off of my my No.

I don't. Tangent. But I was also writing about the end of this period, in the sixties and seventies and eighties, when Murdoch had what was called the Wapping episode, where he replaced everybody in production overnight. And I saw a column from a London newspaper that said, who are the journalists gonna blame now? Whenever there was a typo or something wrong in their stories in print, they could say, oh, well, it's the typesetters.

That was the typesetters. Yeah. You know, not me. You know? You can always find somebody else to blame.

And then when they started typing their own stories in and it went straight through to the press, the columnist said, who do you blame now? Mhmm. So it'll be interesting, too, to see whether anybody tries to blame AI for something stupid that the Times does. Yeah. Yeah.

That is interesting. The New York Times is not alone on this either in the US. Financial Times, Vox Media, Axel Springer, of course, we've talked about. Yep. Associated Press, all doing similar things.

So you said that the inevitability, that was something that really came to mind as I was reading through this too. It was kind of like it's inevitable. So, okay. At what point do you just kind of open up and say, how do we work with it instead of working against it? Right.

Yeah. Go with the flow. Jeff, you included an article that looks at the Google mothership and its internal challenges regarding AI development. And I felt like, as I was reading through it, time and time again, this just sounds so familiar. You and I have been following Google long enough to know, I think, that this just seems to be part of the company's DNA.

Tell me a little bit about that. So what happened here is — I think everyone is very impressed with NotebookLM. It's the one application I hear constantly; people say that they like it. They're impressed with it. They use it. But it almost didn't happen because of internal politics at Google.

And and that's what happens at big companies. So the people at Workspace wanted to say, no. No. No. No.

We're gonna do that. No. We we were working on that. No. We we got that coming.

No. You can't do that. And it went around and around and around and almost didn't happen. But thank goodness, it finally did. In the end, leaders of both teams stuck with NotebookLM, and it launched as a standalone website, and has since been added as a service in Workspace.

Now by the way, since since the release, we've talked about this in the show before, three key members of the team have left to build a competitor. And somebody else recently left to build something related as well, I think in the audio world. So it's it's spawning all kinds of action. And it would have been a real shame if this thing got screwed up and people would have said, well, no. It has to work within workspace.

So we can't do this. We can't do that. And we don't do this. We don't do that. That's the way technologies work in companies.

Mhmm. You know what? I I'm I'm going memory lane today. Sorry. I'll do it.

Back in the day, when I was, helping to install the first computers in newsrooms, I always got amazed that people had huge territorial turf fights over boring computers. And why do people care so much? There's something about it. People just wanna be in charge of it, see prior discussion, and they don't wanna think that that's not their choice. And they screw it up along the way.

And big companies are like that. And I remember years ago hearing Eric Schmidt in person say that the biggest problem for Google would be its own success and size. And I think that's what happens. It gets so big. It gets so hidebound.

Totally. But in this case, at least, it worked. Yeah. The the good product came out. They benefited.

We benefited. But it's it's nice to report on these kinds of of near misses because I hope it's an object lesson to other companies and to Google itself not to screw up. Yeah. Yeah. It's it's really hard when a company gets that big.

You know, no matter how good the company was at one time, at a certain point, it's just that massive scale and all of these teams. Teams. I mean, I mean, I feel like this is just a story that I've been hearing and talking about on, you know, on my on the Android show for so many years, you know, just related to that one specific facet of Google. And then you take all of their efforts and so many conflicting approaches, so many situations where this team is working on something. This team over here is working on something.

And to the outside looking in, you're like, oh, well, that's basically the same thing. Like, why don't they just pool their efforts and work together? But internally, it's almost like they're working against each other to prove that theirs is the right approach, and then it doesn't get integrated. And it's just messy. This is Google.

It's classic innovators dilemma. Yeah. Right? Yeah. You know, I remember one time I'm kinda sorry.

More memory lane with local Jeff. I remember a newspaper I worked with in Syracuse. The publisher regretted that somebody had started a new publication about health in Syracuse — there was nothing about health in Syracuse before — and ergo, they got all the classified ads for nurses' and doctors' jobs, because they were cheaper and more focused than the big newspaper.

And the publisher said, I'm not gonna let that happen again. Better we compete with ourselves than somebody else competes with us. And that's the enlightened view I think for a company, but it's really hard because somebody has their turf. Yeah. This is mine.

And if you come in, you're gonna cannibalize me — the other big corporate word. You're gonna take away something, right, or you're gonna beat me to it, or in politics, I lose. And they lose sight of the bigger picture. So in this case it worked, but oftentimes, God knows what we haven't seen. Yeah. Yeah.

Because things have been stopped. Yeah. But this is a good, good article in The Information. Indeed. Don't listen to what xAI tells you about The Information.

It's a good place to get news. Exactly. Let's give credit — Erin Woo has the byline on the article. Give credit where it's due. There we go.

Perplexity rolled out its own research product. We were talking a little bit earlier about research and AI — like, you know, the actual product of "use this tool." And it's really tuned in to and zeroed in on being good at reaching out to a number of different places, asking questions, being critical, and then presenting, like, a white paper or a printout of some sort, so that you can, you know, have a large, expansive understanding of that topic or that question. And Perplexity has their own, called Deep Research. It's almost like they're all named Deep Research.

Yeah. Yeah. They are. Yeah. Google has a deep research project or product.

OpenAI has one titled the same. So, anyway Right. They've all got their own. This one's free to use, though. So that is a little bit of a differentiator.

You get up to five queries per day. You do have to be logged into your Perplexity account, but you don't have to be a paid user to use it. So that's nice. So, before the show, I put something in here. Well, I did two things. One, at the very end of the show, we're gonna talk about a really interesting piece somebody wrote.

And at the end of that article, somebody shared their NotebookLM query that helped them do it. So I tried to put that into this, but it was too much. So I put in a different one, about what we talked about a little while ago. And I asked: how did Leibniz's invention of binary math, alongside Morse code and the Baudot code, lead to our digital computer revolution, the internet, and AI? Wow.

Right? Did you just, like — was this a burning question you just happened to have around? Actually, yes. I was just writing that paragraph. I won't do it again. I won't go down memory lane.

I won't go down memory lane. But it it has a relation to the typesetter and and and teleg, telegraphy. Mhmm. So, but I I I I I discovered I I saw I just I accidentally came recently that it was Leibniz in 1685, I think, who invented the binary math, the digital zero one. That is the basis of everything digital today.

Right? And that's mind-blowing. Right? But I think that Morse code added to that, because by encoding letters into dots, dashes, and pauses, that was kind of a next step that enabled what would become ASCII. Mhmm.

It would enable the encoding of language into a digital world. So, anyway, I asked Deep Research, and I haven't had time to read this. I did it right before the show, but it has some good explanations here with links: understanding binary as a universal language, from mechanical calculators to symbolic machines; Morse code democratizing digital communication; Baudot code, which had five bits per character, per letter, the first binary character encoding; convergence paving the way for networks and AI; and then finally the conclusion, a legacy of symbolic abstraction. Oh, I'm gonna steal that. That's nice.

Mhmm. The digital revolution emerged from a cascade of abstractions: Leibniz's reduction of logic to zero and one, Morse's compression of language to dots and dashes, Baudot's mapping of characters to binary sequences. Together, they transformed information into a manipulable resource decoupled from physical media. As AI and quantum computing advance, this legacy persists, proving that the most transformative technologies often begin as simple symbols on a page.

Woah. Well done. That was — I'll give you an A for that. Me, or Perplexity? Yeah.
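Since the conversation leans on that encoding lineage, here is a minimal Python sketch of the three layers being described: binary numbers, Morse's dot-dash alphabet, and fixed-width character codes. The Morse entries shown are real but partial; the rest is an illustrative sketch, not pulled from the Deep Research output.

```python
# Three layers of the "symbolic abstraction" lineage discussed above.

# 1. Leibniz: any number reduced to zeros and ones.
def to_binary(n: int) -> str:
    """Return the base-2 representation of a non-negative integer."""
    return bin(n)[2:]

# 2. Morse: letters compressed into dots, dashes, and pauses (partial table).
MORSE = {"A": ".-", "E": ".", "I": "..", "O": "---", "S": "...", "T": "-"}

# 3. Fixed-width character codes: Baudot used 5 bits per letter; ASCII uses 7/8.
def ascii_bits(ch: str) -> str:
    """Encode a single character as its 8-bit ASCII bit pattern."""
    return format(ord(ch), "08b")

if __name__ == "__main__":
    print(to_binary(1685))                        # 11010010101
    print(" ".join(MORSE[c] for c in "SOS"))      # ... --- ...
    print(" ".join(ascii_bits(c) for c in "AI"))  # 01000001 01001001
```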

So, we were asked by one of our kind readers before the show, what do you think of Deep Research? That's right. And so we promised we would talk about it, and here it is. Now, it still did that kind of BS phase in the beginning where it says, I'm thinking this. I'm wondering that.

And I still don't think that's a reflection of its reasoning. I think that's BS. Some argue with me that, no, it's actually doing what we're seeing there. I think it's a performative presentation of it.

I get it — like, I feel like there's some truth to that. Because, and hear me out, and maybe someone's already shared this with you and you're still, you know, unconvinced by it. But when I see that, I don't see it as, like, it's telling me this to make me feel better. I see it as, okay.

Let me put it this way. I've used certain image generation and music generation systems where you can put in a short prompt, and then you can have it do, like, a magic prompt response, which essentially takes your prompt and adds to it. It's almost like it recreates your prompt based on what it knows, and then that informs the system to do something different or better or nicer or whatever. And so when I see what Deep Research is doing, that's kind of the lens that I see it through. It's like, this is what I wanna know, and it goes, okay.

Well, then we put that in here and we see this. Okay. Now we can take what we see and kind of reshape what you were asking based on the new information we have. And maybe I'm wrong. But when I see when I see that whole process, it's kind of like I'm looking at it through the lens of, like, how can I make my prompts do that so that I can just cut out the middle man?

Right. You know what I mean? I did I could be wrong. But for whatever reason, it's been kind of assuring to me to see that. So okay.

I think it works in images because because, you know, you don't get the BS of the the machine saying, let me see. Right. Well, there is that. No. You're absolutely right.

Right. And I think that — so maybe the "hmms" are just, I don't know, maybe that's all performative. But I think it ruins the credibility of believing that it is a sequence.

But you're absolutely right, Jason, that the way this supposedly works now — what researchers have found again and again and again — is that slicing a task into steps — Right — is how you improve the results. Whether you do that manually, by asking question upon question upon question, or whether the machine does it itself by cutting it up into multiple questions, the results are better. So I think that's what the machine is in fact, in some way, doing. I would just rather see a more literal expression of that than adding the "let me see."
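To make that "slice the task into steps" idea concrete, here is a rough Python sketch. The ask_model function is a hypothetical stand-in for whatever chat-completion call a real tool would use; the point is only the decompose-then-synthesize loop, which is roughly what these deep research features appear to do.

```python
from typing import Callable

def answer_by_decomposition(question: str, ask_model: Callable[[str], str]) -> str:
    """Answer a broad question by asking smaller questions first, then synthesizing."""
    # Step 1: have the model break the question into narrower sub-questions.
    plan = ask_model(
        "List 3-5 narrower sub-questions that, answered in order, "
        f"would let you answer: {question}"
    )
    sub_questions = [line.strip("- ").strip() for line in plan.splitlines() if line.strip()]

    # Step 2: answer each sub-question on its own (each one is a smaller, easier task).
    findings = [f"{q}\n{ask_model(q)}" for q in sub_questions]

    # Step 3: synthesize the pieces into a final answer.
    return ask_model(
        "Using only the findings below, write a well-organized answer to: "
        f"{question}\n\n" + "\n\n".join(findings)
    )
```

The same structure works whether a person does it by hand, question upon question, or the machine does it to itself.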

That's interesting. I would agree with that. Yeah. It goes a little bit into uncanny valley territory with some of that stuff. Because it's like, yeah.

Okay. Thank you for trying to speak to me like you're a human. We we know you are not. We know you're not. That's unnecessary.

But Right. Yeah. I see what you're saying. Yeah. It's interesting, though.

And it does take a while. Although — I haven't used ChatGPT's deep research version, because I don't pay $200 a month for ChatGPT, and I won't. But apparently that one can take a very long time. Like, you can put in a query. Really?

You know, you could be waiting twenty, twenty-five minutes for the final result. Whereas this, I don't know, it seems to be, like, three, four, five minutes, somewhere around there. So it's a lot more quick, and it also gave me footnotes.

Minutes. Yeah. It it it was good. I just printed it out. In fact, so I use that line.

Now then my next question is that everyone's seen this. So if you see me, use this line in my book, you can come back and say, oh, Jarvis. I know where they came from. So I think I think I have to footnote that. I was just gonna say, how do you cite that as an author?

I don't know. Because what what it is is I gave it the idea about this. My question presumed the sequence. Mhmm. Right.

So that's mine. Not that it's all fresh. I mean, others have had the same thing. But, you know, that "legacy of symbolic abstraction," that phrasing is its. So what's the footnote? Do I footnote that to say, well, I got inspired by Perplexity on that?

Or do I just say that's my machine. I use this. I I don't know. Right. Oh, that's that's gonna be an interesting thing to see play out.

You're kind of, you know, working with it along with whoever else is in a similar situation. Yeah. In the next couple of years, we're gonna see where the dust settles as far as, like, what the standards are around that. And what would I want from one of my students? I think I'm gonna email my friend Matthew Kirschenbaum, who I've quoted often, who wrote the Atlantic article on the Textpocalypse.

He was part of the, MLA, Modern Language Association Task Force on the use of AI in the classroom in English class, basically. So I'm curious what the standard should be for acknowledging, collaboration. Yeah. Yeah. Interesting.

Super interesting. Fun. Well, when you figure that out, definitely let's touch on that again on the show, because I'm super curious. One more quick thing here. I think that you raised it to me first because you started using it first. But I think Perplexity is a bit of a dark horse here.

They're, you know, as we're talking about, the commodities of grok and whatever else is around there, all those others, they're quietly, not so quietly, because they're doing, you know, stunts like trying to buy TikTok. Oh, that's true. Yeah. We can't even talk about that. But they're they're, somewhat subtly, implementing the AI for users in better ways.

Mhmm. Right? So their their discover platform with news was really good. This, I think, is the best, substantiation of this. It's less geeky.

It's prettier. It's cleaner. So more accessible. Yeah. They're just doing a good job.

Mhmm. I totally I don't really know who's behind perplexity. I'd love to to know more. Maybe get them on the show. Mhmm.

Because I think this is impressive. They quietly have — again and again and again, I say, oh, Perplexity did that well. So I sort of give them credit. Indeed. Happy to give them credit.

It would be really hard for me to do what I'm doing right now if I didn't have Perplexity to kinda lean into for some of the functions of my independent career. So, yeah, I'm really happy. So, Perplexity, we're gonna send you a video of this and then ask for, you know — is that how it works? We're not sure.

This is, this is earnest, but in fact, is that how it works? I should ask perplexity and see. Oh, yes. That's how yes. I, my prompt should be how should, how can we get perplexity to sponsor us on this podcast?

Perplexity. All right. Speaking of, let's take a quick break and, you know, I have a few words for you to listen to before we are back and we take a look at a few more stories, including what I feel is a pretty fascinating look at the evolution of computing that you put in the rundown. That's coming up here in a second. Alright, Jeff.

This, the first of two, is really cool because it really breaks down the different stages of computing, and they approach it in four different stages, essentially. And I'll just try and summarize them real quick, and then we can talk about it. The early era, which is fully deterministic — essentially, what does that mean? Input, algorithm, output, all done with fixed, predictable data. So, you know, in the days of old in computing, we would put in information. It was a fixed piece of information.

It would be processed a certain way, and we would get the same answer out on the other end. It was fully deterministic. So the example would be a spreadsheet. Right. Yes.

Exactly. You can ask a question. You can ask a what if. What if I change this? But you know what you're it's got data.

You know what it's doing. It's giving you a specific, predictable answer. Mhmm. Mhmm. Then we've got the big data, machine learning era, which is deterministic input, very fixed; probabilistic algorithm, so a little mushy — that's the word that I keep coming up with there; and then deterministic output.

So the input and the output are fixed, and that processing in between is aided by less specificity and more, like I said, kind of mushy processing — like recommendation engines — Exactly — for example. Right. So it's not necessarily a right answer that can be determined, but it has an answer. So it says, this is the ad I'm going to show you, or this is the movie I'm going to recommend.

And it might have a different recommendation for other people depending on the inputs that they're giving to the system and stuff like that. The current era, according to this, this article, or writing is fully probabilistic. And this you know, the the big example here, of course, is generative AI. All stages kind of mushy. Right?

You there's no fixed information. It's more creative. It's, more pliable, I guess. And, you know, there there is no real certainty about what you're putting in, about how it's working with that data and what you're getting out of it. You can ask it a question 10 times and get 10 different answers.

So, yeah, plausible yet incorrect information, is is one of the downsides. Right. And then? And then we have the future, which is probabilistic leading into deterministic output. So accepting that kind of like mushy creative kind of input, accepting the fact that the processing that's happening behind the scenes with the algorithm can go any number of of ways.

But, in the end, there's some sort of gut check, some sort of comparison between the output of the system and truth or fact or whatever that is, to kind of steer it onto the tracks again. And yeah. So we both found this really, really interesting, and credit where it's due: I found this thanks to Benedict Evans, who I quote often. His newsletter just always has great stuff, and I recommend it highly.
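As a rough illustration of the four stages in that framework, here is a toy Python sketch. The names sample_from_model and passes_fact_check are hypothetical placeholders for a generative model call and whatever deterministic validator (a retrieval lookup, a schema check, a test suite) you might bolt on; only the shape of each era is the point.

```python
import random

# Era 1: fully deterministic — same input, same algorithm, same output (a spreadsheet cell).
def spreadsheet_sum(values: list[float]) -> float:
    return sum(values)

# Era 2: deterministic input and output, probabilistic middle — a recommendation
# engine trained on data still hands back one concrete answer.
def recommend(user_history: list[str], catalog: list[str]) -> str:
    scores = {item: random.random() + user_history.count(item) for item in catalog}
    return max(scores, key=scores.get)

# Era 3: fully probabilistic — ask ten times, get ten different answers (generative AI).
def generate(prompt: str, sample_from_model) -> str:
    return sample_from_model(prompt)          # plausible, but not guaranteed correct

# Era 4, the article's "future": probabilistic generation with a deterministic check.
def generate_checked(prompt: str, sample_from_model, passes_fact_check, retries: int = 3) -> str:
    for _ in range(retries):
        draft = sample_from_model(prompt)
        if passes_fact_check(draft):          # the "gut check" against ground truth
            return draft
    raise ValueError("No draft passed the deterministic check")
```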

And I had to do some searching here. This is by — who's the author of this? This is Narain Jashanmal. Okay. First name, N-a-r-a-i-n.

So I'm hoping I'm pronouncing that right. Jashanmal. And he's a product leader — he or she is a product leader, mentor, advisor, over a decade across two stints at Meta, drove growth for ads and so on. So, really smart.

And I think it, again, abstracts. That's the word of the day: abstraction. It abstracts us to say, you know — this is the academic skill. My kind of academic mentor, Jay Rosen at New York University: I'd watch him work and he would stop and he'd say, what's happening here?

Let's examine this. And I think that's what this piece does so well: it makes us see, at a very high level, that if we're now at the stage of probabilistic to probabilistic to probabilistic — vague question, vague answer, vague process — what's also true is that we're at the age of approximation. Mhmm.

Right? The beauty of computers, I always thought when I was younger at this — I was never young with computers because I'm too old — but the beauty of computers was there was always a right answer. There was always a solution. There was always — and that's just not the case anymore.

No. Now, it's close enough for jazz or AI. Mhmm. It gives you a different answer every time. It's approximate.

It's gonna be wrong a lot of the time. It's gonna be whatever. And and we're kind of learning how to adapt to that because the possibilities are great. And that sounds wrong and scary. Like, there should be an answer.

There should be truth. There should be a fact. There should be but life is like this. Mhmm. Mhmm.

Yeah. Right? So, in essence, is technology becoming more of a reflection of that not-always-one-answer, you know, kind of nature of life? And, you know, AI has been largely trained on human interaction, right, which is a reflection of that imperfection. Yeah.

Yeah. But the risk here, I think — and this is the problem of AGI and such — is that if we think the goal is to replicate life and then one-up it, I think that's where we get in trouble with TESCREAL and AGI and all the BS. It's a machine. It can do things that we can't do and do them really well, and it can't do some things we can do, which will get us into our final piece of the day. But it still is inevitably going to reflect, somehow, the way we think.

Mhmm. And I think that's what's occurring here, and it does get closer. Yes. Yeah. One thing that came to mind for me, and maybe I've raised this question on the show before: in a time where the importance of fact, or the importance of having a specific answer versus the potential for a lot of different types of answers — that fact layer is becoming deprioritized to a certain degree, with technology like this and everything.

And I wonder if the probabilistic nature of AI is a symptom or a cause Mhmm. Or completely coincidental. Yeah. It's a it's a great question. It's a very now technology.

When you really think about where we are, like, globally on, like, a political you know, from a political perspective, from so many different aspects, the the idea of fact and truth has really kind of become murky. And and to a certain degree, like you were saying earlier, some there's a lot of people that just kinda don't care about that. Like, that's not as important as it used to be. And then we've got this technology that kind of revels in that. Like, it's like, no.

This is a feature, not a — you know, this is a feature of the technology. Right. And when you get to quantum computing, as little as I understand it, it's even more the case that we live in an approximate world, and we fooled ourselves all these years, perhaps since Leibniz, about the deterministic, set nature of the world. Or maybe that's just not the case. By the way, Narain, at the bottom of the post, says this was created using ChatGPT Deep Research with the following prompt, and NotebookLM for the podcast, which is also here.

So he put in a podcast — he or she put in a podcast — and you can click and see the prompt that was given: Following a really great conversation with an old friend I reconnected with, during which we talked about the past, present, and future state of tech and AI, the following simple high-level framework, which I sketched out during the meeting, has been on my mind. And that's the framework that Jason just talked you through. Right.

So this is, by the way, so interesting. I missed that little footnote. And yet, while I was reading through the article, you know, some of the framings, some of the ordering, the kind of order that it was following, had me wondering. I was like, they must have, to some degree, they must have used AI. So this — oh, that's so fascinating.

And so then I have to say, like, I read this top to bottom, and I have often felt like my AI-dar is so strong that when I start to think that I'm reading AI output, I lose interest to a certain degree. Yeah. You know what I mean? If someone shares something with me and it's so obviously completely AI generated, I'm less likely to really dive in and give it twenty minutes of my time. But I did with this, and it was really well written, and I was captivated by it.

And then to find out at the end, it was organized and everything through NotebookLM. Well — it helped. Yeah. I don't know.

Can you do audio from from the page or not given your circumstances? I can is there There's a podcast at the top of it. I'm just wondering if we what the first minute or so sounds like. Oh, I see the podcast. Let me see here.

I can certainly try and see if sorry. I'm tap dancing while I'm navigating different That was my fault. I shouldn't have done that to Jason. Okay. Are you am I showing it?

Let's see here. Okay. Alright. So I've added the page. Now let's see if I click on podcast and play.

Hey, everyone. Welcome to another deep dive. You know how much we love exploring AI's potential. Right? Let's jump ahead a little bit.

It's the same the same jerks all the time. About this. The framework argues that we're actually on the cusp of a whole new era of AI, one where it can be both creative, like our minds, but also totally reliable. Woah. Hold on.

Woah. AI that's as flexible as us, but way more than. I'm so surprised. That's that's blowing my neural network, man. I I like AI, Jason.

I'm stealing that. That's great. Fascinating. Happy you put that in there, and my mind is a little blown. Oh, one more yeah.

One more thing to mention on that is that, coincidentally, there was a story in the Wall Street Journal about the company being started by Bret Taylor, cofounder and CEO of Sierra. He's also, I think, now the chairman of — what is he? Chairman of OpenAI. And he was co-CEO at Salesforce. Bret's a good guy.

He's been around. So he was talking about agents. And what he was trying to say in this article was, to this point of approximation: accept that it's imperfect. Like, get over it. Mhmm.

There you can't wait for them to be perfect to start using them. Go ahead and start using them. Of course, I hope you don't use them for nuclear war. But rather than say, will AI do something wrong? Say, when it does something wrong, what are the operational mitigations that we put in place to deal with it?

So you've got to deal with the imperfections of the machine. That's an entirely new way to think about these machines. They were... Totally. ...definitive answer machines. And this is, right there, an example of what that post was talking about.

Well, yes. And it's like, do you throw the baby out with the bathwater? There's a lot of benefit to get from these things. And if we decided no, because sometimes they get things wrong, we will not use them, that's getting rid of something that could potentially be a solution for some very major things down the line.

And we would, you know, say no because it doesn't get everything right. And is that the right approach? You know? Yeah. Interesting.

Let's end with a patented Jeff Jarvis rant. So I feel a little bad, because I'm gonna give credit where it's due and tell you who wrote this: a professor of English named Hollis Robbins. She had been a dean, is no longer, and for reasons that will become clear, I'm glad she's not. Because she wrote up a newsletter post about AI in education that just scared me to death.

And I saw it quoted around the socials a bit. A few paragraphs into it, the subhead is "immediate faculty and dean action required." That gives me a cold sweat. Like, oh my god, what are they gonna make us do now?

Right? So this is her prescription, and I'm gonna quote: "Every faculty member should begin to write a detailed memo specifying the following: What specific knowledge do I possess that AGI does not? What unique insights or capabilities can I offer that exceed AGI systems?

Which students, and in which topics, would benefit enough to pay to learn from me, and why? Faculty who cannot produce this memo with concrete, defensible answers have no place in the institution. There is no middle ground." OMG. This is the dumbest thing I've ever read about AI.

It's just ridiculous. And what scares me is that I'm afraid Elon Musk is gonna see this and argue that you should fire every professor who can't do this. And this is absurd. This is ridiculous. This is wrong.

And this, ladies and gentlemen, is what happens when you believe the BS of AGI. When you start to believe that there is a mythical machine out there that's going to do everything better than anybody, then you're going to constantly try to replace the people you don't like, which, I'm guessing, means this person is no fun in the faculty meeting. And she's going to try to get rid of all her colleagues, and I would guess herself, unless she believes she has something that's defensible, that her courses are defensible. But this is ridiculous. And it's the fruit of the hubris of the AI boys.

Mhmm. When they argue this machine is gonna be able to do all this stuff, people are gonna extrapolate to these ridiculous ends. So I'm sure that Professor Robbins is a very nice, well-educated person and a good teacher, but, oh my god, do not let this get around. This takes it to a... Yeah.

To an extreme level. Quote: "In the AGI era, the only defensible reason for universities to remain in operation, if we even exist, is to offer students an opportunity to learn from faculty whose expertise surpasses current AI. Nothing else makes sense." Wow. So I didn't wanna make it into a loud, crazy Jeff rant, because there's a human being involved here, Professor Robbins.

But, oh, lord. No. No. Yeah. Interesting.

This and every other story that we have talked about on the show... I realize I don't really say this in the podcast, so maybe I need to let you know that we have show notes, elaborate show notes, with links to every story we talk about, each and every week. If you go to aiinside.show, or if you subscribe to the podcast, you'll get them attached to the podcast in the notes section. You'll get all the links to the stories.

So if any of these stories sound interesting, especially this week, you know, I wasn't showing very much on screen because I'm on location, and it's harder to do on a smaller laptop and everything. So if you wanna read these stories, and I highly recommend you do, aiinside.show for the show notes. Yeah. And to those who've watched this live, I don't know if you knew this, Jason, but you've been pretty pixelated the whole time. Yeah.

But this software, StreamYard, records locally. Mhmm. So if you find this irritating, I presume that the finished version put up on YouTube... Yeah. ...will be clean. And if you wanna recommend this fascinating discussion to your friends, maybe you don't wanna recommend the live feed on LinkedIn or on Twitter, but the finished version will be up on YouTube.

And where will that be? Yeah, youtube.com/@aiinsideshow. Or just go to YouTube and search for AI Inside or AI Inside Show. You'll link directly to it.

And, yes, I do take the recordings and do a final edit, and that's what you actually get on YouTube. And then, of course, the audio podcast is always a pretty curated, produced version, so everything will sound really good. So, yes, don't trust me. I'm glad we got through the show this week even though you took some time off the slopes. That's right.

Saved you from any broken legs. And, I hope you have a great, great time on the rest of your break. You deserve it. Thank you. Will do, Jeff.

And thank you: we moved this recording time an hour early, and you're always super flexible. I appreciate it. Jeffjarvis.com is the site to go to to find all of Jeff's work. So many books that you need to catch up with, including The Web We Weave.

Where are they? Oh, hold on. Hold on. I can't see it in the... Oh, I see. Okay.

There we are. The Gutenberg Parenthesis, now in paperback, Magazine, and The Web We Weave. The Gutenberg Parenthesis is also out in Spanish and will soon be out in German. Right on. Love it.

And then new books coming somewhere down the line. I'm working on it right now. Right now. As I said, everything that you need to know about the show can be found at aiinside.show, including the show notes, of course. The video version, the audio version, links to the YouTube channel, it's all there.

So if you need to remember one place, just go to aiinside.show. If you love this show, please leave a review. That is one really great way for new people to discover what we're doing with AI Inside. So go on to Apple Podcasts or wherever you get your podcasts. If they allow you to rate or review, Pocket Casts allows you to rate, please do that so we can raise visibility on everything that we're doing.

And then if you really, really, really love the show, you can support us directly on Patreon, patreon.com/aiinsideshow. You get a Discord community and ad-free versions of the show. Often, those ad-free versions go up before any of the other versions; once I'm done editing and everything, you get it first. There are some hangouts.

We've got an executive producer level, where you get a t-shirt, an AI Inside t-shirt, when you do that. And I'm happy to say we have another new executive producer. This is the most we've ever had. We've got DrDew, Jeffrey Marraccini, WPVM 103.7 in Asheville NC, Dante St James, Bono De Rick, and our newest executive producer, Jason Neiffer!

Is it Neffer? Neefer? Regardless, I'm sorry if I got your name wrong, but thank you so much for your support. Thank you. Thank you all.

It means so much to us and to the health of this show. So we appreciate you. patreon.com/aiinsideshow. I promise my video will be better next week; I will be back home. But thank you, everybody, for watching and for listening, for being here.

Thank you again, Jeff. And we'll see you next time on another episode of AI Inside. Take care, everybody. Bye bye.