Amazon Saves 4,500 Developer Years with AI
August 28, 2024 · 54:33

[00:00:01] This is AI Inside Episode 32, recorded Wednesday August 28th, 2024.

[00:00:07] Amazon Saves 4,500 Developer Years with AI.

[00:00:12] This episode of AI Inside is made possible by our wonderful patrons at patreon.com/aiinsideshow.

[00:00:17] If you like what you hear, head on over and support us directly and thank you for making

[00:00:22] independent podcasting possible.

[00:00:29] Hey, what's going on everybody?

[00:00:31] Welcome to another episode of AI Inside, the show where we take a look at the AI that

[00:00:36] is layered like a wonderful lasagna through all sorts of technology that we use on a regular

[00:00:41] basis or might use in the future when we've got household humanoid robots wandering about.

[00:00:49] That's just a little bit of a glimpse into what we might be talking about today.

[00:00:52] I'm Jason Howell, one of the hosts, joined as always by my co-host Jeff Jarvis.

[00:00:57] How you doing Jeff?

[00:00:58] Hey boss, good to see you.

[00:00:59] Hey, good to see you too.

[00:01:00] Welcome for another episode.

[00:01:03] We've definitely got a great show with some excellent news to talk about before we get started.

[00:01:09] Huge thank you to those of you who support us on patreon.com/aiinsideshow,

[00:01:13] like this week's featured patron, my sister, Kim Blazer.

[00:01:21] Kim, you've been supporting me from day one.

[00:01:24] Me and Jeff.

[00:01:25] Thank you sis.

[00:01:25] It's awesome.

[00:01:26] Thank you, Kimmy.

[00:01:27] So good to have you on board.

[00:01:29] Anyone can get on board.

[00:01:30] You don't have to be my sister in order to support us.

[00:01:33] Patreon.com slash AI Inside show.

[00:01:36] Also, if you are watching us live because week after week we continue to get more and

[00:01:41] more live viewers of the show when we're recording it, please subscribe to the podcast

[00:01:46] so you don't miss future episodes. That can easily be done at aiinside.show.

[00:01:51] All the information you need is there waiting for you.

[00:01:55] All right.

[00:01:55] So let's get right into it.

[00:01:57] Starting with strawberries. What I'm talking about is The Information, which reported, citing its

[00:02:04] sources, that OpenAI is targeting a fall release for its rumored Strawberry AI model.

[00:02:11] This at one time was called Q*, and they reportedly ended up changing the name.

[00:02:17] This is the model that is reported to be capable of solving complex math and programming

[00:02:24] problems much better than what we've seen out of current models.

[00:02:29] Also, though, part of the report is that OpenAI is working on another AI model called

[00:02:37] Orion, which would actually utilize Strawberry's high-quality training data to surpass

[00:02:46] GPT-4's abilities.

[00:02:48] So essentially, Strawberry creates that high-quality data that's then fed into Orion as its

[00:02:57] dataset.

[00:02:58] It's a kind of teacher model, essentially, which is interesting.
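The reported pipeline, with one model generating training data for another, can be sketched roughly like this. This is a hedged illustration only: `teacher_solve` is a stand-in function, not OpenAI's API, and the JSONL record shape is an assumption, not anything from the report.

```python
# Hypothetical sketch of a teacher-to-student data pipeline: a stronger
# "teacher" model (Strawberry, per the report) produces high-quality
# solutions, which are collected as training records for another model
# (Orion). teacher_solve is a stand-in, not a real API call.
import json

def teacher_solve(problem: str) -> str:
    # Stand-in for the stronger model; a real pipeline would call it over an API.
    canned = {"2 + 2": "4", "12 * 12": "144"}
    return canned.get(problem, "unknown")

def build_training_set(problems):
    """Collect (prompt, completion) pairs as JSON Lines, one record per line."""
    records = [{"prompt": p, "completion": teacher_solve(p)} for p in problems]
    return "\n".join(json.dumps(r) for r in records)

dataset = build_training_set(["2 + 2", "12 * 12"])
print(dataset)
```

The point of the sketch is just the direction of data flow: teacher output becomes student training input.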

[00:03:03] I'm super curious about this idea because on its surface, and we've talked about it before,

[00:03:09] the idea of taking AI-generated output and using that as a dataset for

[00:03:15] an AI system just sounds wrong.

[00:03:20] But I don't know. Some people have pointed out that it might seem

[00:03:25] wrong, but really it would just be diluted information.

[00:03:29] But I don't know what are your thoughts on that?

[00:03:31] Two things.

[00:03:31] I think the first is that Strawberry is supposedly capable of reasoning, and when I

[00:03:35] wrote about all these topics, the point was that it forces us to define reasoning.

[00:03:40] Yeah, what is that?

[00:03:41] What is reasoning?

[00:03:41] What does it mean to say that it can reason something if you give it a problem?

[00:03:45] The issue for AI remains that it has no touch to reality.

[00:03:49] It has no experience.

[00:03:51] It has no way to experience things.

[00:03:53] So can it come up with its own human-like algorithms for figuring out the world,

[00:04:00] reasoning things through, knowing what the impact of something is?

[00:04:04] How does it test hypotheses against reality?

[00:04:08] So that's one.

[00:04:09] The second is this synthetic data thing.

[00:04:12] Yes.

[00:04:13] I remain as cautious as can be about that.

[00:04:19] I've mentioned oftentimes my friend Matthew Kirschenbaum,

[00:04:21] who wrote a piece in The Atlantic about the textpocalypse, about feeding upon your own

[00:04:26] entrails until you end up with a gray goo.

[00:04:28] Sorry for that.

[00:04:29] Right.

[00:04:31] And The New York Times, I think it was, had a story this week.

[00:04:36] The illustration they gave is that they gave a model a bunch of handwritten numbers

[00:04:40] and then had it learn from its own output over and over and over again until

[00:04:44] everything just starts to look the same.

[00:04:49] And I don't understand the logic of artificial data, synthetic data.
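The feedback loop behind that handwritten-numbers illustration can be sketched in a few lines. This is a toy, hedged model of the effect: the "train on your own output" round is approximated as a deterministic temperature-below-1 sharpening of an output distribution, which is an assumption for illustration, not the Times' actual experiment.

```python
# Toy sketch of "model collapse": if each generation of a model slightly
# oversamples its own most likely outputs (modeled here as temperature < 1
# sharpening) and the next generation trains on that output, diversity
# collapses -- everything starts to look the same.

def sharpen(probs, temperature=0.8):
    """One generate-and-retrain round, modeled as distribution sharpening."""
    scaled = {k: v ** (1.0 / temperature) for k, v in probs.items()}
    z = sum(scaled.values())
    return {k: v / z for k, v in scaled.items()}

# Start with a fairly diverse distribution over four "styles" of output.
probs = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}
for generation in range(20):
    probs = sharpen(probs)

# After 20 generations, nearly all probability mass sits on the single
# most common style; the rarer styles have effectively vanished.
print({k: round(v, 4) for k, v in probs.items()})
```

The gray-goo worry in the conversation is exactly this dynamic: variety drains out of the distribution with each pass over synthetic output.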

[00:05:00] Yeah.

[00:05:01] I have a hard time with that too.

[00:05:02] In the sense that if you were saying this is just a machine trying to train the machine

[00:05:08] better, that might make more sense to me.

[00:05:10] And it's doing some routine to do that.

[00:05:12] But once again, it has no tie to human reality.

[00:05:17] And so it's making up something based on making up something.

[00:05:20] And I'm just dubious

[00:05:21] it's going to work very well.

[00:05:22] I know there are experts who say I'm full of crap.

[00:05:24] And I don't know enough about the science and the computing to get to the bottom of it.

[00:05:31] But I'm dubious.

[00:05:32] So we'll see whether Strawberry in fact wows us.

[00:05:34] I'm sure it'll have some great parlor tricks.

[00:05:38] And maybe it'll be very useful in new ways.

[00:05:41] But I will say again, it ain't artificial intelligence.

[00:05:45] It's not general intelligence.

[00:05:47] It's not AGI.

[00:05:48] Until I see better,

[00:05:49] I'm not going to believe it.

[00:05:50] Yeah.

[00:05:51] Well, that is a good question.

[00:05:53] Like, I'm not sure that I've seen AGI called out in relation to this.

[00:05:58] But we know that folks like Sam Altman who are creating models like this,

[00:06:04] they really want you to believe that that is the path.

[00:06:07] That's the path.

[00:06:08] That it's all just around the corner.

[00:06:10] When I was a little kid, my parents would say that Christmas was around the corner.

[00:06:14] And I really wanted to walk around the corner to find it.

[00:06:16] That's where Christmas is, right?

[00:06:18] To me, AGI is just like that.

[00:06:20] It's perpetually around the corner.

[00:06:23] It's perpetually just out of arm's reach.

[00:06:25] We're there on Third Street.

[00:06:27] It's flying cars.

[00:06:29] Right.

[00:06:30] Right.

[00:06:30] Flying cars.

[00:06:31] We're going to have it in 10 years.

[00:06:32] I guarantee it.

[00:06:33] And then it's always 10 years later when it's actually going to happen.

[00:06:37] That and fusion.

[00:06:39] Yes.

[00:06:40] Yes, that right.

[00:06:41] That too.

[00:06:41] In the case of Strawberry, a fall release is possible, according to the report,

[00:06:49] delivered as part of ChatGPT, but a smaller version of the model could actually still get pushed

[00:06:57] to 2025.

[00:06:59] So it could be pushed even further.

[00:07:01] All that's to say that there is no obvious launch date for any of this.

[00:07:07] We could see it sooner rather than later or later or never.

[00:07:11] Right.

[00:07:11] What was interesting too about the story is that OpenAI showed it to the feds.

[00:07:17] That's right.

[00:07:18] That's very important.

[00:07:19] Constant effort.

[00:07:21] They're doing a very smart job of doing PR, otherwise known as lobbying,

[00:07:25] with the government.

[00:07:26] They're saying, oh, we're the ones you should listen to.

[00:07:28] We are the smart ones.

[00:07:29] Yes, we want regulation, but we should help write that regulation.

[00:07:31] And we're going to show you this amazing tool we've done before we show anybody else.

[00:07:35] And they're playing to governmental ego, which is kind of fascinating.

[00:07:41] Yep.

[00:07:41] Yep.

[00:07:43] Yeah, so very, very interesting.

[00:07:45] And by the way, what you're talking about, showing it to the national security officials,

[00:07:50] aligns with the collaboration they announced earlier

[00:07:56] this month with the US AI Safety Institute.

[00:07:58] So this is kind of part of that as well, kind of a proof-in-the-pudding sort of thing.

[00:08:04] Right.

[00:08:05] I knew the second I saw the headline of this article that we had to talk about it.

[00:08:11] The Verge article by Sarah Jeong about the Pixel 9's Magic Editor.

[00:08:15] Of course, last week on the show, I showed off the Pixel 9 and some of the features.

[00:08:20] And of course, one of those AI features is the magic editor, which is

[00:08:25] kind of a part of Google photos.

[00:08:28] So the photos experience, you can go in there and find a photo in your photo reel

[00:08:32] and go to edit it.

[00:08:33] And then you hit the magic editor button, which is of course denoted by like, you know,

[00:08:37] graphical stars and colors and everything.

[00:08:41] It's an animation.

[00:08:42] Yes, it's totally magic.

[00:08:43] It's enticing you to go there.

[00:08:45] And then when you do that, you can take a portion of your photo that, you know,

[00:08:49] your real photo and reimagine it.

[00:08:51] You can say put daffodils here.

[00:08:53] Or in the case of Sarah's article, you know, remove the person from the Tiananmen Square photo.

[00:09:00] And I don't know if that was just used as an example of what could happen.

[00:09:04] That might not have actually been done with Reimagine.

[00:09:08] But Sarah did show some images. Like, here's a photo of a stream,

[00:09:12] and then, through the use of the Reimagine tool, she was able to edit in, very easily,

[00:09:18] and I think that's a big part of her point here, at the click of a button,

[00:09:23] a crashed helicopter that looks reasonably convincing. Or a woman

[00:09:28] sitting on a carpet in her apartment, let's say, then edited with Magic Editor

[00:09:35] to include a syringe filled with red liquid, a bottle of wine,

[00:09:41] something that resembles lines of cocaine or some sort of powdery drug on the

[00:09:46] carpet. And I think her point is that with these kinds of tools, the bar is lowered so far

[00:09:57] that as she puts it in the headline, no one's ready for this, that the assumption that photos

[00:10:03] equal reality has been challenged before. But this is the biggest challenge that we've seen

[00:10:09] yet because the masses now have access to this capability with very little effort needed in

[00:10:17] order to do it. And I think I know where your mind is at on this, but I'm curious to hear

[00:10:23] your thoughts. Yeah, and I get the argument. It's the argument made in the next story you

[00:10:28] put up too, which is also from The Verge and is kind of a follow-up. The difference here, they say,

[00:10:34] is the scale and speed. And so they stole a famous Mike Masnick headline about Section 230

[00:10:40] and adjusted it here: Hello, you're here because you said AI image editing was just like Photoshop.

[00:10:47] So it goes in and says how you're wrong, that it's a bad faith argument because

[00:10:50] AI can do so much more and so much faster, which is the argument made about so much of the

[00:10:54] Internet. Okay, stipulated: it can do more and faster. But let's remember that photography

[00:10:59] is less than two centuries old. And even in old fashioned dark room photography,

[00:11:06] I've mentioned before the famous incident of a photographer thinking that Abraham Lincoln

[00:11:11] didn't look distinguished and presidential enough. So one of the famous portraits

[00:11:16] of Lincoln is his head put on Calhoun's body, Calhoun being a slave owner, irony of ironies,

[00:11:22] and a horrible human being. And this came up at last year's Association of Internet

[00:11:27] Researchers conference I went to; we were talking about all of this, deepfakes and

[00:11:30] everything else. And one of the researchers said, and I quote this in my next book, The Web We Weave:

[00:11:35] we forgot that we already figured out that we can't know truth. And in any of this,

[00:11:41] it's just simply true that we have to judge the medium, judge the source,

[00:11:48] judge the veracity based on motive of what people are giving us. And there are tools,

[00:11:53] some better, some faster than others. But there are plenty of tools that let you create

[00:11:57] anything you want. That's fiction, that's film, that's anything. So I put up on the rundown. I

[00:12:04] don't know if you can get to it because it's a machine. A story from 1990, which had the exact

[00:12:11] same fears, of course about Photoshop and saying in there that, oh my God, look at the things

[00:12:17] that could be done with photos. And I remember being at the New York Daily News in about 1991,

[00:12:24] where I wowed them showing them what could be done with photo manipulation. And they hadn't

[00:12:29] seen this before. They hadn't really seen Photoshop and stuff. And I showed them what was happening

[00:12:34] with it. And there's always this little stage of shock you go through. I didn't think we

[00:12:41] could do that. Oh my Lord, what's the implications? Well, the implication is always that you

[00:12:45] got to judge for yourself. And yes, there's now a factor, a new factor you've got to judge.

[00:12:50] But I'm not terribly concerned. Now the other thing about AI is right now you can tell it in a flash

[00:12:55] because it looks so fakey. But it's made up from not just what's manipulated through

[00:13:00] the iPhone. But if you look at the stuff that AI makes up on its own, you can tell immediately

[00:13:07] it has that strange sheen about it. Yeah. Yeah. This came up last night on

[00:13:13] Android Faithful, we were talking about this. And the example of the Taylor Swift thing that

[00:13:21] we talked about a couple of weeks ago, the AI-generated image of Taylor Swift supporting

[00:13:25] Donald Trump and like, see, this is what happens when more people have access to these tools.

[00:13:32] And I was like, yeah, but what happened when that was shared? Immediately people called BS.

[00:13:38] Right. It's not like suddenly everybody was won over because this thing existed.

[00:13:45] It was immediately called out and widely spread that this thing was fake. And yeah,

[00:13:51] I mean at the end of the day, but my feeling when I read through that is like,

[00:13:55] Sarah, I'm a fan of the work that you do and everything, but it just,

[00:13:59] it feels very reactive like, oh, wait a minute, the technology is now too good.

[00:14:03] And we've got to do something and I don't know that she's necessarily calling for

[00:14:09] slowing down development or just raising awareness potentially about this stuff.

[00:14:16] But I mean, the challenge, the trick, is the same as it ever was.

[00:14:22] As we've talked about many times, it's the people, not the tools.

[00:14:25] Just because a tool is suddenly better than it was before doesn't immediately make it

[00:14:29] a bad tool. People can and have done this for centuries.

[00:14:36] And I'll do it again. I'll plug the book again: The Web We Weave, coming out this October.

[00:14:40] WEB20 is the discount code for 20% off if you find it at Basic Books. Okay.

[00:14:46] I go through a story, which I won't dwell on right now, about what's called fama.

[00:14:51] It's the system that people used before they had print, which was social.

[00:14:56] You knew the innkeeper, talked to the people who came through town, and the innkeeper

[00:15:01] cared about her reputation, so you tended to trust the innkeeper. But that salesperson over

[00:15:05] there, you know that he's full of crap and makes stuff up. That is to say that it's in the

[00:15:10] eye of the beholder, that it's our responsibility to decide. And no, I don't think this

[00:15:15] requires all kinds of new classes in media literacy and tech literacy and all that.

[00:15:19] It just means that we've got to understand the human motivations of why someone might

[00:15:23] make up something like that and make us suspicious enough to ask. And especially

[00:15:27] anything you see that is too good to be true, stop. Just stop and ask what could be behind

[00:15:34] this. It could be a great joke. It could be a great insult. It could be a conspiracy.

[00:15:39] You don't know. And you need to look into it more. What you're looking into is not the

[00:15:42] technology. You're looking into how people manipulated it in whatever tool for their purposes.

[00:15:49] Yeah. So I'm not scared. I'm not scared. Yeah, it doesn't concern me either, but certainly a lot

[00:15:56] of people reacted to that. Yeah. And I think a lot of people do feel that way about it. And I think

[00:16:02] really at the end of the day, it comes down to the uncertainty tied to a new technology that is

[00:16:08] still kind of misunderstood, I suppose, or kind of making itself understood slowly.

[00:16:15] And actually, this is a topic that I'm sure we're going to have plenty of opportunity

[00:16:20] to talk with Sarah about when she connects here in a little bit.

[00:16:25] Is it GAN-nett or Gan-NETT? I never know how to say Gannett. Gannett is shuttering its Reviewed

[00:16:34] product review website. This is going to happen on November 1st, 2024, according to sources

[00:16:41] at The Verge. The content on the review site had been scrutinized for its authenticity.

[00:16:49] And this all stemmed from an October 2023 investigation by its own unionized staff,

[00:16:56] who questioned the writing styles in the reviews and could not verify the authors.

[00:17:01] They went looking on LinkedIn and other places online and could not verify that the authors

[00:17:05] actually existed, basically accusing the site of using AI to generate review content, which

[00:17:11] Gannett then attributed to a third-party marketing company, AdVon Commerce,

[00:17:19] which later denied using AI to write the articles. But people internally there said, oh yeah, AI has

[00:17:25] been used to write some of their content. Anyway, so it's shutting down. What do you think about

[00:17:29] this? Gannett was using AI for sports stories too, not really generative AI, but a different

[00:17:33] structure. I think this is bad in a couple ways. One is that they use this stuff,

[00:17:39] and two is when the employees were whistleblowing on it, they ended up losing their jobs.

[00:17:45] And so kind of everybody lost there. They shouldn't have used it in the first place. If you're going

[00:17:50] to have a review site, I expect human reviewers to put their opinions on the line to say, I used

[00:17:55] this product service, watch this, whatever, right? But if the truth is, I talked to the,

[00:18:01] there's an executive at another one of these awful companies I talked to some time ago

[00:18:05] who said, you don't understand, Jeff, we're in a war about reviews. And so he justified using AI

[00:18:11] to make up reviews, which is to say that reviews online now pretty much have no credibility whatsoever,

[00:18:17] but it's not just reviews. It's a microcosm of what's happening to the web. People say Google

[00:18:22] is getting worse. Maybe it is in some ways, but I think the real problem is the web is

[00:18:26] getting worse. The web is getting ruined by this onslaught of junk. And it's not just

[00:18:32] synthetic data ruining AI. Synthetic data is ruining the web. And so, yeah, Gannett, I think

[00:18:40] ruined its credibility in reviews and probably had to get rid of this. The fact that the employees

[00:18:45] were the ones who blew the whistle and then lost their jobs puts the blame in the wrong place.

[00:18:50] They should have gotten new jobs, God damn it. But yeah, a lot of the crap that we're

[00:18:56] seeing on the internet now is AI generated and it's ruining it for all of us.

[00:19:00] Yeah. I mean, reviews are just one of many types of content that can suffer in the face of

[00:19:07] this sort of thing. But it is a type of content that I'm personally very

[00:19:12] familiar with, because I review products, and my own kind of ethical approach on this is

[00:19:19] I won't write or speak about a product unless I truly feel it deserves it,

[00:19:25] based on my particular use. And if I'm looking for review content from someone else,

[00:19:30] I want to know that it's derived from some sort of personal experience.

[00:19:34] Something real and tangible, not an AI that just goes out, scours, and finds the general

[00:19:41] sentiment about a certain thing and then turns that into a declarative statement.

[00:19:49] Especially if you also have an affiliate link to buy. Oh, you know, those companies that are

[00:19:56] making this stuff up are going to make up reasons to get people to buy this stuff, and so

[00:20:00] credibility goes nowhere as a result. I used to be a reviewer myself, of TV,

[00:20:06] and I vowed that I would never use the fast forward button. I'd watch every damn minute

[00:20:09] of some of these horrible long miniseries. You don't know how I suffered. But yeah,

[00:20:14] a reviewer has a responsibility to the audience to say I'm spending my time so you can spend yours

[00:20:20] better. Right? Yes. I watched this entire series so you don't have to. A.D. was the worst: I had

[00:20:29] to watch a 14-hour miniseries, and I watched every damn minute of it. A.D. I vaguely remember that one.

[00:20:36] Vaguely remember hearing about that. I don't know that I actually watched it. You

[00:20:43] remember it? I do remember it existing. It was up there with The Thorn Birds. Did you have to review that?

[00:20:50] Oh, yes. Oh, yeah. Yeah, that was a big deal at the time too. My parents were way into that one.

[00:20:55] Anyway, scientists from China and the United States have developed a pretty groundbreaking

[00:21:02] AI model called ActFound, which can predict drug bioactivity. It could make drug development

[00:21:09] faster and more cost-effective. The model was trained on a pretty extensive dataset,

[00:21:16] including over 35,000 assays and 1.6 million experimentally measured bioactivities;

[00:21:26] the training data was sourced from widely used

[00:21:34] chemical databases. I guess the one big challenge is the fact that different assays

[00:21:41] have differing units, values, ranges, and measurement metrics, making them

[00:21:49] incompatible with each other, let's say. And so that's a challenge for the AI. But

[00:21:54] I thought this was an interesting story. I'm always very curious to see how AI can transform

[00:22:00] things like exactly this, taking a kind of supercharged perspective on what we could do before,

[00:22:08] applying the power of AI and its analytical capabilities to something really

[00:22:15] important. Yeah, this is where it really does matter. I was reading up on AI being able to

[00:22:20] predict where tumors appear earlier than the human eye could catch them. I think I once

[00:22:28] gave an address to a pharma company in Switzerland. Very nice trip, good chocolate.

[00:22:35] And the thing I never realized about pharma is that what they

[00:22:40] trade in is molecules. They're always on the hunt for a molecule and then a use for it.

[00:22:49] And that makes it a little simpler to get your head around, that there's a finite set of, well,

[00:22:58] a test that exists against it. And one thing about the pharma industry is that they go

[00:23:03] through obviously a tremendous amount of failure. They try a hypothesis, it doesn't work, they do

[00:23:08] something else. And one of the problems for the industry has been that they didn't share their

[00:23:13] failures, because the attitude was, well, let the other guy go through the same stuff we went

[00:23:17] through. When it gets to AI and training sets, I hope it motivates pharma to share

[00:23:23] that data more openly so that these systems can be smarter and that everybody's going to

[00:23:28] be better off as a result. And I'll be curious just kind of ethically where that goes in that industry.
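The assay-incompatibility problem mentioned above can be sketched concretely. This is a generic, hedged illustration of one common normalization approach, not ActFound's actual method: convert concentration measurements to a shared log scale (pIC50) and then standardize within each assay. The unit table and assay records are made up for illustration.

```python
# Hedged sketch: reconciling assays reported in different units/ranges by
# converting IC50 concentrations to pIC50 (-log10 molar) and z-scoring
# within each assay, so values become comparable across assays.
import math
import statistics

UNIT_TO_MOLAR = {"M": 1.0, "mM": 1e-3, "uM": 1e-6, "nM": 1e-9}

def pic50(value, unit):
    """Convert an IC50 measurement to pIC50 = -log10(molar concentration)."""
    return -math.log10(value * UNIT_TO_MOLAR[unit])

def zscore_per_assay(assays):
    """Normalize each assay's pIC50 values to mean 0, stdev 1."""
    out = {}
    for assay_id, measurements in assays.items():
        vals = [pic50(v, u) for v, u in measurements]
        mu, sd = statistics.mean(vals), statistics.stdev(vals)
        out[assay_id] = [(v - mu) / sd for v in vals]
    return out

# Two hypothetical assays reporting in different units become comparable:
assays = {
    "assay_A": [(500, "nM"), (50, "nM"), (5, "nM")],
    "assay_B": [(10, "uM"), (1, "uM"), (0.1, "uM")],
}
normed = zscore_per_assay(assays)
print(normed["assay_A"])
```

After normalization, both assays yield the same relative ordering and scale, which is the precondition for pooling them into one training set.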

[00:23:35] Yeah, indeed. Indeed. I'm very curious to see how that proliferates and influences

[00:23:41] development of those things. And then finally, robotics, which is kind of AI;

[00:23:52] robotics and AI really seem to travel in the same direction. And I think in the future

[00:23:57] this is going to become more and more the case. But Mark Gurman at Bloomberg wrote about Apple's

[00:24:03] exploration of robotics as its next pursuit, quote, beyond the iPhone. Which brings back

[00:24:13] memories of their self-driving car initiative that basically went away, according to

[00:24:21] sources. Here Apple is looking at ways to bring robots into the home. And Mark Gurman points out that

[00:24:30] essentially the self-driving car project was a giant rolling robot at its core. And so,

[00:24:38] some internally are saying that by shuttering that department, they're able to redirect more staff

[00:24:46] toward this goal with a much higher focus. But it's still going to be

[00:24:52] a long time before we see any of this stuff happening. They have a tabletop device, codenamed

[00:24:58] J595, that has an iPad-type display, cameras, and a base with a robotic actuator, a product that

[00:25:07] Gurman says should arrive in 2026 or 2027. But who the heck knows? I mean,

[00:25:13] As Rob Collins asks in the comments, what kind of robot would that actually make?

[00:25:18] When I hear this notion of a tabletop robot, I can't envision what that does. Shuffle some

[00:25:25] cards for me? I mean, it's probably my paucity of imagination to figure out what that might be.

[00:25:30] But that's the latest description. We'll just have to see. It's a solution looking for a problem

[00:25:37] and maybe they'll find it. Yeah, maybe, maybe. And then, let's see here, as far as things that

[00:25:45] the robot could actually do, according to the article: it could be a device that comes to

[00:25:54] you when you're preoccupied and you need to do something with a device, or whatever. So, okay.

[00:26:00] That's kind of hard to figure out. Or it could check on something in the house while you're gone.

[00:26:12] I just don't see Apple being in the vacuuming business though. Totally.

[00:26:18] Policy maybe, but not vacuuming. And if you thought Apple Vision Pro was expensive right

[00:26:23] out of the gate, just imagine how some robot that cleans your home, how pricey that's going

[00:26:29] to be. And then finally, real quick, and then we're going to take a break. I just,

[00:26:33] I came across this video from Disney Research. It's an old video, it's actually from 2020,

[00:26:40] but the whole point of it is a robot that is meant to imitate the facial movements

[00:26:47] of a human, in the eyes, and then also kind of these subtle head nods and things like that.

[00:26:53] So, you could have that on your desktop freaking you out.

[00:26:57] Yes. You know, put a skin bag over it and it'll be fine. It'll be fine.

[00:27:04] Anyways, interesting to look at nonetheless. All right, we're going to take a break and when

[00:27:08] we come back, we should have a fun conversation coming right up.

[00:27:14] All right, Jeff. Today is a day of winging it, because we had some plans for this episode.

[00:27:21] Plans change on you sometimes at a moment's notice. Technology.

[00:27:25] And so, yeah, sometimes, you know, it's not just AI that's imperfect. It's all

[00:27:30] types of technology. So we've got a bunch of stories here. You are more familiar with some

[00:27:35] of these stories than I am. So we're going to kind of like reverse the roles a little bit

[00:27:39] and you get to set up some of this stuff and let me know. And then I can, you know,

[00:27:43] kind of jump in and let you know what I think about it while we're talking about it.

[00:27:45] Sure. The way this works is I go through all week and I find AI

[00:27:49] stories and I put them in this rundown, and also in the TWiG rundown. And then I put them in here, and

[00:27:55] Jason has very good news judgment; he really does understand what is going to make for a good

[00:27:59] show and good discussion. He puts stuff up. We thought we were going to have a guest, so we did

[00:28:02] fewer stories. So we just went back in and found some more. So we'll go through a couple of these.

[00:28:07] One is from the Washington Post. I like to point these out because I'm very critical these days of the

[00:28:11] New York Times and the Washington Post on both politics and technology,

[00:28:14] so when I find something good and positive, I want to point it out. Yian Wu of

[00:28:20] the Washington Post wrote a story about why musicians are smart to embrace AI,

[00:28:25] and I figured with you, Mr. Musician, it might be interesting to see how they present this.

[00:28:31] But it's really about being able to use it for inspiration and getting past blocks.

[00:28:38] You know, as a writer I can understand this to an extent, but it's pretty hard for me to use it

[00:28:44] because I have specific things I need to say, and it doesn't really get me there. But I'm curious

[00:28:49] for you, Jason: if you're trying to get past a melody or past lyrics or past an idea,

[00:28:55] do you think this, in terms of your own creativity, would be helpful? Is it helpful?

[00:29:02] Yeah, I mean, and I've done videos to exactly this point on the TechSplitter YouTube channel.

[00:29:09] I am endlessly fascinated about the progress, the progression of artificial intelligence and

[00:29:17] music generation, not from the perspective that a lot of people seem to be, which is,

[00:29:21] oh, I can type in a prompt and it creates an entire song for me, and blah, blah.

[00:29:26] I'm less interested in that, although I respect, you know, that people do get

[00:29:30] interested in it. As a musician, this is exactly what excites me about AI. I see it as a tool for

[00:29:38] kind of giving me a little bit of an extra kind of pathway to go down in understanding like different

[00:29:47] options or different ideas or different, you know, melodies that might open up or unlock a

[00:29:53] certain direction in my mind when I'm working on a song and I've especially when I've written

[00:29:57] myself into a corner, which I'm sure has, you know, a direct analog to writing and authorship is,

[00:30:04] you know, at a certain point it's like my creativity has spent and it's taken me to a

[00:30:09] certain point. And it's like, I love the idea and I love how I got here, but I have no clue

[00:30:14] what to do from here. And sometimes I hit those points as a musician. If I was working with

[00:30:19] an actual musician in a studio environment, that would be where that collaborative kind of

[00:30:25] conversation happens where that person that I'm sitting next to says, Oh, well, you know,

[00:30:29] what just came to me? It's, you know, why don't we go in there and we tweak the bass a

[00:30:33] little bit? And then I'm like, Oh, wow, suddenly I'm alive again, which is what a producer

[00:30:36] does. Supercharges, right? Right. Exactly. Yes. Exactly. So that is that role of giving you

[00:30:42] a thought or trying something you haven't thought of. If you go down the story,

[00:30:45] I didn't listen to it all. But if you turn up the volume on the Washington Post story

[00:30:49] on the, on the upper right side and scroll down to the guy with the bass. Got it. So here's,

[00:30:58] bassist Mike Foley performs a solo. Lion wanted to create an unambiguous 100% human moment.

[00:31:05] That's this. This is an actual human bassist right now. Yes. And now. So now we scroll up

[00:31:13] to the next screen. Then to build the music's poetic character, Lion added AI narration

[00:31:20] of a dream about a labyrinth of stairs, as described by philosopher Walter Benjamin.

[00:31:26] Okay. This is AI now. So the AI, so is it the narration that's AI? Is it the accompaniment?

[00:31:36] AI. The accompaniment. Okay. And so he's playing also with AI.

[00:31:46] Musician. Yeah. That's a weird switch. It is. So then I added musical layers and drum

[00:31:57] patterns for the song. So I don't know if I like the result very much,

[00:32:03] but it also makes the sole creator able to do a lot. Totally. Well, and that's what gets me

[00:32:09] excited because I, as a musician, you know, I've been writing music and working with,

[00:32:15] you know, friends of mine writing music for almost 30 years now. And since, you know,

[00:32:20] when I lived in my hometown, Boise, Idaho, I was surrounded by people that I knew who were all

[00:32:26] learning this stuff along with me. And so we collaborated a lot. And it was really an

[00:32:30] inspirational time. Since I've been, you know, started a family and everything. I don't

[00:32:34] really know many people who do music. And so it's been largely kind of a solo operation.

[00:32:39] And I miss the collaborative thing because it's a lot of pressure for me to, like, come up with

[00:32:46] everything. Like I can do it, but sometimes like it's just not fun to have to do that. Like,

[00:32:50] I want to bounce ideas off of someone. So that's where this technology really helps.

[00:32:55] It's not really judgmental the way a producer is, but it's inspirational the way a producer can be.

[00:33:01] You know, it doesn't say, oh, that's crappy, Jason. You shouldn't do that.

[00:33:04] Yeah, you're not going to get it right. Totally. Right?

[00:33:05] It's not going to tease you. It's always going to be your friend, but it can give you ideas you

[00:33:10] didn't otherwise have. So I think I mentioned this on last week's show. We're talking about this a

[00:33:13] lot more as we go forward. I just wrote a syllabus for a course at another university

[00:33:18] I'm planning to be working with. I can't announce yet. And because actually, today is the

[00:33:22] day I am officially retired from CUNY and officially emeritus. Like today is the

[00:33:27] day. Yes, my congratulations. Wow. So I'll be working with another university soon,

[00:33:34] I hope. And it's all about AI and creativity. And my idea in the course is to get students

[00:33:39] just to get something they want to express. And then I want them to express it on their own,

[00:33:43] just like this, just like the bassist. A purely human moment. And I don't care if you hate it.

[00:33:48] I don't care if it's bad. I don't care anything. Just see what you can do on your own.

[00:33:52] And then to experiment with what AI can add or not. How is it a helpmate? How isn't it?

[00:33:57] What kinds of tools is it inspiring? Does it help finish things? That's what I want

[00:34:01] the students to explore and see what the relationship is in collaboration with AI.

[00:34:07] And I know we're going to have Lev Manovich on pretty soon. And Lev is a brilliant

[00:34:13] scholar at the City University of New York Graduate Center in digital humanities. But he's been

[00:34:18] doing a lot around this about trying to understand how AI becomes a creative tool.

[00:34:22] No different from a bass or a baton or a paintbrush. But different. So anyway,

[00:34:29] I thought you'd find this one interesting. And just find the...

[00:34:34] That's exactly it. I mean, when I opened the article and saw the subhead,

[00:34:39] which said today's experimenters are finding it can be more an inspiration than a threat,

[00:34:44] I was like, yeah, that's exactly how I feel about these tools. Because so many of the videos

[00:34:49] that I've done about this, the comment section ends up being either people who totally get it

[00:34:55] or totally agree with kind of my hypothesis around how musicians use these tools or the flip side,

[00:35:01] which is the doom and gloom AI is killing creativity. AI is killing... It's the end of

[00:35:10] blah, blah, blah. And it's like, no, it doesn't... It's not though. I mean, it might be a change.

[00:35:15] It might be a fork in the road from where we were to where we are going. But that's just

[00:35:21] technology. That's technology in a nutshell. We learn, we adapt and we use it in the new ways

[00:35:27] that we have options to now. I mentioned my friend Matthew Kirschenbaum earlier, from the University

[00:35:32] of Maryland. He was part of a task force at the Modern Language Association, the MLA, which is

[00:35:37] educators in that field. And they did a really good report on using AI in English in the classroom.

[00:35:43] And they said the printing press is a tool. The typewriter is a tool. It's a tool. And

[00:35:51] that's the right way to look at it. Yeah, yeah. So the next story is... Interesting.

[00:35:55] Yes. Andreessen, I found this from Benedict Evans, who is an analyst I think the world of.

[00:36:02] He's great. I subscribe to his newsletter. And he used to work at Andreessen Horowitz.

[00:36:07] So he put up a list of the top 100 generative AI consumer apps. What I found interesting about

[00:36:12] this is how few I've ever heard of. And that's maybe shameful given what we do right here.

[00:36:18] I should know more of them. But my point is... But there's so many.

[00:36:21] It's hard to keep up with them. And they haven't broken through. They haven't really broken out.

[00:36:24] So if you go down, there's a fair number we would know here. ChatGPT obviously,

[00:36:28] character.ai, which has kind of gotten half acquired. Perplexity, Claude.

[00:36:34] Hugging Face. ElevenLabs. Right. But then Vigil.

[00:36:42] But it falls apart pretty quickly. Let me just read some of them here. Well, have

[00:36:44] you heard of any of these before? Oh, Ideogram. I love Ideogram actually. That's great.

[00:36:49] Janitor AI? No. Quillbot? No. Poe, I think I might have heard of. Liner. Oh, yeah. For Serpo.

[00:36:58] Yeah, we did okay. Liner. Civitai. Civit AI. What are we doing? Civit AI. Yep. Heard of.

[00:37:04] Spicy Chat? That sounds dangerous. No. ElevenLabs. Luma we've heard of. Candy.ai.

[00:37:10] I don't think I've heard of. Crushon AI. Leonardo. DALL-E. Midjourney. Yes.

[00:37:19] Yodel. Yodio? Yeah, I don't know what that is. Cutout.pro. Photoroom. Gamma. VEED. Enough.

[00:37:27] The point is that there's just tons of these things people are putting money into.

[00:37:31] Oh, so many. I saw a separate story today that I think three quarters of all of the startups in

[00:37:36] the... What's the big, the big?

[00:37:42] The huge one?

[00:37:45] The incubator. The one that everybody goes to. You know what I mean? Yeah.

[00:37:51] ...are AI oriented. And the one Sam Altman used to run. So then there's top

[00:37:58] 50 GenAI mobile apps by monthly active users. Microsoft Edge comes in at number two.

[00:38:06] Photomath. Bing is up higher because it's tied to your phones. Brainly, which I don't think was

[00:38:11] on the other list. But same thing happens. It falls off really quickly. There's more brands

[00:38:16] here. Adobe Express. Things you're going to have. Microsoft SwiftKey, which I've never

[00:38:22] heard of. You're going to come to those because you're using other things.

[00:38:25] SwiftKey's been around for a long time. SnapEdit. But those are all things that come

[00:38:31] attached to another app that you do use, but not as brands on their own.

[00:38:36] So branding in this AI world is at this point really a challenge. That was the point

[00:38:42] it made about this story. Oh man. I'm just looking at this as a research

[00:38:48] point for myself. I want to go in there and find out what a lot of these things actually are

[00:38:52] the ones that I haven't heard of. I'm actually surprised at how many of these I am

[00:38:55] somewhat familiar with. There's a lot on here that I don't know, but

[00:39:03] because this is such a hot market, I don't know if that's the right word for it,

[00:39:09] but a hot item right now, just AI in general and especially generative AI.

[00:39:14] I feel like there's new services. If you go on Product Hunt,

[00:39:17] just to kind of see what new things are hitting there, it's overwhelming. It's

[00:39:22] truly overwhelming the amount of products that are coming out to be the next AI for this or AI

[00:39:28] for that. So even within certain categories, it's really hard to know which one's the best

[00:39:33] within that category. I don't even know. I guess my question is, of a list of these 50,

[00:39:40] how many of these are going to be around in two years? Oh, or even five years. Very few.

[00:39:45] Yeah. Like, do they get bought and folded in? Or, yeah.

[00:39:50] I think Rob, in the comments, says it's Y Combinator, which is where my senior brain was going.

[00:39:54] Y Combinator. There you go. At least I have the excuse I am a senior now. I'm emeritus,

[00:39:59] but you don't Jason. I have the horrible affliction of the second someone says,

[00:40:04] what's the name of the blah, blah, blah. My mind goes completely blank. I'm like,

[00:40:08] you know, you could be asking me, what's your mom's name blah, blah, blah? And if

[00:40:12] it's said in the right way, I would suddenly be like, Oh my goodness. Why can I not think of it?

[00:40:17] I know. So by the way, Rob also says in the comments

[00:40:20] that he talked to his PhD advisor and she decided to do something similar.

[00:40:24] She's a historian. So I'm curious to hear what your PhD is going toward Rob, but we'll do that

[00:40:27] another time. So on with the next story. Interesting. Yes. So Andy Jassy from

[00:40:37] Amazon, and formerly AWS, said that the average time, he posted this on LinkedIn,

[00:40:42] which I just found fascinating out of nowhere. The average time used to upgrade

[00:40:48] an application to Java 17 plummeted from typically 50 developer days to a few hours using generative

[00:40:58] AI. He estimates it saved them 4,500 developer years of work. Yes, that's crazy but real.

[00:41:07] In under six months. Isn't that crazy? Pretty remarkable. And he points out

[00:41:10] that this kind of upgrading is the thing that developers hate to do, because you're going back

[00:41:16] into what you've done before and it's not fun. You're not building anything. He said in under

[00:41:19] six months, we've been able to upgrade more than 50% of our production Java systems to modernize

[00:41:24] Java versions at a fraction of the usual time and effort. And our developers shipped 79% of

[00:41:30] the auto generated code reviews without any additional changes. That's what I was wondering.

[00:41:35] I was like, all right. So it's able to do all this stuff. How much time do you then spend

[00:41:39] verifying and correcting? And that's a high share of the generated code that was

[00:41:47] fine. Ultimately fine, almost 80%. So Jassy says that there's an estimated $260 million

[00:41:53] in annualized efficiency gains or otherwise known as savings. And so what really strikes

[00:41:57] me about these two stories together is: is AI and generative AI a consumer, B2C tool or a B2B

[00:42:07] enterprise tool. I think we're going to find the value in the savings clearly in the enterprise

[00:42:12] and not in the sense of Sally executive at her desk using it to write more PowerPoints.

[00:42:19] And fine, I don't mean that. But I mean these kinds of specific tasks that can be improved and

[00:42:27] measured and tested against to see whether they're right, because it matters. That goes to

[00:42:36] those other things we've talked about. So just another interesting tidbit here.

[00:42:40] 50 developer days to just a few hours. That's just awe-inspiring.
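For context on what a Java 17 modernization actually changes: the sketch below is a hypothetical before/after of the kind of mechanical rewrite such a tool automates. The language features are real (pattern matching for instanceof from JEP 394, switch expressions from JEP 361), but the code is an illustrative assumption, not Amazon's actual transformation output.

```java
// Hypothetical before/after sketch of a Java 17 modernization pass.
public class Java17Upgrade {

    // Legacy style (pre-Java 16): explicit cast after an instanceof check,
    // and a statement-form switch with break and a mutable local.
    static String describeLegacy(Object obj) {
        if (obj instanceof Integer) {
            Integer i = (Integer) obj;
            String size;
            switch (Integer.signum(i)) {
                case -1:
                    size = "negative";
                    break;
                case 0:
                    size = "zero";
                    break;
                default:
                    size = "positive";
                    break;
            }
            return "int (" + size + ")";
        }
        return "other";
    }

    // Java 17 style: pattern matching for instanceof binds `i` directly,
    // and a switch expression yields the value with no fall-through.
    static String describeModern(Object obj) {
        if (obj instanceof Integer i) {
            String size = switch (Integer.signum(i)) {
                case -1 -> "negative";
                case 0 -> "zero";
                default -> "positive";
            };
            return "int (" + size + ")";
        }
        return "other";
    }

    public static void main(String[] args) {
        System.out.println(describeLegacy(42));   // int (positive)
        System.out.println(describeModern(42));   // int (positive)
        System.out.println(describeModern("hi")); // other
    }
}
```

The appeal for an automated tool is that rewrites like these are purely mechanical, behavior-preserving, and easy to verify by diff and test, which fits the 79% ship-without-changes figure.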

[00:42:47] Yeah, that's remarkable. And I'm sure there are a number of different examples of how

[00:42:52] this kind of time savings shows up time and time again with generative AI and everything. I mean,

[00:43:00] I know for the stuff that I'm doing as a solo independent content creator,

[00:43:05] there are certain tasks that I do regularly where I employ AI, be it Perplexity or whatever,

[00:43:12] to help me do that. If I wasn't using AI to do that, I'd still be doing those tasks and it would

[00:43:19] definitely be taking me hours instead of 15 minutes. And that all compounds on top of itself

[00:43:28] as you do this more and as more people rely on these systems and everything.

[00:43:34] It's just it really is a huge time saver. And yeah, that's pretty fascinating.

[00:43:41] Yep, love it.

[00:43:42] So however, people never really learn the lesson of what AI can't do well. It can't do facts.

[00:43:49] It can't do meaning. It's not good at search. But the producers and the marketing company for

[00:43:56] Francis Ford Coppola's next movie Megalopolis, which is I guess already getting or bound to

[00:44:02] get bad reviews, they decided quite cleverly because we know that Coppola is a genius. He made

[00:44:08] Godfather, for goodness' sake. He's amazing. Right? A lot of really great films.

[00:44:12] So they decided to make a trailer which would go back and show all of the bad reviews that his

[00:44:17] prior works, his masterpieces, got, so that you're kind of primed for the bad reviews that

[00:44:23] Megalopolis is going to get. The problem is that they used AI to do that. So all the bad

[00:44:29] reviews were not real. And it took a while for somebody to catch this. But let's see here, the

[00:44:40] Pauline Kael, who was the goddess of film reviewers, completely adored Godfather and

[00:44:45] Godfather Part II; she lavished praise on them. I'm reading from the Vulture right now.

[00:44:49] And said of the whole epic, this is a bicentennial picture that doesn't insult the

[00:44:53] intelligence. It's an epic vision of corruption in America. However, the alleged quote,

[00:44:58] attributed to her in the trailer said that Godfather is, quote, diminished by its artsiness.

[00:45:05] That was nowhere in the review. And so similarly, I guess every single one of these was completely

[00:45:12] wrong. Andrew Sarris was said to have called The Godfather a sloppy self-indulgent movie.

[00:45:19] That wasn't in his review. Rex Reed did in fact pretty much hate Apocalypse Now, but his

[00:45:25] quote doesn't appear in the review either. Roger Ebert's mostly positive review of

[00:45:29] Bram Stoker's Dracula, so it wasn't just a couple of movies, does not include the words a triumph

[00:45:37] of style over substance. And instead he said the movie is an exercise in feverish excess.

[00:45:44] And for that, if little else, I enjoyed it. Right? So it's one of those funny stories

[00:45:49] we now have, like the lawyer whose case I covered, where some schmuck decides to use

[00:45:55] AI for this purpose, doesn't check what's going on. And AI doesn't understand facts. It's going to

[00:46:00] always give you an answer, people, and it doesn't care if the answer is wrong. Right? I'm reminded

[00:46:05] of a story from when I worked at the Chicago Tribune, after Chicago Today folded, the paper

[00:46:09] that had no tomorrow. I caught the lifeboat to the Chicago Tribune midnight shift, and in Chicago,

[00:46:15] the bars are open late and people would get into bar fights about facts, and the library's closed

[00:46:21] so they can't call the library, so they'd call the city desk of the newspaper. We got these calls all

[00:46:25] the time. And Billy Garrett, who was the assistant city editor on the midnight shift, said he had a rule

[00:46:31] to always give them an answer, preferably the wrong one because he always laughed the next

[00:46:36] morning thinking that there was a knockdown drag out fight before they could get the actual

[00:46:40] facts. This is before the internet. So folks look up stuff on your own. Yeah. Yeah. Or

[00:46:47] or if you are going to use AI for any part of this, you got to verify. You got to check

[00:46:53] yeah, that the output is actually accurate. And if you're not, that's just pure laziness.

[00:47:01] Think of the amount of time it would have taken you to do all of that by hand. Right.

[00:47:06] And instead you got AI to do it. And when the AI is done doing it supposedly,

[00:47:12] like if you don't stop there, it's easy to be at that point and then be like,

[00:47:16] now I've got to go and check it? No, it's fine. It'll be fine. But just think of all the time

[00:47:20] you would have saved even if you take a little bit longer to verify, and you'll be okay.
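The verification step Jason is describing can be mostly mechanical for a case like this trailer. As a sketch (the review texts and quotes below are hypothetical stand-ins, not the actual reviews): given the full text of each source and the quote a model attributed to it, a verbatim containment check flags every quote that never appears in its claimed source.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class QuoteCheck {

    // Return the claimed quotes that do not appear verbatim in the review
    // text they were attributed to. Anything returned needs a human look.
    static List<String> unverified(Map<String, String> reviewText,
                                   Map<String, String> claimedQuote) {
        List<String> suspect = new ArrayList<>();
        for (Map.Entry<String, String> e : claimedQuote.entrySet()) {
            String source = reviewText.getOrDefault(e.getKey(), "");
            if (!source.contains(e.getValue())) {
                suspect.add(e.getValue());
            }
        }
        return suspect;
    }

    public static void main(String[] args) {
        // Hypothetical review texts and AI-attributed quotes.
        Map<String, String> reviews = Map.of(
            "Kael on The Godfather", "an epic vision of corruption in America",
            "Ebert on Dracula", "the movie is an exercise in feverish excess");
        Map<String, String> quotes = Map.of(
            "Kael on The Godfather", "diminished by its artsiness",  // fabricated
            "Ebert on Dracula", "feverish excess");                  // genuine
        System.out.println(unverified(reviews, quotes));
        // prints: [diminished by its artsiness]
    }
}
```

A real check would normalize punctuation and allow ellipses, but even this crude exact-match pass would have caught every fabricated quote in the trailer.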

[00:47:26] Yeah. The hapless marketing consultant who did this trailer is pictured in Deadline. And

[00:47:33] the studio has now cut ties with him. So very costly mistake for him.

[00:47:39] Yeah. No kidding. No kidding. Very interesting. And unsurprising as well.

[00:47:47] And finally, go ahead you go. Well, yeah, no, this is just about perplexity, which is,

[00:47:55] yes, I mentioned it a lot, but primarily because it's just the AI platform that I use

[00:48:01] most often. And so I'm most familiar with it and everything. But we've talked in shows past

[00:48:07] that Perplexity was going to run ads on some of the experience in the future. And it looks like

[00:48:18] they're about to start selling those ads. These ads will appear next to their AI assisted

[00:48:25] search results. So you could end up seeing this, I'm not entirely sure exactly when,

[00:48:32] sometime in the fourth quarter, but it's coming around the bend. And if you're going to use

[00:48:40] perplexity, what I wonder is if you're paying for it, do you still see the ads?

[00:48:45] That's a good question. And I'm not entirely certain on that, but hopefully not.

[00:48:50] You know, I use Discover on Perplexity and it does, I don't know,

[00:48:55] less than half a dozen stories a day. So it's not like it's a substitute news source at all,

[00:49:01] but they do a good job of packaging it. They link to the sources. And so I can see there being ads in

[00:49:08] there, you know, because I'm using it as a free service right now. It's fine.

[00:49:11] Yeah, discover. Is it discover.ai? Is that what you're talking about?

[00:49:15] No, if you go to perplexity, the app and then

[00:49:17] I see that's its news stories.

[00:49:21] Okay. And how, what have you thought about?

[00:49:23] I think it's pretty good. So if you go to, let's see here, what's an example?

[00:49:29] The SpaceX Polaris launch delay. So they have it curated by a human being, Twumbley,

[00:49:37] who's the one who works with them, but they have links to Astronomy, Business Standard, France 24,

[00:49:42] Wikipedia, Space.com, you know, a half dozen sources, and then below, more sources with the headlines.

[00:49:49] So I think it's a very responsible way to present it. Unlike much else.

[00:49:54] I can check it against those sources. It gives credit to those sources. So I think it's pretty good,

[00:50:00] even though publishers are screaming about them. I think this is a model for how it might be done.

[00:50:04] Now, once they add ads to this, the publishers are going to start screaming, saying,

[00:50:08] well, you owe us a piece of that. But once again,

[00:50:10] That's a good point.

[00:50:12] The publishers do this and they're linking to the publishers. They're sending the

[00:50:14] publishers traffic and the publishers are doing the same thing to each other.

[00:50:18] Because the fact that one thing that comes across when you use perplexity discover

[00:50:21] is how much repetition there is in news. Because the same story can have a half dozen

[00:50:26] links that are essentially the exact same. So who copied from whom?

[00:50:32] Who's owed the dollar there? I don't know.

[00:50:35] Yeah, interesting. Well, good. You're getting in on the perplexity thing. That's interesting.

[00:50:43] Super curious to hear how you've thought about that after your experience,

[00:50:47] after hearing me talk about it so much on the show. Well, we did it, Jeff. We made it

[00:50:54] by the skin of our teeth to the end of this episode, turning on a dime with an unexpected

[00:51:01] circumstance. And you know what? If we hadn't called it out a couple of times, people probably

[00:51:05] wouldn't have even known the difference. So that's a good thing.

[00:51:08] So that guest we were going to talk to today, we didn't talk to, in case

[00:51:11] you wonder where they went. That'll be a future show.

[00:51:13] We will.

[00:51:13] We will.

[00:51:13] Yeah, that would be a future show. It's coming.

[00:51:15] And then you did also mention earlier, Lev Manovich as a future guest.

[00:51:19] We've got Lev scheduled for an episode in September. And I tell you what, I'm really,

[00:51:25] I know you are, I'm really looking forward to that conversation as well. It's going to be all

[00:51:28] about kind of AI and creativity, music, art, the whole nine yards. So some great guests coming

[00:51:35] up on this show. But Jeff, gutenbergparenthesis.com.

[00:51:41] Yes. Nope, that's it for now.

[00:51:43] Soon enough, my son will give me a new page at jeffjarvis.com and I'll have links

[00:51:47] and discount codes for all three of my books there. But that'll be soon.

[00:51:51] Yes, indeed. Excellent. Where is it? What would Google do?

[00:51:57] Why is that not on here anymore?

[00:52:00] It's old.

[00:52:01] It's old.

[00:52:03] You've written enough books now that you can retire your old work.

[00:52:06] That's right. It's done its job.

[00:52:07] It can go away.

[00:52:08] Yes, yeah.

[00:52:09] Well, I highly respect that.

[00:52:13] Gutenbergparenthesis.com is the place to go to check out all of Jeff's writings and work.

[00:52:19] For me, you can go to youtube.com slash at Techsploder. When you go there,

[00:52:24] you can subscribe to the show or subscribe to the channel and you'll get alerted when we do

[00:52:28] live streams like today. When the video version of AI Inside is published, that will appear

[00:52:33] there. Then if you go to aiinside.show, that is actually where the podcast,

[00:52:42] pretty much all the information about the podcast is listed on aiinside.show. We do include the

[00:52:48] video links. Last week's episode, you can get there and you can listen to it or subscribe.

[00:52:54] But then you also do have the ability to watch the video version if that's your preference.

[00:52:58] If you've got to go to one place, I'd say aiinside.show is your one place on the web to check out.

[00:53:04] You can also get to our Patreon from there, patreon.com slash AI Inside show. There you can

[00:53:12] support us and be sure that we continue to do this show each and every week. We really do

[00:53:18] rely on your support to continue things as we have done. You get things in trade for

[00:53:26] that. You get ad free episodes, you get early access to videos, discord community, regular

[00:53:32] hangouts. You also get an AI Inside t-shirt if you become an executive producer like our current

[00:53:39] executive producers, Dr. Du, Jeffrey Marraccini, WPVM 103.7 in Asheville, North Carolina and Paul Lang.

[00:53:47] Whether they're wearing their shirt today or not, they're getting one if they haven't

[00:53:51] already and you could too just become an executive producer and you'll get one.

[00:53:54] It's a great quality shirt, I gotta say. I wear mine all the time, but of course it's my show.

[00:54:04] But everything else is gravy. Thank you so much for being here with us each and every week. We

[00:54:09] can't thank you enough for that and thank you Jeff for the hangouts and we'll see you all

[00:54:15] next week on another episode of AI Inside. Bye everybody.