Super-Duper-Intelligence!
November 13, 2024 · 1:05:08


[00:00:01] This is AI Inside, episode 43, recorded Wednesday, November 13th, 2024. Super-Duper-Intelligence.

[00:00:11] This episode of AI Inside is made possible by our amazing patrons at patreon.com slash AI Inside Show.

[00:00:18] If you like what you hear, head on over and support us directly, and thank you for making independent podcasting possible.

[00:00:23] What's going on, everybody? Welcome to yet another episode of AI Inside, the podcast where we take a look at the AI that is layered everywhere you turn.

[00:00:41] The AGI just around the corner. The news sites, they're all singing in harmony as their content is being linked back to, even though it's being summarized.

[00:00:53] This is all, I'm just calling ahead to stuff that we're going to talk about today, which you would probably have figured out by now.

[00:01:00] I'm Jason Howell, joined as always by Jeff Jarvis. How you doing, Jeff?

[00:01:06] Hey, hey. Yeah, good to be back with you after last week.

[00:01:09] Of course, you were out and we had Mr. Mike Elgin sitting in your place, so it was good to walk through the news of the week with Mike.

[00:01:20] Always fun to get his perspective.

[00:01:21] Good to see him. Where was he? Mexico?

[00:01:24] Where was he? Baja.

[00:01:26] Yeah, Baja.

[00:01:26] Baja, Mexico. He was staring at a beach.

[00:01:30] Which, let me tell you, last week, Wednesday morning is exactly where I would have preferred to be staring at a beach.

[00:01:40] But I did notice, and maybe we already realized this when he was on the show previously, but you and he definitely differ when it comes to the kind of perspective on open source AI.

[00:02:26] He oversees a lot of AI.


[00:02:56] If you're against open source, don't forget that it involves taking things away from the big corporate world, which is also what Mike tends not to like.

[00:03:03] But I'm arguing with him in absentia, so I shouldn't.

[00:03:35] Yeah.

[00:03:36] Right, exactly.

[00:03:37] Really, when it comes down to it, that would be a really big difference, open versus not.

[00:03:42] Yeah, and the other question is how effective can guardrails be?

[00:03:47] Which is the other argument from the anti-open source folks when it comes to AI: you're taking away the opportunity to control it.

[00:03:54] My argument is the controls are going to be futile, but that's a legitimate question.

[00:04:00] For sure, for sure.

[00:04:02] Well, we've got some pretty meaty stuff to talk about this week.

[00:04:06] You know, the AI news, it just keeps on going.

[00:04:09] And boy, oh boy, the next four years in the world of AI, I envision things are going to get mighty strange and interesting and complicated.

[00:04:18] Um, so before we get to the news of the week, though, I just want to throw out a huge thank you to those of you who support us on Patreon.

[00:04:28] That's patreon.com slash AI inside show.

[00:04:31] Go there.

[00:04:32] You can find any number of membership levels to support the show.

[00:04:38] And in doing so, you get extra perks.

[00:04:40] You get your name thrown out at the top of the show.

[00:04:43] Like this week's name happens to actually be in the chat room with us.

[00:04:48] That's Joe Esposito, the Ozone Nightmare.

[00:04:52] Just an incredibly generous supporter.

[00:04:55] So supportive and helpful.

[00:04:56] Oh, over the last.

[00:04:57] Joe, thank you.

[00:04:58] He bought my book twice.

[00:05:00] He's reading it on Kindle and he's giving a copy to the library.

[00:05:02] See what I'm talking about?

[00:05:03] He's so supportive.

[00:05:04] He really is.

[00:05:05] Thank you, Joe.

[00:05:06] And he makes funny pictures of people like us, which is great.

[00:05:10] He's got a fantastic podcast.

[00:05:12] Check out the Ozone Nightmare.

[00:05:13] I've actually been on it a couple of times at this point.

[00:05:15] So there you go, Joe.

[00:05:16] You got your plug.

[00:05:17] Patreon.com slash AI inside show to all of you, whether you have your own podcast or not.

[00:05:23] We'll throw out your name at the top of the show.

[00:05:26] He says, happy to do it.

[00:05:28] Great to have you here.

[00:05:29] Also, if you happen to be watching live and you aren't subscribed, first, you can catch us live.

[00:05:36] YouTube.com slash at TechSploder.

[00:05:38] We do this podcast every Wednesday at 10 a.m. Pacific, 1 p.m. Eastern.

[00:05:43] Or you can go to AIinside.show and subscribe to the audio podcast form.

[00:05:49] And everything you really need is at that site.

[00:05:51] So subscribe.

[00:05:52] That way, if you miss the live show, you still get it.

[00:05:55] We don't want you to miss it entirely if you miss the live show.

[00:05:58] So with that being said, let's talk about some news.

[00:06:02] And I alluded to it here at the top.

[00:06:04] But we might as well start talking about AGI to begin with, because apparently it's right around the corner.

[00:06:11] Pretty much once a month we lead with something like this, I think.

[00:06:15] AGI just around the corner.

[00:06:16] Or so say all of these people.

[00:06:19] Actually, usually it's like Sam Altman.

[00:06:22] Almost always he is the person beating the loudest drum on this.

[00:06:30] AGI's biggest fan, if I may say so.

[00:06:33] He predicted in a recent interview that 2025 could be the year for AGI, but then so could 2026, 2030, 2040.

[00:06:44] Any number of years could be that year, by the way.

[00:06:47] He says that engineering breakthroughs will pave the way, and not just scientific breakthroughs, but engineering breakthroughs, which essentially means that the hardware capabilities to make that happen will happen according to his estimations.

[00:07:05] Yeah, and I sigh at this moment.

[00:07:08] Because I think the first key problem is there is no definition of AGI that is accepted.

[00:07:15] Gary Marcus is really good on this point and tries to push Musk and Altman and company on this.

[00:07:21] And so they're predicting something that is undefined, which makes it easier to predict.

[00:07:26] Although when I see it, when I say it's there, it's there.

[00:07:29] It's kind of what they're saying.

[00:07:31] And we've talked about this before, too, that this is also about raising capital and trying to get excitement going.

[00:07:39] And I think part of this is also as we change administrations, and Musk is at the right hand of the next president.

[00:07:46] People are trying to get their views in importantly.

[00:07:53] Later, stake their claim of the future that's right around the corner.

[00:07:59] And we're the ones to do this.

[00:08:00] There was a story, I don't think I put it in the rundown, that Musk is presumed to be recruiting, watch out, I'm going to say it, Trump to the TESCREAL view of things.

[00:08:11] So we could have the Doomer in chief.

[00:08:15] And so I think all this talk of AGI and extinction risk and P-Doom, we're going to be hearing more of it, I think, because of all that.

[00:08:30] So there was also a collection of other predictions from other people.

[00:08:37] Most of, you know, a lot of them, the AI boys and their boys, they're all boys here, agree with Musk.

[00:08:45] You have Anthropic, which is also trying to say that it's going to build AGI.

[00:08:49] And you have Geoffrey Hinton who's warning against AGI and the dangers of it.

[00:08:53] But you have Andrew Ng, who I respect a lot because he tends to be more cautious.

[00:08:59] And also Yann LeCun who's becoming my AI hero because he constantly puts the pin in that bubble of AGI and says, not so much.

[00:09:12] Yeah, like you mentioned, Geoffrey Hinton, Dario Amodei, Anthropic CEO, are thinking, you know, sometime within the next five years.

[00:09:21] Uh, Demis Hassabis, DeepMind CEO, Ray Kurzweil, somewhere in the next 10 years.

[00:09:29] And Andrew, Andrew Ng, sorry, Andrew Ng, and Yann LeCun, like you said.

[00:09:34] Uh, decades, perhaps not even in our lifetime.

[00:09:38] So there's no consensus.

[00:09:40] And I think, like you said, and that's a really great point that I think I forgot or, you know, didn't have top of mind until you mentioned it now is that the goalposts are always moving on this.

[00:09:51] Like what exactly it is.

[00:09:55] You know, it's become this catch-all term of like this.

[00:09:58] AGI is when this stuff is really good.

[00:10:01] That's basically what it says.

[00:10:03] Yeah, at this point.

[00:10:05] AGI is when AI is really good.

[00:10:07] And right now it's just good.

[00:10:09] But, you know, really good's right around the corner.

[00:10:11] And that can mean a million different things.

[00:10:14] Although, um, although, uh, Altman has, you know, has the belief that once AGI in his definition happens, superintelligence follows quickly after that.

[00:10:28] Once that achievement is finally met, it's not long before we get to superintelligence.

[00:10:32] Duper intelligence.

[00:10:33] So we have super and duper.

[00:10:34] Yeah, super duper.

[00:10:35] Right.

[00:10:37] Stupendous intelligence.

[00:10:39] Yeah.

[00:10:40] Yeah.

[00:10:41] And if folks out there are curious about the kind of other side of this besides Yann LeCun, Gary Marcus's newsletter is constantly shooting howitzers at balloons.

[00:10:54] And I think, and I think sometimes Gary goes overboard the other way, but I think he's very interesting in bringing some rationality and sense to this.

[00:11:01] So if you're, if you're interested, I would balance those two views and see where they come out.

[00:11:07] For sure.

[00:11:08] Yeah.

[00:11:08] Yeah.

[00:11:08] It's very, a very good balance, uh, to read Gary Marcus.

[00:11:13] And, uh, I continue to work like heck to bring him on the show at some point.

[00:11:19] Yeah.

[00:11:20] Uh, not easy.

[00:11:21] I mean, he's a, he's a busy guy, but, but we're in, we're in touch.

[00:11:24] So I, I have a feeling that it's going to happen at some point.

[00:11:28] It's just, uh, finding it.

[00:11:30] Couture.

[00:11:31] Yes.

[00:11:31] Yeah.

[00:11:32] I think that'll be an interesting conversation.

[00:11:34] Um, now all of this predicting comes at a time when reports are showing a slowdown.

[00:11:40] For those of you on audio, you're not seeing me smirk about the next story.

[00:11:45] I think I, I think there was like an audible smirk.

[00:11:47] It was like, yeah, you know, something, something along those lines.

[00:11:50] Um, a slowdown in progress in, uh, large language model development, OpenAI's unannounced Orion

[00:11:57] models reportedly showing smaller improvements over GPT-4 than they had originally expected.

[00:12:04] Um, and I think, and there's a, there's an information article, um, and, and others kind

[00:12:09] of along this lines, uh, kind of signaling the slow growth that is happening in this moment.

[00:12:15] This is another one of those stories that I feel like I've been seeing, you know, every

[00:12:19] month there's, there's like a certain aspect of this and then everybody forgets about that

[00:12:24] and gets excited about where we're headed.

[00:12:27] And then, you know, it's, it's like, we're constantly, we're constantly really excited.

[00:12:31] And then, oh, wait a minute, let's temper our excitement because actually things are going

[00:12:35] slower than we, than we hope.

[00:12:37] And the smirk value here is that how can you have AGI, this incredible leap of computing

[00:12:45] and technology that's just around the corner?

[00:12:48] It's happening next year.

[00:12:49] But meanwhile, your development is slowing down in its progress.

[00:12:52] Those two stories just do not mix.

[00:12:54] And it's not just, um, OpenAI that's, that's saddled with this view right now.

[00:13:01] It's, uh, others in, in the field, uh, Anthropic and Google as well.

[00:13:06] Uh, they're struggling to build a more advanced, uh, AI as Bloomberg put it.

[00:13:10] And, um, that's not surprising, right?

[00:13:13] They, they got a lot of benefit out of the insight of, of the transformer.

[00:13:18] They got a lot of benefit out of the hardware that exists out of NVIDIA.

[00:13:22] They got benefit out of building ridiculously large, uh, instances.

[00:13:27] Um, and they probably pushed all of that about as far as they can go right now.

[00:13:31] And I think what they need is not more hardware, which is their first reflex.

[00:13:35] Cause that's just throw money at it.

[00:13:37] Cause that's just built into the business.

[00:13:38] They're so used to working within that paradigm.

[00:13:40] Yeah.

[00:13:41] Exactly.

[00:13:41] Let's raise money to throw more machines at it.

[00:13:43] Right.

[00:13:43] Right.

[00:13:44] Um, I think that they require next levels of insight, uh, and, and, and, and they've got

[00:13:50] to understand this in different ways.

[00:13:51] This was a, uh, I was going to say a paradigm shift.

[00:13:54] I worked, when I worked for Delphi Internet lo these many years ago, the first company that brought,

[00:13:58] um, consumer internet to consumers.

[00:14:01] We had a paradigm jar.

[00:14:03] If you ever use the word paradigm, you had to put $5 in it.

[00:14:05] It wasn't just a dollar jar.

[00:14:06] Oh, really?

[00:14:07] That's a word that I recognize.

[00:14:10] Oh, interesting.

[00:14:10] I, I, I probably overuse that word, but it's always, it often seems like the right

[00:14:15] word.

[00:14:16] Like, I know it does, but it's kind of a show off thing, right?

[00:14:19] Like, Oh, we're at this huge moment.

[00:14:20] Uh, so we've been through a lot of paradigm shifts, which you really can't have a lot of

[00:14:24] them by definition.

[00:14:25] Right.

[00:14:25] Yeah, that's true.

[00:14:27] So if everything's a paradigm shift and nothing's a paradigm shift, right.

[00:14:30] Then it's just another day.

[00:14:32] Um, but I think, I think that they do need a paradigm shift in their thinking about this.

[00:14:36] And when some people in the field are saying that, that it's going to take that kind

[00:14:40] of next insight to understand how to build this differently.

[00:14:44] Um, what they've done is, is again, as always really impressive.

[00:14:47] We're here to talk about it because it's amazing, but it ain't taken over the world

[00:14:52] yet.

[00:14:53] Yep.

[00:14:54] Yep.

[00:14:55] Um, yeah.

[00:14:56] Gemini not really, um, focusing on, on major model breakthrough releases.

[00:15:02] We're seeing lots of these little kind of incremental updates when it comes to Google's

[00:15:07] Gemini, Anthropic, um, seeing delays, like you said, with its development of 3.5 Opus, um,

[00:15:15] better performance than before, but not quite as much, um, as expected given the size and

[00:15:23] the costs.

[00:15:24] And it's, I guess it's just diminishing returns essentially, um, at this point.

[00:15:29] And also, you know, we've had a couple of years of, of getting really used to whether

[00:15:34] it's true or not, recognizing that, that the updates that these AI companies are putting

[00:15:40] out seem, seem bigger than, than we're used to with other aspects of the technology industry.

[00:15:47] And maybe now what we're seeing is more of like a coming down to earth, like, no, actually,

[00:15:52] you know, there, there was a lot to gain in the beginning, but then as this builds out,

[00:15:57] you know, that, that amount to gain becomes just smaller and smaller.

[00:16:01] Yeah.

[00:16:01] I think, I think there, there's just this presumption of hockey sticks as opposed to plateaus.

[00:16:08] Right.

[00:16:08] Right.

[00:16:08] That's a great way to put it.

[00:16:10] And, and, uh, there was a hell of a rise, but then at some point you can't keep going at

[00:16:14] that, at that rate.

[00:16:15] I think they were presuming to go at an even faster rate and a greater rate.

[00:16:19] And that's why they, they convinced themselves of AGI.

[00:16:22] If we get this far in this amount of time, imagine what we can do in another year, but

[00:16:27] they're not that powerful.

[00:16:29] Well, and then another thing when I was kind of reading up on, on these stories is the whole

[00:16:34] idea of data scarcity.

[00:16:37] And like, I realize you can use, I realize, you know, it's, it's often talked about where

[00:16:43] the models themselves can create high quality data that can then be used to train because,

[00:16:49] uh, because at this point, you know, these companies have used almost all available information

[00:16:56] that happens to be out there.

[00:16:57] Like, and you know what I mean?

[00:16:59] Like, I don't, I don't know exactly where I'm headed there, but I feel like that feels,

[00:17:04] that feels like a plateau to me.

[00:17:06] It's like at a certain point, there just isn't the data that exists to train these things.

[00:17:11] I guess then you have to create it, but I'm just not convinced that creating it actually

[00:17:15] leads you to where they're talking.

[00:17:17] No, because it's, it's too, it's too, uh, regressive.

[00:17:20] It's, it's, it's training on itself.

[00:17:22] Well, um, yeah.

[00:17:23] So I, as, uh, I think everybody's trying to show off for the new administration and try

[00:17:29] to get their, uh, legislation to support.

[00:17:31] Uh, one thing we didn't put in the rundown, but it's, but it's here is that Microsoft and

[00:17:36] Andreessen Horowitz put out kind of a manifesto, which was really a, um, uh, a policy statement,

[00:17:43] a policy wishlist.

[00:17:44] But to, I'm bringing it up because to your point, what, a few things interested me in

[00:17:49] this.

[00:17:49] One is that they are pushing for open source.

[00:17:50] See our earlier discussion about Mike Elgin.

[00:17:53] They also push for the right to learn, which is exactly what I've been arguing and even

[00:17:57] with the same wording of it.

[00:17:59] But now finally, to your point, Jason, they ask for an open data commons.

[00:18:04] I've not seen that before.

[00:18:06] And I found that really, really interesting.

[00:18:08] So it's saying that there's a role for government to enable and craft policies to support a thriving

[00:18:12] and growing ecosystem, yada, yada, yada, um, through an open data commons, pools of accessible

[00:18:18] data that we've managed in the public's interest.

[00:18:21] And I think that's really interesting as a model to enable more developers to create more

[00:18:30] competitive models if they have access to more data, point one.

[00:18:34] Point two.

[00:18:34] Okay.

[00:18:34] What data then?

[00:18:35] Well, government data.

[00:18:36] And one of the fears I've heard about this administration, uh, and we're not going to get too political

[00:18:42] folks, but, uh, is that, um, uh, in all the discussion of climate change, uh, scientists

[00:18:48] depend almost completely on government supplied weather data.

[00:18:52] Well, what happens if that goes away?

[00:18:55] Uh, then do private entities and universities need to gather together?

[00:18:59] And I think the same question goes here for the notion of an open data commons.

[00:19:04] Um, there's government information that we need.

[00:19:07] There's information in general that could be brought together, which is what happened in

[00:19:10] Norway, uh, as we talked about in the show earlier.

[00:19:12] And then there's the question of the data that's not there because it's biased.

[00:19:16] How do we, instead of taking things out, how do we add more in?

[00:19:20] So, yeah, I think that's a, that's a limiting factor.

[00:19:23] You're right, Jason.

[00:19:24] And it requires a more strategic way of thinking than we're going to make a few, uh, big money

[00:19:28] deals with a few publishers.

[00:19:30] Uh, that doesn't gain you much at the, at the scale we're talking about.

[00:19:33] That doesn't gain you anything except, uh, friends in Washington.

[00:19:37] Um, so the data question is, is the huge one we're training here because do we, are we

[00:19:44] going to train good models or not?

[00:19:45] Are, are they going to be stupid models and bad models and biased models?

[00:19:50] Um, uh, are we going to be advanced versus other parts of the world because of this?

[00:19:55] Uh, yeah, I think, I think you raise a really important point, Jason.

[00:19:59] Yeah.

[00:20:00] Interesting stuff.

[00:20:01] Cool.

[00:20:01] I'm glad you brought up the, uh, the Microsoft one.

[00:20:03] I'll put that in the rundown so people can check out that article as well.

[00:20:07] Um, from the show notes, uh, last week when, when I had Mike Elgin on,

[00:20:15] we were talking a little bit about meta and it's kind of partnerships, uh, along the lines

[00:20:21] of, of national security and U S intelligence and defense and all that stuff.

[00:20:26] And it's really interesting.

[00:20:27] I mean, it seems like one after the other are, are all kind of fallen in line.

[00:20:31] And Palantir sees, seems to be a real big beneficiary of a lot of this right now.

[00:20:37] They're, they're always part of these stories, but now we have Anthropic who's joining this

[00:20:42] trend, getting on board with U S intelligence and defense, uh, in its own partnership with

[00:20:47] Palantir and AWS, uh, Amazon's cloud.

[00:20:50] Um, this is access to Claude AI models in the defense sector.

[00:20:57] It's integrated into Palantir's, uh, AI platform hosted on AWS with its Impact Level 6 accreditation,

[00:21:05] which basically means classified data up to security, or sorry, secret level.

[00:21:10] Um, and yeah, interesting, I guess, because Anthropic is the safety AI company and there's

[00:21:20] something about this that just kind of feels not that.

[00:21:22] Yeah.

[00:21:23] Although it is, although it isn't, I don't know.

[00:21:24] It's kind of confusing.

[00:21:26] Well, a few interesting angles on this.

[00:21:27] Uh, first, I think, I think to go with your point just now, this is again, where we need

[00:21:32] to define safety just as we need definitions of artificial general, general intelligence.

[00:21:37] We don't have definitions of safety and it gets muddied in doomsterism and all that.

[00:21:42] Okay.

[00:21:42] That's one point.

[00:21:43] But the other one is, I think it'd be really interesting to see here what the, um, reaction

[00:21:49] is from the technology community.

[00:21:51] Uh, because, uh, Google's employees, uh, pushed hard for Google to not do, uh, defense deals

[00:22:02] like this with their advanced technology.

[00:22:04] Uh, we have Timnit Gebru who was, you know, the head of responsible AI at

[00:22:12] Google, of course, wrote Stochastic Parrots, who was forced out as a result, who was criticizing

[00:22:17] this deal.

[00:22:18] Um, and you know, they say that they're all about safety, safety, safety, and then they're

[00:22:23] going to work with defense players.

[00:22:24] What do defense players do?

[00:22:25] They try to kill people more efficiently.

[00:22:28] And so I wonder what the, um, the employees of these companies are going to end up saying.

[00:22:33] Mm-hmm.

[00:22:34] I have to imagine there's going to be some sort of pushback.

[00:22:36] And we've, we've seen that in the last couple of years when they've, and I guess that's where

[00:22:42] my mind is at right now.

[00:22:43] In the last couple of years, there has been flirtations around this sort of thing.

[00:22:48] Yes.

[00:22:48] And now very, very immediately.

[00:22:51] I mean, I feel like in the last three to four weeks, there's been story after story of like,

[00:22:55] this company has decided, you know, decided, uh, to enter into an agreement for, for this

[00:23:01] sort of thing and this one and this one.

[00:23:02] And now we're at the point to where it's kind of like, oh, I guess they just, they're

[00:23:05] all in bed, you know, doing this.

[00:23:07] Yeah.

[00:23:09] And once again, how, what will the response be this time versus two years ago when there

[00:23:14] were employee walkouts and, you know, hard lines?

[00:23:17] Yes.

[00:23:17] And, uh, shall we say with a Fox news host as secretary of defense, um, what would come

[00:23:26] out of the U S Pentagon is in great flux.

[00:23:32] So if you're here saying, well, we're going to do defense deals with U S department of defense.

[00:23:37] Well, that department is going to change radically in the next few months.

[00:23:41] Big time.

[00:23:42] So that matters too, as to what, what are you signing on with?

[00:23:46] Yeah.

[00:23:47] Yeah.

[00:23:48] And with the progression of AI, as we've seen, and we're, we're still kind of adjusting to

[00:23:54] the speed at which this stuff progresses, my, how fast those things can change in a short

[00:23:59] amount of time too.

[00:24:00] And, you know, you find yourself in a situation that you wouldn't have even imagined six months

[00:24:05] prior.

[00:24:06] And yet this is just the way it is.

[00:24:09] Anthropic right now is saying, or rather assuring, that the government use of its tools is designed

[00:24:14] to be beneficial while still being restricted from harmful uses.

[00:24:19] But this goes back to kind of what you were talking about.

[00:24:20] Like, define harm.

[00:24:22] Define harm.

[00:24:22] Yeah.

[00:24:23] Yeah.

[00:24:23] And you know, we're, we're more aware of this when it comes to, to policing and the fear

[00:24:27] about using this for predictive policing and so on and so forth.

[00:24:31] Well, at least we can, even if it's too late, see that.

[00:24:35] Defense and, and intelligence, we won't necessarily ever see how it was used.

[00:24:40] And so the oversight is less, the public oversight, the journalistic oversight.

[00:24:45] So this is a really interesting story to watch.

[00:24:47] I'm glad you put that in there.

[00:24:48] For sure.

[00:24:49] For sure.

[00:24:50] And, you know, probably next week it'll be another company that we'll be talking about

[00:24:54] entering into a deal with Palantir and the U.S. Department of Intelligence.

[00:24:59] AI and science.

[00:25:01] This is, you know, this is an aspect of artificial intelligence that I'm super, I continue to

[00:25:06] be super curious and I'd say hopeful about because I think there's a lot to, a lot of

[00:25:12] benefit to be gained potentially from the integration of these two things.

[00:25:16] And there's a couple of stories this week that fall into this camp.

[00:25:20] We'll start with AlphaFold3, which was designed by Google DeepMind, not DeepMind, Google DeepMind,

[00:25:30] and announced not too long ago, but it's now been released, the source code rather has been

[00:25:37] released by Google DeepMind for non-commercial use.

[00:25:42] So they're making the code and weights accessible.

[00:25:45] This is Google's protein prediction model that we first heard about last May, I think.

[00:25:51] It won a Nobel Prize.

[00:25:53] It can predict interactions between proteins and other molecules like DNA, RNA, potential

[00:25:59] drug compounds.

[00:26:01] And it's mapped more than 200 million protein structures to date.

[00:26:05] And it's amazing.

[00:26:07] I found this story interesting when you put it up is the pressure that Google was under

[00:26:13] when they didn't release the model because they were publishing papers that put forward

[00:26:23] things that other scientists couldn't then replicate.

[00:26:25] And that's a no-no in peer-reviewed science, right?

[00:26:29] And so Google at first tried to say, no, we don't want to do it because of competitive reasons,

[00:26:33] so on and so forth.

[00:26:34] And now they are publishing it, but it's to use your coinage, Jason.

[00:26:39] It is open-ish because they're also saying that they're only going to release it to certain

[00:26:46] accredited academics and under some circumstances with some controls, which I don't necessarily

[00:26:54] disagree with.

[00:26:56] So they didn't release, with the papers, they didn't release the weights and such.

[00:27:00] And you couldn't really dig into it and understand it unless you had that.

[00:27:04] So now they have it, but they have it with limitations.

[00:27:06] And I don't know, do you think that's a nefarious thing or a wise thing?

[00:27:14] That's a good question.

[00:27:17] Yeah.

[00:27:18] I mean, that's a good question.

[00:27:21] I don't know how to answer that, to be honest.

[00:27:23] Because I guess I just don't, I don't know what could be done with this information.

[00:27:31] Like, I don't understand exactly what can be done on a deep level with this information

[00:27:36] without those controls versus with.

[00:27:40] Yeah, that's a really good question.

[00:27:42] It certainly doesn't preclude others from kind of following in the footsteps.

[00:27:47] ByteDance and Baidu have spun off their own versions inspired by the original Alpha

[00:27:53] Fold 3 release.

[00:27:54] Right.

[00:27:55] So we're seeing that nonetheless.

[00:27:56] You know, it's not obviously, you know, the commercial version of Alpha Fold.

[00:28:01] They're kind of coming up with their own.

[00:28:03] But yeah, it's an interesting question.

[00:28:05] I wish I had a better answer for you.

[00:28:06] I'm trying hard not to be political.

[00:28:08] But with China hawks in the State Department and elsewhere, that's another interesting angle

[00:28:17] here is that if the Chinese are competitive, then it may be that the government wouldn't

[00:28:22] want it open for that reason.

[00:28:25] Yeah, that's a good point.

[00:28:26] That's a good point.

[00:28:28] OpenFold3 is also in development.

[00:28:31] That would be to remove commercial restrictions, as you're talking about there.

[00:28:34] But DeepMind's Isomorphic Labs holds the exclusive commercial rights.

[00:28:41] And they have $3 billion in pharmaceutical partnerships to show for it.

[00:28:46] There you go.

[00:28:48] Interesting stuff there.

[00:28:49] Also, a new AI model created by Japanese researchers able to screen for medical conditions with a

[00:28:56] short 30-second video of someone's face and hands at 150 frames per second.

[00:29:04] So, an AI that can view a very short video of someone's face and hands.

[00:29:11] And I mean, right now, let's say that the camera is shooting at a frame rate that isn't necessarily

[00:29:18] what the majority of, say, smartphone cameras record at.

[00:29:22] But one could see where, like, it just kind of makes me think of, like, you know, the Apple

[00:29:27] Watch and all the screening that you have the ability to do now with a wearable on your

[00:29:34] wrist.

[00:29:35] And who knows?

[00:29:37] At some point in the not-too-distant future, you could detect high blood pressure and diabetes

[00:29:42] just by pointing a camera at your face for a couple of seconds.

[00:29:44] That's kind of crazy.

[00:29:45] Yeah, I mean, at first blush, I see AI and a health thing, and I get hinky, and I get

[00:29:52] a little nervous.

[00:29:53] Uh-huh, uh-huh.

[00:29:54] And then I thought, well, this is cheating.

[00:29:56] But no, it's not.

[00:29:57] If it's accurate, for reasons we cannot understand or correlate, it sees things in your skin

[00:30:05] that makes it be able to see these things.

[00:30:07] And, well, then, especially for screening purposes, hallelujah.

[00:30:11] Yeah.

[00:30:13] So that's really cool.

[00:30:15] And I can imagine all kinds of other, we've talked about this in the past when it comes

[00:30:19] to recognizing cancers on x-rays and such.

[00:30:23] Yeah.

[00:30:23] And it predicts them sooner.

[00:30:26] And I think that's all, this is the good use of AI.

[00:30:31] Yeah, but especially in this case, because it's just screening, you're not going to have

[00:30:39] misdiagnosis because it's not hard to then, in turn, take your blood pressure.

[00:30:42] It's not hard to, in turn, get your blood sugar.

[00:30:46] And so it's just as a screening tool.

[00:30:48] I think it's really valuable.

[00:30:50] Yeah, really great use of it.

[00:30:51] But, you know, I can imagine nefarious purposes, you could put up a camera checking people as

[00:31:00] they walk by, as you go into the insurance office to apply for insurance.

[00:31:03] And I don't know.

[00:31:04] Oh, yeah, right.

[00:31:06] Right.

[00:31:06] And with facial recognition, does it know that, no, you're a sicko, we're never giving you

[00:31:10] insurance.

[00:31:11] Yeah, I guess so.

[00:31:12] You could use these things nefariously.

[00:31:13] But you could also always legislate about that and forbid that kind of use.

[00:31:18] So, yeah, I think this is good news.

[00:31:19] I think this is AI to the good.

[00:31:21] Yeah, agreed.

[00:31:22] Agreed.

[00:31:23] It can recognize blood flow pattern changes by analyzing 30 regions of the face and the

[00:31:30] palm.

[00:31:31] And that's how it does it.

[00:31:33] 94% accuracy with high blood pressure in their tests, 75% accuracy with diabetes.

[00:31:40] So, interesting stuff.

[00:31:42] Yeah.

[00:31:43] And I think we're going to see more along these lines.

[00:31:45] It seems like the, you know, in a time where AI is kind of that analogy of when you're

[00:31:53] a hammer, everything looks like a nail.

[00:31:54] Yeah.

[00:31:55] It definitely feels like that with AI a lot.

[00:31:58] But when I see stories like this, I'm like, okay, in my mind, this really lines up.

[00:32:03] This seems like a hammer and nail solution to make things better and easier and also not

[00:32:10] completely take over.

[00:32:12] You know, you still need someone who knows what they're doing to work in tandem with this

[00:32:17] and provide the expanded support that goes around it.

[00:32:21] So, I'm all about that.

[00:32:23] All right.

[00:32:24] We are going to take a quick break and let you listen to the little thing.

[00:32:28] And then when we come back, we're going to talk a little bit about AI as a learning

[00:32:33] companion.

[00:32:33] That's coming up in a second.

[00:32:38] Google has a new experiment called Learn About.

[00:32:41] It's built on its Learn LM AI model and it is specifically used for educational applications,

[00:32:50] educational purposes.

[00:32:52] And I don't know, it's kind of an interesting way to present.

[00:32:56] Like, I played around with it a little bit earlier this morning and it kind of feels like

[00:33:01] an interactive Wikipedia of sorts.

[00:33:05] It's got a nice little interface and everything that you can interact with.

[00:33:09] Yeah, I like that it is more limited in the data it calls upon.

[00:33:13] So, it's trying to use quality data that would be acceptable in an educational setting.

[00:33:19] And that it's trying to help people do something interesting and useful.

[00:33:23] I'm all for all that.

[00:33:25] I used it.

[00:33:27] Interestingly, Jason, I accidentally erased this tab.

[00:33:31] And so, I did it again.

[00:33:32] But being that I do what I do, I asked it twice to explain the development of mass media to me.

[00:33:38] Ha ha.

[00:33:39] The development of mass media.

[00:33:42] Yeah, yeah.

[00:33:42] See if you can try that.

[00:33:43] While you talk about it, I can.

[00:33:45] Yeah, while you talk about it, I can show it.

[00:33:46] So, explain the development of mass media to me.

[00:33:50] And so, I did it once and it came up and it says that Gutenberg invented the printing press in 1440.

[00:33:56] Well, I happen to know, having written a book about Gutenberg, which by the way is out in paperback right now.

[00:34:01] The Gutenberg Parenthesis is even cheaper.

[00:34:03] Yeah, congrats.

[00:34:03] You can get it.

[00:34:04] Thank you.

[00:34:04] Well, actually, most say they're not so sure about that date in 1440.

[00:34:09] But it's there.

[00:34:10] Then, when I did this earlier, it listed next the creation of the first newspaper in America in 1690.

[00:34:21] Skipping over the first newspaper in 1605 in Europe.

[00:34:26] Now, this time I did it.

[00:34:27] It skipped over a newspaper entirely and jumped right to the first telegraph message.

[00:34:32] And yours has telegraphed.

[00:34:33] Then you have telephone on yours.

[00:34:35] And I have first public film screening on mine.

[00:34:39] Then radio, then TV.

[00:34:42] So, one point is because of the randomness, the random output of generative AI.

[00:34:51] Pardon me.

[00:34:53] It means you get different answers every time.

[00:34:56] And as an educational tool, that's a problem if you're trying to get a reliable sequence.

[00:35:04] But that's not to say there is one specific answer to everything on earth.

[00:35:08] Of course there isn't.

[00:35:08] And you're going to get different answers.

[00:35:09] If you asked five students to explain the same thing to you or five teachers, you would get different takes on it.

[00:35:14] And that's okay.

[00:35:15] Yeah, that's a good point.

[00:35:15] But for a student to say, I want to get a reliable, consistent answer here, that's going to be difficult.

[00:35:24] The third thing is, so I wanted to see, okay, on what basis did they say 1440 for Gutenberg?

[00:35:29] Where's the citation?

[00:35:31] And I click on that and it doesn't give me any sourcing.

[00:35:36] It gives me the image source, but it doesn't tell me where it got that.

[00:35:40] So I can't really check it, which is a third issue with this, I think.

[00:35:47] Yeah, so I'm seeing images sourced from these places.

[00:35:51] And that is one thing that is about this tool is that it's meant to – because a lot of this you could do with current LLM tools.

[00:35:59] Like I could open up perplexity and I could do a lot of this.

[00:36:01] It just wouldn't look as pretty, right?

[00:36:04] Like this is definitely more of like a – to coin an old term – multimedia experience.

[00:36:09] You've got your images, kind of clickable zones.

[00:36:12] It creates kind of like an interface that invites you to want to learn more versus just a wall of text.

[00:36:19] And then you have to think to ask the next question or whatever.

[00:36:21] It's almost like it builds a little mini website for you on expanding your learning.

[00:36:27] So this is interesting.

[00:36:28] I ran it again and this time it came back printing press 1450.

[00:36:33] Is it 14?

[00:36:33] Oh, wow.

[00:36:34] Yeah, and then when I clicked that to give me more information on it, what did it say?

[00:36:40] It said the printing press invented around 1450.

[00:36:43] So it stuck to its guns there.

[00:36:45] It didn't like recheck it.

[00:36:47] And then down here in Explore-related content in Germany around 1440, the goldsmith Johannes Gutenberg invented the movable type printing press.

[00:36:56] That is sourced.

[00:36:58] That's from Wikipedia.

[00:37:00] Yeah, that's from Wikipedia.

[00:37:02] Wikipedia.

[00:37:02] Yep, okay.

[00:37:03] And you weren't seeing these sources on yours.

[00:37:05] I wasn't at first.

[00:37:06] Is that right?

[00:37:06] But now instead I see Britannica.

[00:37:08] Then there's a Wikipedia for a different part of it.

[00:37:13] Then I see history.com with 1450, 1448 from MIT, right?

[00:37:21] So you think it might be better to say if they just said about 1440, about 1450.

[00:37:27] Yeah.

[00:37:28] Yeah, right.

[00:37:29] Because that's the way facts are.

[00:37:31] So it's an interesting experiment.

[00:37:34] They make an emphasis.

[00:37:35] It's just an experiment.

[00:37:36] It's a good potential use.

[00:37:40] But I think we find ourselves in inevitable trouble with anything involving generative AI and facts.

[00:37:49] I mean, if you're a student and you use this tool to research and learn about the printing press and you got the results that I got and then you end up going into your test a week later and it says, when was the printing press invented?

[00:38:05] And you say what you studied, which is what it gave you, you would get that answer wrong.

[00:38:10] Right.

[00:38:11] Right.

[00:38:11] And for an educational tool, that's not okay.

[00:38:15] You're right.

[00:38:15] There needs to be some sort of checks and balances there to know.

[00:38:22] Oh, like that seems like a fact that was really easy to get right.

[00:38:26] There is dispute about when he did it.

[00:38:28] We don't know exactly when he invented it.

[00:38:30] We do know when the Bible came off the press, which was 1454.

[00:38:34] Okay.

[00:38:35] We know when he was in Strasbourg.

[00:38:39] There's some argument that he invented it while he was in Strasbourg.

[00:38:41] And Strasbourg would like to say that because they want to claim to be the birthplace.

[00:38:45] Mainz, where I just was last week, says, no, no, no, that happened here.

[00:38:48] And he was only here between this date and that date.

[00:38:50] So there's debate and a good educational tool would present that debate.

[00:38:55] Yeah, right.

[00:38:56] Instead of saying 1440, next minute, 1450.

[00:39:01] Right.

[00:39:01] And so...

[00:39:04] Somewhere in the decade between 1440 and 1450.

[00:39:08] Yeah, and give me as a student that breadth of debate so I can judge for myself

[00:39:13] and so I can become more conversant in how facts work.

[00:39:16] For sure.

[00:39:18] So...

[00:39:18] Huh.

[00:39:20] Yeah.

[00:39:21] Interesting.

[00:39:23] Okay.

[00:39:23] Well, everybody, go check it out for yourself.

[00:39:26] Learning.google.com.

[00:39:28] And there's a bunch of experiments there.

[00:39:30] And you can find Learn About and see what you think.

[00:39:33] But, yeah.

[00:39:34] I do think that it's cool that Google is doing so many of these experiments out in the open

[00:39:42] and that anyone can access them and interact with them.

[00:39:45] You know, their AI studio has so many really neat and interesting kind of AI experiments

[00:39:51] that are actually really useful depending on what you want to use them for.

[00:39:56] And they're not behind a paywall.

[00:39:57] So I think that's...

[00:39:58] I give them props for that.

[00:39:59] Yep.

[00:40:01] Particle is a company founded by former Twitter engineers.

[00:40:05] And that company is setting its sights on the combination of AI and news, a topic that we cover on this show quite often.

[00:40:12] And, you know, so often is the case that news is not friendly or in a friendly relationship with AI.

[00:40:21] And AI is just scooping up all the news and repurposing it so that you don't click on the link and go check it out yourself.

[00:40:28] And in this case, Particle has the goal to assist publishers, according to the article on TechCrunch anyways,

[00:40:39] and kind of avoid that usual resistance from news regarding all that I just spelled out there.

[00:40:47] It plans to compensate publishers as well as drive traffic to news sites.

[00:40:52] And it's got some interesting features kind of integrated in there.

[00:40:55] But before I kind of talk about the features, I'm curious to know, like, how you, from your news background perspective,

[00:41:03] how you feel about, I guess, companies like this creating projects like this?

[00:41:09] Because I feel like this isn't entirely new.

[00:41:11] I feel like I've seen some of this before.

[00:41:13] But what are your thoughts?

[00:41:14] Yeah, it's not unlike Perplexity's Discover.

[00:41:17] Yeah, right.

[00:41:19] Unfortunately, I can't download the app because it's an Apple app.

[00:41:22] It is.

[00:41:22] Apple only or iOS only.

[00:41:24] So I can't do it yet.

[00:41:25] The website itself shows you just a few articles.

[00:41:28] It doesn't give you the full functionality at ParticleNews.ai.

[00:41:33] But if I click on a given story there, one thing, it then comes off,

[00:41:39] German Bundestag faces new elections after collapse.

[00:41:42] It takes the key elements here, puts them in bullet form, and creates links for certain things like the traffic light coalition, which is called the Ampel coalition, because the colors of the parties involved are the colors of a traffic light.

[00:42:00] So you can go through the explanation, and that's nice.

[00:42:02] And you can see the chancellor, Olaf Scholz, and see his name linked.

[00:42:07] Then it says at the bottom 32 articles.

[00:42:10] And you get links to not all 32 articles, but you get links to a dozen of them.

[00:42:15] And part of what that signals, just as perplexity does, is the amount of repetition in the news business.

[00:42:22] That's interesting, Jason.

[00:42:23] You got 32.

[00:42:24] I only got a dozen.

[00:42:25] Huh.

[00:42:26] Well, it says 32, but then when I open it, it's, yeah, it's probably more like 13 or 14 that it actually shows.

[00:42:31] You got more than I did.

[00:42:32] You got a lot more than I did.

[00:42:33] Okay.

[00:42:33] Again, random AI.

[00:42:36] Yeah.

[00:42:37] And so, and they're making a shtick that they're not going to steal, that's TechCrunch's word, from publishers.

[00:42:46] And I think for the point at which you actually quote news, there's an argument to be made there.

[00:42:55] But if all you're doing is linking to it, then you're the same as Google or Facebook or anybody else.

[00:42:59] And so I don't know that there is an obligation to pay at that point.

[00:43:03] If you're using it in the app, then yes, there would be.

[00:43:07] But then who do you pay?

[00:43:08] Out of the 38 articles that contributed to its knowledge of this commodity event, which is John Thune elected as Senate Majority Leader, everybody knows that.

[00:43:20] That's open news.

[00:43:22] Why one site should be paid for that versus another site, I don't know how you possibly figure that out.

[00:43:27] There are other places out there like ProRata.ai, which is my friend Bill Gross and Tollbit and others that are trying to do the same thing.

[00:43:38] It's trying to find ways to allocate value to the publishers.

[00:43:44] But what's interesting about this too, Jason, I think, is this is starting to get toward the agentive world we've talked about so much.

[00:43:52] How do you become discoverable here?

[00:43:56] And I saw something fascinating this week, just as a matter of nomenclature, that we all know that SEO exists, search engine optimization, which became an entire industry with whole convention floors filled with booths with people with their efforts to promise to have you rise higher in Google.

[00:44:15] Now they've named the successor, which is GEO, for generative engine optimization.

[00:44:23] Oh, no way.

[00:44:24] Okay.

[00:44:25] And so that's now what they're calling this.

[00:44:27] So you can see the same.

[00:44:27] You can just see the new industry burgeoning up from that.

[00:44:31] We're going to make you show up really high in particle.

[00:44:35] Okay.

[00:44:36] And a link to you.

[00:44:37] Okay.

[00:44:39] I mean, yeah, it's interesting.

[00:44:41] I mean, I'm used to, with the tools that I use, seeing stories linked to when I'm doing research.

[00:44:49] Right.

[00:44:49] And I use that regularly.

[00:44:51] When I go to a story here on particle news and, oh, sorry, I thought I was showing the screen.

[00:44:58] When I go to a story here and it shows the 13 articles below, I'm happy, or I don't even know if it is all 13, but it says 13.

[00:45:05] And then there's a bunch listed down here.

[00:45:07] I'm happy that they're there.

[00:45:08] Yeah.

[00:45:09] But are you really going to go with them?

[00:45:11] I have no idea which one I would go to and why other than recognition, like, oh, I like the verge.

[00:45:16] So I'm going to go check out, you know, so maybe there's a little bit of that.

[00:45:20] But again, like, I don't, I guess when I'm looking at this, like, I'm not seeing much different here from what I see in other things.

[00:45:30] Like, like when I use perplexity, I might get a summary that looks very similar to this.

[00:45:34] It might not be formatted in the same way, but it's going to give me the summary of the information.

[00:45:38] It's going to give me the links that it used to create that summary.

[00:45:42] And I guess, what, is the difference here that they're planning on actually paying publishers for it?

[00:45:48] Well, they're vague about that.

[00:45:51] They're also saying the value, just like Google, the value we're going to send you is by sending you links.

[00:45:57] Okay.

[00:45:57] So, you know, who knows?

[00:46:00] Yeah.

[00:46:00] I guess what I'm saying is if publishers are upset with the status quo as it is with AI tools so far, how is this different?

[00:46:11] By looking at it, it doesn't look much different to me.

[00:46:14] It just looks like it has a little bit of a graphical kind of enhancement to it.

[00:46:18] I agree.

[00:46:19] As a user, though, I would say that some of the features of this tool actually sound kind of interesting, right?

[00:46:27] Like, it's not straight up summary.

[00:46:29] There's a feature called Explain Like I'm Five, which honestly feels kind of young if I'm honest, but I get the point.

[00:46:35] It's like, simplify this for me.

[00:46:38] There's another tool called Opposite Side to present differing viewpoints for a story.

[00:46:44] Just the facts that lists the five Ws, who, what, when, where, and why.

[00:46:50] And then they have plans for a chatbot, which reminds me of our interview with Størmer.

[00:47:03] Yes.

[00:47:04] Yes.

[00:47:05] Sven Størmer Thaulow.

[00:47:07] There we go.

[00:47:07] Sven.

[00:47:08] I was like, what is his first name?

[00:47:09] Sven Størmer Thaulow.

[00:47:11] And his, you know, when he came on episode two, I believe it was, and talked about how in Norway, like, they had this.

[00:47:18] When we talked to him back then, almost ten, you know, nine months ago or whatever, the ability to interact with the news story and ask questions around it with a bot of sorts.

[00:47:28] And I think that's interesting.

[00:47:29] I like that.

[00:47:30] Yeah, but I'm looking at Perplexity right now, the discoverer there.

[00:47:32] And it lists four sources for the story about OpenAI buying the chat.com.

[00:47:40] And for each of those sources, it has at least two lines out of the lead of that source so I can get some idea of what I'm clicking on there, which is more than a logo and better than a logo, I think.

[00:47:51] Yeah.

[00:47:52] Yeah, interesting.

[00:47:53] Well, and it is only iOS for now.

[00:47:56] So, you know, maybe someday.

[00:47:57] Yes, we can't fully play with it yet and don't know what all of them will do.

[00:48:00] Bring a little love to Android once again.

[00:48:03] Yep, you feel so left out.

[00:48:06] We're in the corner sobbing to ourselves while everyone else is playing with the fancy toys.

[00:48:12] A few more stories before we wrap things up.

[00:48:14] You may remember a couple of weeks ago we talked about a robot artist named Ai-Da.

[00:48:21] And Ai-Da had created a piece of art called AI God.

[00:48:26] And it is a piece of art about Alan Turing or showing kind of an interpretation of Alan Turing.

[00:48:32] And it was put up for auction at Sotheby's.

[00:48:35] It sold for $1.3 million, nearly 10 times its estimated value, 10 times more than they expected.

[00:48:44] To get out of this thing.

[00:48:45] Well, it helps that they threw in a bridge.

[00:48:51] Like, is that a bridge?

[00:48:53] Yeah.

[00:48:53] I was going to say.

[00:48:55] I was going to say, is that code for something?

[00:48:59] Or did they actually throw in something?

[00:49:00] No, that's the old joke is I got a bridge to sell you.

[00:49:04] Yeah, yeah, exactly.

[00:49:05] Well.

[00:49:05] Don't get it.

[00:49:06] Well, what does it mean when a piece of art like this goes for $1.3 million?

[00:49:10] I mean, it's perceived value.

[00:49:11] Someone felt like it was worth it.

[00:49:13] Well, so did NFTs for a while.

[00:49:15] Yeah, that's true.

[00:49:16] But the thing is, they can, okay, this is, hello, you found a revenue stream.

[00:49:20] Make 100 of these a day.

[00:49:23] Yeah.

[00:49:23] Yeah, exactly.

[00:49:25] Well, I guess, but then you probably reduce.

[00:49:26] But then the value goes down.

[00:49:27] Because it's not.

[00:49:28] The value.

[00:49:30] This artwork, by the way, it's nice to get some scale.

[00:49:33] They showed off this photo in the BBC article of the summit, which had them on display.

[00:49:41] And I don't know.

[00:49:42] I mean, hey, I've seen computers make some pretty bad artwork.

[00:49:46] Yeah.

[00:49:46] Yeah.

[00:49:46] And that actually is somewhat appealing.

[00:49:50] I like it.

[00:49:50] You know, as I said, I'm likely to be teaching a course at Stony Brook in AI and creativity.

[00:49:57] And what interests me about this story is not what the robot did, but what the human did to have it do that.

[00:50:05] What's the interaction between them?

[00:50:07] I talked to a professor at Stony Brook this week, a cellist who does really fascinating work with AI and music.

[00:50:18] And what interests me there is the human just got lost in this.

[00:50:23] If they worked together to create a kind of art that the human otherwise wouldn't have created, then talk about them both.

[00:50:30] But they try to make this cooler by saying, oh, the robot made it.

[00:50:34] Well, no, the robot didn't.

[00:50:34] It did everything.

[00:50:35] No, it didn't.

[00:50:36] It was told to do something.

[00:50:39] Yeah.

[00:50:40] Yeah.

[00:50:40] And it was created.

[00:50:41] Yeah.

[00:50:42] It was created by a human to do something.

[00:50:44] Yes.

[00:50:45] And this is the something that it did.

[00:50:49] Nonetheless, I guess it's a historic moment.

[00:50:51] That's probably that's where my mind goes as far as like, where does the value come?

[00:50:55] Well, if it's the first, then at a time when AI as a buzzword, as a technology, as so many things has this massive amount of interest and momentum and all that, then being the first artwork auctioned at Sotheby's or whatever, maybe that's where the value comes from.

[00:51:15] Then if it fails like NFTs or whatever, at least you were the first.

[00:51:22] Well, which is the only hope.

[00:51:23] Yes.

[00:51:23] Because if you're going to buy art like this, you're buying it for speculation.

[00:51:26] You're buying it for the value to go up.

[00:51:27] For sure.

[00:51:28] Yeah.

[00:51:28] One hundred percent.

[00:51:29] Yeah.

[00:51:30] And at least in this case, you have something to show for it as opposed to NFTs that are just, I don't know, still very, very silly to me.

[00:51:37] And finally, in another first, a Grammy nomination was given to the Beatles.

[00:51:46] Actually, I don't know if that's, were the Grammys around when the Beatles were around?

[00:51:50] I honestly have no idea.

[00:51:51] Jesus.

[00:51:52] Yes, Jason.

[00:51:53] Yes.

[00:51:54] God, I feel so old now.

[00:51:57] I don't know.

[00:51:58] Like, I know that the Oscars have been around for a long time and it only just occurred to me.

[00:52:02] I actually don't know that for certain about the Grammys.

[00:52:04] But so then did the Beatles ever win a Grammy?

[00:52:08] Actually, now this is a perfect question for that Google product that we were just using.

[00:52:14] That's interesting.

[00:52:15] Yeah.

[00:52:16] Yeah.

[00:52:17] The short answer is many, but.

[00:52:20] Okay.

[00:52:20] I want to see what it says.

[00:52:22] Now I'm just curious.

[00:52:23] Now I'm on a tangent because I realized I asked a question that maybe everybody else knows but me.

[00:52:28] The Beatles, did they ever win a Grammy?

[00:52:33] Well, hold on.

[00:52:34] I'm surprised if they didn't.

[00:52:35] Hold on.

[00:52:35] Maybe I'm wrong here.

[00:52:37] Several Grammys.

[00:52:38] Several Grammys.

[00:52:38] They first won in 1965 for Best New Artist, Best Performance by a Vocal Group for Hard Day's Night,

[00:52:44] went on to win a total of seven Grammy Awards in their career.

[00:52:47] Okay.

[00:52:48] Including a posthumous award in 1997 for Free as a Bird.

[00:52:52] Okay.

[00:52:52] That's good.

[00:52:53] So while you have the screen on, now go to Google and just say, did the Beatles win a Grammy?

[00:52:58] You're going to find a less satisfying answer.

[00:53:01] Okay.

[00:53:02] On a search.

[00:53:03] So this is Google being itself.

[00:53:05] What you get is in cursed dark mode.

[00:53:09] Yes.

[00:53:11] Sorry.

[00:53:12] It's just boxes.

[00:53:12] It's just how I roll.

[00:53:13] For those of you listening, I hate dark mode, so it's a joke.

[00:53:17] And now I hate Jason for using it.

[00:53:20] You have a bunch of boxes of all the awards they won, but no sense of it.

[00:53:24] And then a link to, in your case, Grammy.com.

[00:53:29] I got a list of awards and nominations received by the Beatles from Wikipedia.

[00:53:34] Okay.

[00:53:35] That's below the fold for me.

[00:53:37] Okay.

[00:53:38] Well, I feel a little silly for even questioning whether they did or not.

[00:53:43] No, no, no.

[00:53:44] Because I am a huge Beatles fan.

[00:53:46] Yes.

[00:53:47] I love the Beatles.

[00:53:48] It's just a factoid that I never actually thought to look up and research.

[00:53:51] So I'm happy that we used Learn About, and google.com for a less enjoyable answer to that question.

[00:53:59] What I actually meant, though, by saying another first is that we haven't seen a Grammy given to a song or an artist that, as far as we know, heavily relied on artificial intelligence to create it.

[00:54:14] And I guess last year at the Grammys, an artist was denied a Grammy nom, even though it was a song that was potentially expected to make it.

[00:54:26] And I can't remember who the artist was, but I think the reason was that the sample that was used was unlicensed.

[00:54:32] Well, this year, and actually I think this was last November, the Beatles released their final song, put that in air quotes, until the next final song they decide to release from the archives, called Now and Then.

[00:54:45] And it's essentially derived from an old John Lennon recording of him just sitting at a piano with his tape recorder, I think in 1977 or '78.

[00:54:59] It's pretty rough. But I went through a whole period in my 20s where I got really into the Beatles and got deep into their bootleg catalog.

[00:55:12] And this was one of the songs that I really loved more than most of the other bootlegs that I listened to.

[00:55:18] So I was pretty happy that they revitalized it.

[00:55:22] You could listen to it back then. I didn't know that. I thought it was just newly discovered, period.

[00:55:25] You knew of this?

[00:55:26] No. Well, I mean, that depends on what level you were looking at, I guess.

[00:55:31] Like, I got so into Beatles bootlegs that I was, like, trading them online and finding bootleg lists.

[00:55:37] And, okay, here's the ones I have. What do you have? Oh, I've never heard that one. Okay.

[00:55:42] And so I did hear it back then.

[00:55:44] Uh-huh.

[00:56:14] Fantastic.

[00:56:15] The Peter Jackson documentary.

[00:56:19] And they were able to do stem separation, pull out John Lennon's vocals, his piano.

[00:56:27] And then they had also attempted to do this back in the 90s when the Beatles were doing their anthology.

[00:56:34] And they did Free as a Bird and they did Real Love as, like, new Beatles songs, even though Lennon was not alive at the time, where they took those old recordings and redid them.

[00:56:44] They tried to do that with this song, but they weren't able to because the recording was so rough and so beat up.

[00:56:50] And now, with AI, they were able to clean it all up and pull it all out, take some of those recordings they attempted back in the 90s, layer it all together, and create this song, which has now been nominated for a couple of Grammys.
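
For the curious, here is a minimal sketch of what the stem separation described above can look like in practice. It assumes the open-source Demucs separator is installed (pip install demucs); the input filename is a hypothetical placeholder, the exact flags may vary by version, and this is only an illustration, not the tooling the Beatles' team actually used.

```python
# Minimal sketch: isolating a vocal stem from a rough mono demo tape.
# Assumes the open-source Demucs separator is installed (pip install demucs).
# "lennon_demo.wav" is a placeholder filename, not a real asset.
import subprocess
from pathlib import Path

demo = Path("lennon_demo.wav")   # placeholder input recording
out_dir = Path("separated")      # where the stems will be written

# Ask Demucs for a two-stem split: "vocals" and everything else ("no_vocals").
subprocess.run(
    [
        "demucs",
        "--two-stems=vocals",    # vocals vs. accompaniment only
        "-o", str(out_dir),      # output directory for the stems
        str(demo),
    ],
    check=True,
)

# Demucs writes <out_dir>/<model_name>/<track_name>/vocals.wav and no_vocals.wav.
print(sorted(out_dir.rglob("*.wav")))
```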

[00:57:04] Which we can't play for you because otherwise we'll get taken down.

[00:57:07] No. No, we cannot play that. Of all things, we can definitely not play that song.

[00:57:12] But it's a great song.

[00:57:13] Yeah.

[00:57:14] Let me just do a quick lightning round on two related stories that I put into the rundown at the very last minute.

[00:57:19] One is that Randy Travis, who has basically lost his voice, is using AI to be able to keep singing and keep creating, which is important.

[00:57:29] I think that's, again, a very voluntary movement where he wants it, he needs it, and he's grateful for this occurring.

[00:57:36] Yeah.

[00:57:38] And then...

[00:57:40] Oh, no, that's not it.

[00:57:41] No, but, well, that's good. You just moved it.

[00:57:42] But then, what I was going to talk about next is that YouTube is taking some licensed music and is going to enable users to modify it, to remix it in 30-second clips using AI.

[00:57:57] So you can, for example, change the mood or the genre on command.

[00:58:02] Okay.

[00:58:03] Which is really interesting.

[00:58:04] As you as a musician, I'm always interested in your view about this.

[00:58:07] Yeah.

[00:58:08] You know, if you take the Dream Track experiment.

[00:58:11] Yeah.

[00:58:12] And then be able to use that on YouTube.

[00:58:16] And these are artists that have agreed to participate.

[00:58:19] Yes.

[00:58:20] So, you know, it's a little different.

[00:58:21] I mean, some of these tools do exist.

[00:58:23] Like, you can go into Udio or Suno, and you can't necessarily upload a Charlie Puth song or a Charli XCX song and say, let's turn this into a different thing.

[00:58:36] But if you create something in Suno, you can then kind of guide it into, you know, turn this into a folk song.

[00:58:43] You know, from a dance song into a folk song.

[00:58:45] And you can kind of go in those directions.

[00:58:47] That kind of genre morphing is a feature you can get with these systems.

[00:58:52] So I'm not surprised at all.

[00:58:53] And I'm also not surprised that we're seeing more of this with sign-off from the artists.

[00:58:59] Because I think more and more, we're going to see artists – you know, some artists are just automatically friendly to this sort of thing.

[00:59:06] Because they're like, well, you know what?

[00:59:07] This is the development of technology.

[00:59:09] I'm pro-technology.

[00:59:10] I want to see where this is headed.

[00:59:13] And they sign off.

[00:59:14] I think we're going to – I'm imagining we're going to see more and more of that as we go forward because the technology enables this.

[00:59:21] And it becomes the kind of question: are you providing it so you have some sort of agency in the process?

[00:59:29] Or are you outside of it and people are doing it anyways and it doesn't involve you?

[00:59:35] And I don't know.

[00:59:36] Yeah, it's interesting to me.

[00:59:37] I'd love to play around with it and kind of see.

[00:59:39] I think it's a test right now.

[00:59:40] So it's not something that everybody has access to at this stage.

[00:59:45] While I'm at it, I'll plug my new colleague at Stony Brook, Margaret Schedel, S-C-H-E-D-E-L, who's a professor in the Department of Music.

[00:59:54] But she does composition and computer music.

[00:59:57] And she's a cellist and does amazing things mixing music with computer aid, with images and video and performance.

[01:00:07] And that's kind of the really high end here, where she's an accomplished technologist and an accomplished musician.

[01:00:15] And pushing the tools, not the pre-made tools that YouTube will give you, but making the tools and seeing what they can do.

[01:00:23] Yes, right.

[01:00:23] Which is really, really interesting.

[01:00:25] And I had a fascinating chat with her.

[01:00:28] Interesting.

[01:00:28] Yeah, that's a whole other level, right?

[01:00:31] It's musicians recognizing that these tools can be crafted to do things.

[01:00:36] Because I think as an artist, as a musician, often you're looking for a way to stand out from everyone else, especially as all these tools have become so democratized.

[01:00:47] And so many people are using the same exact tools to make their own music.

[01:00:52] The music might be different, but the sounds are the same, that sort of stuff.

[01:00:55] And so having the ability or the skills as a musician to create these tools with AI and have them do things that you can't easily replicate.

[01:01:07] Yeah, I think that's interesting.

[01:01:09] And to raise a generation of musicians and students that can work with all those things, that's exciting.

[01:01:16] Yeah, indeed.

[01:01:17] Indeed.

[01:01:17] It has me really intrigued to see what the next five or ten years are going to be like from a plugins standpoint.

[01:01:25] In the music world, you're always using these audio plugins and everything.

[01:01:29] And there's some AI plugins that exist right now, but there's a lot to explore there.

[01:01:35] Yep.

[01:01:38] Well, that's been a good look into creativity, and hopefully we'll have more stories like that.

[01:01:44] That stuff, that part of the conversation always lights me up.

[01:01:47] It ignites me.

[01:01:48] So I love when those stories happen.

[01:01:51] Jeff, you are awesome.

[01:01:53] JeffJarvis.com.

[01:01:54] People should go there.

[01:01:55] People should buy your new book and your other books.

[01:02:00] Also, The Gutenberg Parenthesis, my pride and joy, is out in paperback now.

[01:02:03] Now in paperback.

[01:02:04] It's amazing.

[01:02:05] Yay.

[01:02:06] Love it.

[01:02:06] So if you go to-

[01:02:07] See, it flops.

[01:02:08] Yes.

[01:02:09] Go to JeffJarvis.com.

[01:02:10] And then, if I may, to keep the plugging going.

[01:02:13] Yes, please.

[01:02:13] If you are in the Bay Area or virtual on December 4, I'm going to be speaking about the new book,

[01:02:18] The Web We Weave, at the Commonwealth Club in San Francisco.

[01:02:22] Jason's nice enough to be coming on down.

[01:02:23] I won't be on the show that week as a result, because I'll be in the middle of San Francisco

[01:02:26] and busy streets.

[01:02:29] But you can see me that way.

[01:02:30] You can always then watch the show another time.

[01:02:34] And so do look up the Commonwealth Club in San Francisco, and you can buy a ticket for in-person or video.

[01:02:42] Yeah.

[01:02:43] I'm super excited.

[01:02:44] I was like, oh, I can make that work.

[01:02:46] Yay.

[01:02:46] You know, that's just 45 minutes down south from where I'm at.

[01:02:50] So I've got my copy of the book being delivered today.

[01:02:54] Thank you for that.

[01:02:55] You would have gotten a free one.

[01:02:56] Jesus.

[01:02:58] Oh, please.

[01:02:59] I'm happy.

[01:02:59] I'm happy to do it.

[01:03:00] I'm happy to get it.

[01:03:01] So I'll be bringing that, and I'll ask you to sign it, you know?

[01:03:05] Of course, my friend.

[01:03:06] I don't see you in person often enough.

[01:03:08] It's true.

[01:03:09] So got to get the book signed.

[01:03:11] So anyways, I'm looking forward to it.

[01:03:14] Thank you again, Jeff.

[01:03:16] Thank you, everybody, for watching and listening.

[01:03:17] Y'all are wonderful.

[01:03:18] Your support means a lot to us.

[01:03:20] Everything that you need to know about this show and what we do here with AI Inside can be found at AIinside.show.

[01:03:27] All the subscribe links for the podcatcher of your choosing can be found there.

[01:03:31] Subscribe.

[01:03:32] You've got the live information.

[01:03:34] All of the episodes, as they're published, end up on the site, including the video versions if you prefer videos.

[01:03:40] So you don't have to go to the YouTube channel.

[01:03:41] You can just go to the page and watch it from there.

[01:03:44] So subscribe.

[01:03:46] Also share your thoughts.

[01:03:47] If you're enjoying this podcast, just a quick reminder to go on Apple Podcasts, or whatever podcatcher allows you to rate and review, and drop a review.

[01:03:55] We'd really appreciate it.

[01:03:56] And then, finally, you can go to patreon.com slash AIinsideshow and you can support us that way.

[01:04:03] That's a direct support for what we're doing.

[01:04:06] You get ad-free shows.

[01:04:07] You get a Discord community, hangouts.

[01:04:09] You get an AI Inside t-shirt if you happen to become an executive producer, of which we have five right now.

[01:04:16] That's Dr. Du, Jeffrey Maricini, WPVM 103.7 in Asheville, North Carolina,

[01:04:22] Hannah Paul Lang, and Ryan Newell, who all continue to support us on the executive tier.

[01:04:28] And we can't thank you enough.

[01:04:30] That's why we keep saying your name at the end of each and every episode.

[01:04:33] That is one of the big perks that you get.

[01:04:35] So thank you all for your support.

[01:04:37] And thanks for watching and listening each and every week.

[01:04:41] We love learning about AI along with you.

[01:04:43] So we hope you're enjoying things.

[01:04:45] And we'll see you next time, another episode of AI Inside.

[01:04:49] Take care, everybody.

[01:04:50] We'll see you soon.

[01:04:51] Bye.