Year in Review pt. 1
December 25, 2024 · 1:09:02

In this year-end review, Jason Howell and Jeff Jarvis examine AI's central place in the tech story of 2024: from the biggest trends, to the ongoing fight over copyright and training data, to safety and regulation, and the moving goalposts of AGI.

🔔 Support the show on Patreon!

CHAPTERS:

0:05:52 - Trends

0:06:07 - Multimodality

0:14:17 - RAG (Retrieval Augmented Generation)

0:24:38 - From chatbots to Agents

0:29:48 - From summary to creation

0:35:38 - Creativity and quality

0:39:05 - Search

0:43:33 - Copyright and the ongoing fight over training data and fair use

0:50:50 - AGI: Around the corner or BS?

0:56:43 - Regulation

Learn more about your ad choices. Visit megaphone.fm/adchoices

[00:00:00] This is AI Inside, episode 48, recorded November 22nd for December 25th, 2024. Year in Review, part 1.

[00:00:11] This episode of AI Inside is made possible by our amazing patrons at patreon.com slash AI Inside Show.

[00:00:18] If you like what you hear, head on over and support us directly, and thank you for making independent podcasting possible.

[00:00:24] Hello, everybody, and welcome to AI Inside, the show where we take a look at the AI that's layered throughout the world of technology

[00:00:34] and this episode is very different from the episodes you're normally used to getting.

[00:00:40] I'm one of your hosts, Jason Howell, joined on the other side of the screen, if you're watching the video version, by Jeff Jarvis.

[00:00:45] How you doing, Jeff?

[00:00:46] Hey, boss. How are you?

[00:00:48] Sorry, my voice is a little croaky today, but I'm fine, so just bear with me.

[00:00:53] Yeah, okay, good, because I was a little concerned, but you went to an event and you were such a social butterfly that you probably talked a lot.

[00:01:01] I don't like doing this anymore because it's easier to just do social media, you know?

[00:01:05] It's just easier to text people.

[00:01:06] Yeah.

[00:01:06] But I got to see Paris Martineau, my co-host on This Week in Google.

[00:01:10] It's the Committee to Protect Journalists, so we were there in fancy attire and all of that.

[00:01:14] Oh, right on.

[00:01:15] Oh, that's so cool.

[00:01:16] I actually just, I can't remember the message, but I virtually ran across her on Threads earlier today and waved and said hi.

[00:01:26] We were saying nice things about you together last night.

[00:01:29] Aw, shucks.

[00:01:30] We were talking about you.

[00:01:31] Well, we'll have to have Paris on.

[00:01:32] I mean, honestly, like she's-

[00:01:33] Well, that'd be great.

[00:01:34] She'd probably be a fun guest to get on and just talk about some of the AI news one of these days.

[00:01:40] That can happen in the year 2025, because as you are watching this, you already know we're still back in the month of November as we record this.

[00:01:51] But as you know, right now it is kind of the holiday season, and we aren't recording the show live for Christmas and New Year's week.

[00:02:02] But we recorded well in advance. The day that we're recording this, it is November 22nd.

[00:02:10] So, you know, be forewarned, some of the topics that we talk about today might be a little, you know, we might have missed something.

[00:02:17] There may be some news we're missing.

[00:02:18] Yes.

[00:02:19] Exactly.

[00:02:19] But we were kind of getting stuff together for a year in review episode and realized we had so much stuff happen this last year that we'd just go ahead and do two episodes.

[00:02:31] So you're going to get the first half now as you're listening and watching it.

[00:02:35] And then you're going to get the second half next week.

[00:02:39] And ho, ho, ho.

[00:02:40] Merry Christmas.

[00:02:41] Happy New Year.

[00:02:41] There you go.

[00:02:43] And not only that, not only is it the end of the year, but it's also about a year of AI Inside, especially if you count the few episodes we did at Twit as a pilot.

[00:02:55] Indeed.

[00:02:56] And so it's really looking back on the year of working together.

[00:03:00] And we said in that first episode, whichever one it was, how we aren't experts.

[00:03:06] We don't know what's going on.

[00:03:07] We're doing this to learn.

[00:03:08] And as I look back, you know, we've learned, I've learned a lot in this year.

[00:03:13] And it's been a fascinating way.

[00:03:15] And I'm really grateful that you decided to do this, Jason, include me, because it forces us to think about what's going on and sometimes abstract and ask bigger questions and watch some amazing soap operas that are occurring around all this.

[00:03:31] And so we covered a lot in this year.

[00:03:36] So as we record this, I think we've had about 45 episodes.

[00:03:39] But if you add in the others and you add in the ones at the end of the year, it's a double anniversary, end of year festival.

[00:03:47] Yeah.

[00:03:48] I think when we were at Twit or when I was at Twit, you're still doing the This Week in Google podcast, of course.

[00:03:54] But when I was working at Twit and we started beta testing for Club Twit, the AI Inside podcast, I want to say it was like August of 2023, somewhere around there.

[00:04:05] So we had a solid like four months, I'd say, of trialing the show.

[00:04:11] So we are definitely at a year mark of you and I doing this podcast.

[00:04:15] It was that long?

[00:04:15] Wow.

[00:04:16] Yeah.

[00:04:17] Yeah.

[00:04:17] I mean, it was.

[00:04:18] It was August or September.

[00:04:19] It was somewhere around there.

[00:04:20] I know that we had probably at least three months of trial, like probably around 12 episodes if I'm.

[00:04:26] You could say we're still doing a trial, but that's OK.

[00:04:29] Well, yeah, totally.

[00:04:30] Because first of all, I super appreciate being able to do this with you.

[00:04:34] You are an incredibly busy guy, but you're also wickedly smart.

[00:04:38] And I learned so much listening to how you think about this industry.

[00:04:42] And also, yes, agreeing with the fact that when we started doing this show, I think the primary motivator for me, and you've said the same, is, man, AI really seems like a really big deal.

[00:04:57] It really seems like something that's going to make a lasting impression on technology.

[00:05:02] And I want to know more about it.

[00:05:05] And this seems like a great way to do that is you get plugged into the news cycle.

[00:05:09] You talk about it.

[00:05:11] And I mean, I've learned so much yet at the same time.

[00:05:14] I still have those moments, like a lot of those moments of like imposter syndrome of like, oh, my goodness, like there's still so much I don't know.

[00:05:20] Like, and we're probably going to come up on it today because we have this huge list of stuff to talk about.

[00:05:26] And as I'm going through it, I'm like, well, yes, I know that we've lived through a lot of this stuff the past year.

[00:05:31] But some of the details, there's just so much to manage and command.

[00:05:36] It can be a little overwhelming at times, but I really learned a lot.

[00:05:41] Yeah.

[00:05:41] And by the way, since we are talking, we're being honest with you.

[00:05:44] If we were really showbiz, we'd act like, hey, how's your Christmas going?

[00:05:49] But we're not.

[00:05:50] No, no, no.

[00:05:50] It's November.

[00:05:52] So it's interesting that we are two years past the launch of ChatGPT.

[00:05:57] Mm-hmm.

[00:05:58] November 2022.

[00:05:59] Wow.

[00:06:00] Yep.

[00:06:01] And my, how things have changed in two short years.

[00:06:03] Yeah.

[00:06:04] Yeah.

[00:06:04] It's interesting, too.

[00:06:05] Our first topic for this rundown is trends that went on.

[00:06:10] And it's interesting that everyone thought generative AI at first, I think, was going to be text because it was at first.

[00:06:17] And at the same time, there were diffusion models, which were different models, that were doing images at the same time.

[00:06:25] But we've seen a progression, an amazing progression in the last year, I would say, of the modes and media of AI.

[00:06:36] Mm-hmm.

[00:06:37] So we're from text to image to images that can move to video, now audio, now audio with video.

[00:06:45] It's pretty amazing the speed of that development and how they've taken these tools and mixed them together.

[00:06:52] Yeah.

[00:06:54] Yeah.

[00:06:54] You know, what it really kind of puts this into focus for me is I remember however many years ago when it was – I'm trying to – it wasn't DeepMind.

[00:07:06] It was Deep something.

[00:07:08] It was one of Google's earlier efforts, I'm talking like 2018 maybe, somewhere around there, where we were starting to see this generative AI image generation.

[00:07:20] And everything looked very psychedelic and very like dream state.

[00:07:24] Nothing was planted in any sort of reality.

[00:07:28] But still, it was very cool to see that a computer or a model was creating its own version of art.

[00:07:36] And seeing that then – and I remember witnessing some of that artwork and just being like, man, this looks like unlike anything I've seen before.

[00:07:46] It doesn't look real.

[00:07:47] But, man, this is going to get really interesting when this starts to really improve.

[00:07:51] And I think what you're talking about this past year, we've seen just a massive improvement in quality.

[00:07:59] At the same time, as we were talking about recently on one of the recent episodes, still kind of has that kind of glossiness to it.

[00:08:07] There's a certain tell that – if AI hadn't had this buildup the way it has the last couple of years and we haven't seen it evolve from – just talking about images, for example.

[00:08:20] And we kind of go to an alternate timeline when there were just digital artists creating things.

[00:08:26] Like I truly wonder if we'd have a different understanding of what we're looking at because right now I feel like I can see these images and go, okay, that's amazing.

[00:08:35] But it still kind of has that glossy quality and I wonder if it will move beyond that someday.

[00:08:40] Well, it wasn't just the glossy quality I think is a later stage.

[00:08:44] The earliest stage – I don't think I ever told you this.

[00:08:47] I did have one tweet exchange once with Sam Altman where I made fun of the images that were coming out of the early ChatGPT, saying that they'd been trained on the minds of teenage boys because they were women with, frankly, big breasts and that certain kind of cartoonish look.

[00:09:07] Right, right.

[00:09:08] And I said that and I forget what he said.

[00:09:10] I've tried to find the tweet again and it's gone.

[00:09:12] He thought better of it.

[00:09:14] But he said, oh, you know, like, come on, man.

[00:09:16] You know, you're not being fair.

[00:09:17] But it's true.

[00:09:18] There was an aesthetic that it had in those early days.

[00:09:22] Very cartoony.

[00:09:23] Not quite – well, it was the fantasy world aesthetic.

[00:09:33] Right.

[00:09:34] And then I think it tried to get realer and then we, of course, had the 10-fingers problem and all that.

[00:09:41] Interesting mutations and all that.

[00:09:42] Yeah, it got better.

[00:09:43] And then it tried to get more realistic but at the same time more creative.

[00:09:49] But you can still tell it's AI.

[00:09:50] It's funny, on This Week in Google recently, for some reason, as we go off on weird tangents as we do, we were talking about how Paris Martineau can go down the block and buy queso.

[00:10:01] And she's like, okay, so from Wonder Restaurant there.

[00:10:04] And next door is a dispensary for – weed.

[00:10:10] And she talked about the aesthetic of the dispensary.

[00:10:13] And they're always this kind of white, shiny thing.

[00:10:16] I said, yeah, it's like it was designed by ChatGPT.

[00:10:19] There's a certain aesthetic to this era that's coming in.

[00:10:23] I don't think it's had much of an influence yet on real-life art and illustration, but I think it will.

[00:10:29] Yeah, yeah.

[00:11:00] Yeah.

[00:11:01] It was about singular models.

[00:11:02] It was about this is really good at images.

[00:11:05] This is really good at text.

[00:11:06] This understands text.

[00:11:08] This understands voice.

[00:11:09] But never the paths will intertwine.

[00:11:12] And now this year, you've got GPT-4o and its advanced voice mode.

[00:11:17] You've got Gemini Live and Project Astra, which isn't out yet, but we can see that's where Google's headed.

[00:11:26] We've got all of these efforts to take the advancements in all the different directions and chocolate and peanut butter them together.

[00:11:40] And at the same time, I will also say some of this stuff is really neat, yet I don't go back to it.

[00:11:47] Gemini Live was neat when I played with it and interacted with it and everything, but I don't really use Gemini Live on a regular basis.

[00:11:55] And I do wonder at what point do we give in?

[00:11:59] So much of what I do is I could do with any of them.

[00:12:03] I've talked before, but my main use is I'm thinking of a word, but I can't think of it.

[00:12:09] And it has this and that and this and that.

[00:12:11] And I just go to meta.ai because it's quick and easy.

[00:12:14] It's there.

[00:12:16] It's in all of their products.

[00:12:17] So you just open up one of their products.

[00:12:19] It's right.

[00:12:20] But you raise an important point that I didn't get to.

[00:12:22] My quick catalog there is not only being multimodal, but being truly interactive, going back and forth.

[00:12:29] That you're having a conversation with it.

[00:12:33] And Siri tried to do that, of course, where you were just asking it.

[00:12:36] You were just making a command.

[00:12:38] And now you're truly having a conversation where the machine understands the antecedent.

[00:12:43] What does that refer to?

[00:12:45] What does he or it refer to?

[00:12:49] And that's a step I left out.

[00:12:52] And I think it's a big step is that it's not so much that you're fooling people.

[00:12:57] It's just that you can do it.

[00:12:58] You can have a seamless interaction.

[00:13:00] That's a big deal.

[00:13:01] Yep.

[00:13:02] Yeah, really big deal.

[00:13:03] And, you know, I've talked about it many times on the show and on Android Faithful.

[00:13:08] Kind of delivering on the legacy promise.

[00:13:10] Like I always think of Google Assistant and the kind of voice interaction with Google Assistant.

[00:13:16] And, you know, in the beginning, it being so magical and interesting that I could just use my voice and speak sort of human-like.

[00:13:24] And it will turn off the lights.

[00:13:26] And, like, that was really cool.

[00:13:27] But then at a certain point, I realized over time it still required a certain syntax.

[00:13:32] And that syntax was going to be enough to complicate things long-term.

[00:13:36] Because as it grew more capable, now I've got to remember the syntax of all these different things that it can do.

[00:13:42] And so I end up doing none of it because it's just too complicated.

[00:13:45] And that's kind of the promise that some of these current voice modes are finally delivering on, which is – and still they're not perfect.

[00:13:53] But they get us closer to, oh, well, I can just talk as a human.

[00:13:57] And it will probably understand or get the gist of what I'm actually saying without me having to think about syntax.

[00:14:04] That's really one big benefit.

[00:14:06] That's really, really smart.

[00:14:07] Really true.

[00:14:08] Yeah, the syntax is ours.

[00:14:09] And that's to say that it's unpredictable and it's human and it doesn't matter and it adjusts.

[00:14:15] Yeah, yeah.
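The syntax problem described here can be sketched in a toy way. This is purely illustrative, not how Google Assistant or any real voice mode is implemented; the command table and keyword matching are invented stand-ins:

```python
# A toy contrast, purely illustrative: a legacy assistant needs an exact
# memorized phrase, while an LLM-style assistant only needs the gist.

LEGACY_COMMANDS = {"turn off the lights": "lights_off"}

def legacy_assistant(utterance):
    """Exact-match parsing: one unexpected word and the command fails."""
    return LEGACY_COMMANDS.get(utterance.lower(), "error: unknown command")

def gist_assistant(utterance):
    """Loose intent detection, standing in for an LLM getting the gist."""
    u = utterance.lower()
    if "lights" in u and any(w in u for w in ("off", "kill", "out")):
        return "lights_off"
    return "error: unknown command"

legacy_assistant("Could you kill the lights?")  # fails: not the memorized syntax
gist_assistant("Could you kill the lights?")    # succeeds anyway
```

The user's burden moves out of their head and into the machine: they no longer have to remember the syntax, because the system adjusts to theirs.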

[00:14:16] Yeah, so you were talking about kind of like the handoff of these different systems and everything, which you would put in here, RAG, retrieval augmented generation as a trend from this year.

[00:14:30] And I will admit that sometimes when RAG comes up, I'm still kind of scratching my head – like this is one of those concepts that I end up feeling a little bit of imposter syndrome around because I don't know that I 100% totally own the definition.

[00:14:48] But it's essentially an LLM handing off to retrieve specific pieces of information from an outside source when it doesn't have the answer itself.

[00:15:00] Is that accurate?

[00:15:01] Yeah, the way I look at it is I think that – and I might be wrong about this and I hope people will tell us when we're wrong so we can learn.

[00:15:09] But I think that, for example, Notebook LM is an illustration of RAG in that when you give it your documents, it returns answers only from those documents.

[00:15:21] But the foundation model doesn't know anything; it can approximate how to read and how to speak and how to summarize and do all these things.

[00:15:33] So the foundation model is taught to do things.

[00:15:37] So I guess I'd look at it this way.

[00:15:39] Got it.

[00:15:40] Okay.

[00:15:41] That makes sense.

[00:15:41] So if you walked into a – I'm making this up before your very eyes, so I'll probably screw it up.

[00:15:48] If you or I walked into a pharmacy, we wouldn't know what we're looking at.

[00:15:52] It's all these strange names.

[00:15:54] We know nothing.

[00:15:55] If you're trained as a pharmacist, then you come in and you understand how to do this and what the rules are and so on and so forth.

[00:16:01] A new drug comes along and you're told to deal with that, you deal with the data of that drug.

[00:16:07] Or try it another way.

[00:16:10] I finally learn the German language.

[00:16:13] So I learn how to speak it.

[00:16:15] I learn how to listen to it.

[00:16:16] I'm too old.

[00:16:16] I never will, but I try.

[00:16:19] And then presented with a new book, I can deal with that book with that data.

[00:16:24] I've learned the skill of reading and translating and so on, and now I deal with just this set of data.

[00:16:30] And the key part of this that strikes me as so important is that we know, know, know that generative AI has no sense of meaning, thus no sense of fact, no sense of truth or falsehood.

[00:16:42] And so hallucination, which we've lived with for the last two years in the nomenclature, is a misnomer.

[00:16:50] It's wrong.

[00:16:51] It's not a hallucination.

[00:16:53] Neither is it a lie because it doesn't know the truth.

[00:16:55] It's just the next predicted word in a sequence with an element of randomness, which means that it does things that are not truthful because it doesn't have a relationship to the real world.

[00:17:11] So given all that, RAG I think is so important because now you can limit it to a corpus of data and you can check it.

[00:17:22] So that's what's so important about Notebook LM, right?

[00:17:25] Is that I've given it 10 PDFs.

[00:17:29] It's summarized, but it tells me where it thinks it got this or that, and I can check it.

[00:17:34] I can go back and see.

[00:17:35] Now I might be lazy and not do that.

[00:17:38] That's my problem then.

[00:17:39] But I think it increases the reliability.
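The RAG pattern described above can be sketched in a few lines: the answer is grounded in a supplied corpus the model can cite, rather than in its own weights. The corpus, the naive keyword-overlap scoring, and the prompt format here are all illustrative inventions, not any real product's internals:

```python
# A minimal sketch of retrieval augmented generation: retrieve relevant
# passages, then build a prompt that restricts the model to those passages
# and makes its citations checkable.

def retrieve(question, corpus, k=2):
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: -len(q_words & set(doc.lower().split())),
    )
    return ranked[:k]

def build_prompt(question, corpus):
    """Stuff retrieved passages into the prompt, numbered so the model
    can cite them and the user can check where an answer came from."""
    passages = retrieve(question, corpus)
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the sources below, citing [n] for each claim.\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )

corpus = [
    "NotebookLM answers only from the documents a user uploads.",
    "Diffusion models generate images by iteratively removing noise.",
    "ChatGPT launched in November 2022.",
]
prompt = build_prompt("When did ChatGPT launch?", corpus)
```

Real systems replace the keyword overlap with embedding search, but the shape is the same: Notebook LM's document grounding, or a corpus-scoped workspace like Perplexity's Spaces, plausibly follow this retrieve-then-prompt loop.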

[00:17:43] The thing that's fascinated me over the year, Jason, on this is that question of the randomness added in.

[00:17:53] And I don't understand that very well.

[00:17:55] Because what it means is you never – I don't know if you never, but as a rule – one does not see the same thing twice if you ask the same question.

[00:18:05] On a recent show, we were playing with one of the tools, and you and I typed in the exact same instruction and the results were different.

[00:18:14] Yeah.

[00:18:16] So if you're going to use this for...

[00:18:18] Different decade, I think, if I'm not mistaken.

[00:18:21] Yeah, right.

[00:18:21] Different examples, different things, right?

[00:18:22] So if you're going to use this for education, which was the example we were using at the time, or journalism, or anything, the maker of the tool cannot assure that answers are going to be the same.

[00:18:38] And if you assign this as a teacher with that purpose, the kids may miss out on a decade.

[00:18:46] Mm-hmm.

[00:18:47] Mm-hmm.

[00:18:48] They might focus their time learning the wrong piece of information, or some of them might get it right.

[00:18:53] Or lacking a piece of information.

[00:18:54] Lacking a piece of information.

[00:18:55] Yeah, maybe lacking a piece of information, even that.

[00:18:57] Or it's expressed differently.

[00:18:59] Or they think they learn it and they want to go back and see it again, but this time it's not the same as they first learned it.

[00:19:05] Those are, I think, real issues for how we integrate this and choose not to integrate this.

[00:19:11] And so RAG, in my view, improves the potential reliability, and it eliminates some of the so-called hallucinations, or at least you can check against them.

[00:19:27] But then it has the added problem of the random answer.

[00:19:31] Mm-hmm.

[00:19:32] And those, I think, are going to be real issues going forward when you don't have consistency and reliability.

[00:19:39] Yeah, I would agree with that.
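The randomness discussed here comes from sampling: the next token is drawn from a probability distribution, with a temperature knob controlling how much randomness is mixed in. This is a toy illustration, not any vendor's implementation, and the example scores are invented:

```python
import math
import random

# Why the same prompt can yield different answers: the next token is
# *sampled* from a softmax distribution over scores, not looked up.

def sample_next_token(logits, temperature=1.0, rng=random):
    """Pick the next token: greedy at temperature 0, sampled otherwise."""
    if temperature == 0:
        return max(logits, key=logits.get)  # always the most likely token
    # softmax over temperature-scaled scores
    scaled = {t: s / temperature for t, s in logits.items()}
    m = max(scaled.values())
    exps = {t: math.exp(s - m) for t, s in scaled.items()}
    total = sum(exps.values())
    r = rng.random()
    cumulative = 0.0
    for token, e in exps.items():
        cumulative += e / total
        if r < cumulative:
            return token
    return token  # numeric edge case: fall back to the last token

# hypothetical next-word scores for "That happened in the ___"
logits = {"1970s": 2.0, "1980s": 1.8, "1960s": 0.5}
greedy = sample_next_token(logits, temperature=0)     # always "1970s"
sampled = sample_next_token(logits, temperature=1.0)  # sometimes another decade
```

Which is exactly the "different decade" problem: at any temperature above zero, two students asking the identical question can be handed different answers.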

[00:19:42] Yeah, some of the examples of really good kind of focuses or places where RAG can be and is proving to be very helpful, like Google Cloud's contact center AI.

[00:19:56] So things like customer support, healthcare, e-commerce, those kinds of things.

[00:20:01] And I imagine because, like, when you're talking about, like, customer support for a business, there's a very specific not, not, what am I trying to say?

[00:20:11] It's not a catch-all sort of corpus of data.

[00:20:15] It's very specific to the business.

[00:20:17] This is the piece of information to advise around.

[00:20:20] And I guess that's just better than the kind of wide-ranging answers you might get if it didn't have that specific information.

[00:20:30] But I think I'm learning more about RAG.

[00:20:33] It still gets a little cloudy for me, but I understand.

[00:20:36] Because there's training, then there's fine-tuning, and then there's RAG.

[00:20:40] Right.

[00:20:41] Right.

[00:20:41] So the training is basically teaching, as I understand, is teaching its skills.

[00:20:46] Okay.

[00:20:47] Right?

[00:20:47] This kid got through high school, knows how to do arithmetic, knows how to do algebra, knows how to do this and that, right?

[00:20:53] Then fine-tuning, which I do not understand at all well, but you're taking a given broad model and making sure that it could do certain more specific things well.

[00:21:05] But I think it's still in the realm of training.

[00:21:08] Sure.

[00:21:08] And then RAG is the data it can call upon in a given session.

[00:21:13] That's the way I think it is.

[00:21:14] That's the way I understand it.

[00:21:15] But I could be wrong, because as we said at the beginning, we are learning.

[00:21:18] Well, yeah.

[00:21:19] And I know with the tool that I often talk about, Perplexity, they have a feature called Spaces.

[00:21:25] And so I can set up a space.

[00:21:28] And within that space, so let's say I set up a space called AI Inside, which I actually have for some of the tasks that I use AI for related to the show.

[00:21:38] I can feed into that space documentation that is specific to any of my interactions in that regard there.

[00:21:48] So it might be an information data set related to the show, or it might be best practices around how to title shows or whatever.

[00:21:57] So that anything that I do within that space, it calls upon that specific corpus of data and potentially more if I ask it to or if I ask it not to.

[00:22:09] It won't.

[00:22:10] That's kind of what you're talking about, right?

[00:22:12] Yeah, well, yeah, that's another issue, which is memory.

[00:22:19] Is, you know, at first with ChatGPT, you went in and every session was...

[00:22:24] Brand new.

[00:22:25] Evanescent.

[00:22:25] Right.

[00:22:26] Yeah.

[00:22:27] You could say, no, that's wrong.

[00:22:28] You should learn this.

[00:22:29] And the next thing when it came back, it didn't learn it because it wasn't doing that.

[00:22:32] It was the model, right?

[00:22:34] So tell me if I'm wrong here.

[00:22:35] But is that spaces...

[00:22:37] Well, I guess I'm curious about it.

[00:22:40] Whether, and I don't think you could know this, whether it operates by having a memory.

[00:22:46] Oh, Jason always asks for this.

[00:22:48] As Google has a memory when, you know, Google says to me, you come to this page once a week.

[00:22:53] The...

[00:22:55] Our rundown.

[00:22:57] That's a memory that it has about me.

[00:23:00] The spaces, does it do that or does it go back to your primary document each time?

[00:23:07] My understanding through using it enough is that it goes back to the document each time.

[00:23:13] That it's still a new session every time I open up a new kind of query or conversation or whatever you want to call it within that space.

[00:23:20] But that the kind of repository of information that it pulls from is whatever it pulls from as well as these specific things that you give it.

[00:23:33] I don't think it has any memory.

[00:23:35] At least that hasn't been my experience where I'm like, oh, wow, it knew that about me.

[00:23:39] It knew that the last five times I've done this thing, I ended up opting for that.

[00:23:43] That would actually be really useful.

[00:23:46] But it doesn't do that.

[00:23:47] It would be, but I think it's a big challenge because what is it remembering, right?

[00:23:51] It's an abstraction of something.

[00:23:53] Totally.

[00:23:53] And that leads to the other question as to whether or not Strawberry 4.0...

[00:23:59] Strawberry is 4.0, right?

[00:24:02] Yeah.

[00:24:02] Was that the...

[00:24:03] Yeah, okay.

[00:24:04] Yeah, I believe so.

[00:24:05] Yeah.

[00:24:06] Is it reasoning?

[00:24:07] Right?

[00:24:07] So that's the other question is, is it reasoning in that sense?

[00:24:10] So that's another piece.

[00:24:12] There's the question of hallucination.

[00:24:19] There's the question of randomness.

[00:24:22] There's the question of memory.

[00:24:25] There's the question of reasoning.

[00:24:27] This becomes interesting when you see all the stacks of the psyche that they've got to address.

[00:24:34] Yeah, fascinating.

[00:24:36] Fascinating.

[00:24:37] And then, you know, another big trend that we've seen is, you know, the kind of agentic or agentive – however you want to say it – as we've talked through that.

[00:24:46] We decided this year we're going to use agentic here.

[00:24:49] I mean, that's the one that I like.

[00:24:51] I like it too.

[00:24:52] Agentive sounds a little snootier, which, you know, as a professor, I like that.

[00:24:56] Last year, yes.

[00:24:57] Yeah.

[00:24:58] But, I mean, if last year was about chatbots, this year has really – I mean, I think there was a part of this year that was about chatbots kind of coming into it.

[00:25:08] And then that has really evolved into, okay, that's not good enough.

[00:25:12] We need the agents.

[00:25:13] We need, you know, the Anthropic computer use that can control your computer, or, you know, the Google Jarvis that can browse the web for you.

[00:25:22] And so many other – Microsoft just had Ignite and they, you know, announced a bunch of very purposeful agents.

[00:25:29] It really seems like that's the topic now.

[00:25:32] Yeah, and I think that what we had on the agentive front is more hype and PR and prediction than reality.

[00:25:44] A lot of talk about agents is the next thing.

[00:25:46] We're all going to have agents.

[00:25:47] We look at the structure we have for agents, but how much it's in use I think is limited.

[00:25:52] And I went to this World Economic Forum AI governance event in San Francisco sometime in the last year.

[00:25:59] And in the discussion, one of the things I learned from folks, there was a Salesforce executive there who says,

[00:26:04] I'm not going to implement agents until I trust the model underneath because you're having it do something in your name.

[00:26:10] That makes sense to me.

[00:26:11] Now, Salesforce has done that.

[00:26:13] They've got agents out now.

[00:26:14] They're probably bragging about it.

[00:26:15] I think what's going to happen in this field is people are so afraid of being left behind,

[00:26:20] they will release things before they know they're ready.

[00:26:23] Lord knows we've seen that.

[00:26:25] We saw it from Google.

[00:26:26] We've seen it from OpenAI.

[00:26:26] We've seen it from everybody.

[00:26:28] And so I think there's going to be a rush to agents that will probably also give agents cooties, because people will trust them to go off and do something and then come back to find, well, it did a crappy job.

[00:26:37] Oh, man.

[00:26:38] Yeah, that's going to be really interesting as more and more of them are actually released.

[00:26:43] Because, yeah, you're right.

[00:26:44] It's been a lot of talk about agents and not as much of, here they are, go to town.

[00:26:51] But that's going to be really interesting to see because you know that they're not going to get everything right.

[00:26:58] And, man, I remember years ago with Amazon Echo devices and Amazon rolling out the feature that you can just like use your voice and say, I need toilet paper.

[00:27:08] And Amazon will know which toilet paper you need and place the order for you and blah, blah, blah.

[00:27:12] Whoa.

[00:27:13] And, you know, even – and like I had a real hard time like trusting that it would ever get it right.

[00:27:19] And so there's a mountain to climb in order to get over this other side and be like, okay, I trust enough now.

[00:27:26] It's kind of like Google Pay on my phone.

[00:27:28] It took me forever to get to the point where I was like, okay, I don't need to bring my wallet with me because I'm pretty confident at this point that it's going to work when I get to the store.

[00:27:36] And that only comes through trial and using it and being proven time and time again that it does exactly what you want.

[00:27:43] And it's going to be real interesting with agents, I think.

[00:27:45] You know, as you're talking, because you started this segment on one agent talking to another agent.

[00:27:54] And I think that is critical.

[00:27:56] I talked recently about seeing a demonstration at a World Economic Forum event where they showed a visual of all these agent blobs and arrows among them.

[00:28:10] And as they asked a question, you saw the first agent ask three other agents questions.

[00:28:15] And as we discussed it, the way it asked was the same way we would if we talked to that agent individually, with language.

[00:28:20] And so as you're talking, it occurs to me it's almost as if the API goes away.

[00:28:27] Right?

[00:28:28] Front end was display to us on the web.

[00:28:31] Back end was databases and database calls.

[00:28:34] And you had to have a specialty to do that, to make that work.

[00:28:37] Right?

[00:28:38] To bridge that.

[00:28:38] And in essence, language is the new API.

[00:28:42] Absolutely.

[00:28:43] If you can ask a question of it, and if it can comprehend the question and give a credible answer, then who needs technical instruction?

[00:28:55] Yeah, somebody had to make it so it can do that.

[00:28:57] But that's really fascinating to me.

[00:28:58] We talked about how all of this is going to eliminate the need for coders or eliminate some code.

[00:29:04] I think that's true.

[00:29:05] Not entirely.

[00:29:06] Not entirely, but yeah.

[00:29:07] Never.

[00:29:08] No.

[00:29:09] But also this notion of the API.

[00:29:12] I hadn't thought of that until now.

[00:29:14] That that starts to diminish in importance.

[00:29:17] Super true.

[00:29:18] Super true.
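The "language is the new API" idea can be sketched in a few lines: instead of a front end calling a back end through a fixed schema, one agent phrases its request in plain language and the other answers in kind. This is a toy illustration only; the agents, item names, and message wording are all invented, not any real agent framework.

```python
# Toy sketch: two hypothetical "agents" exchanging plain language
# instead of a structured API call. Nothing here is a real framework.

def inventory_agent(message: str) -> str:
    """A toy back-end agent: reads a natural-language question, answers in kind."""
    stock = {"toilet paper": 12, "paper towels": 0}
    for item, count in stock.items():
        if item in message.lower():
            if count > 0:
                return f"Yes, we have {count} packs of {item} in stock."
            return f"Sorry, {item} is currently out of stock."
    return "I don't carry that item."

def shopping_agent(need: str) -> str:
    """A toy front-end agent: phrases the user's need as a question in language."""
    question = f"Do you have any {need} available?"  # language replaces the API schema
    return inventory_agent(question)

print(shopping_agent("toilet paper"))
```

The point of the sketch is that neither side needs a shared technical contract beyond language itself, which is the bridge the hosts describe.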

[00:29:20] Wow, we have a lot in front of us.

[00:29:22] Okay.

[00:29:24] Now I'm just looking through this.

[00:29:26] I'm like, oh my goodness, so much has happened this year.

[00:29:28] I am going to take a super quick pause for a super quick break.

[00:29:31] And then we'll come back and we'll talk about more of these trends.

[00:29:33] Because boy, howdy, do we have a lot of them.

[00:29:35] That's coming up in a second.

[00:29:40] In the meantime, my dog Bronson is going to sit here and stare at me and ask me time and time again to take him outside.

[00:29:46] He's just going to have to wait.

[00:29:48] Let's see here.

[00:29:49] Summary to creation.

[00:29:50] You put this in here.

[00:29:51] And I was trying to kind of think around this.

[00:29:53] Like, what did you mean by summary to creation?

[00:29:57] I think it's pretty simple.

[00:29:58] It was kind of like old school and now creation is new school, that sort of thing.

[00:30:01] No, I just mean that I'll go back to Notebook LM, which is that at first its masterful skill was to take the documents and summarize them.

[00:30:10] Then it turned around and it could make a podcast out of them.

[00:30:14] I see.

[00:30:15] I get it.

[00:30:15] That's all.

[00:30:16] So I think we start to see a trend there where it'll make things that seem original to us rather than as derivative.

[00:30:24] Even though the podcast is derivative, that's all it is.

[00:30:27] But it felt like the creation of something new.

[00:30:29] And I think the same with images and the same with people making up stuff.

[00:30:32] I think we'll see that where it's not quite agentive or agentic because it's within itself.

[00:30:41] But I think the question is, how much do we trust it to go make stuff on its own?

[00:30:45] That's all.

[00:30:46] Yeah.

[00:30:47] Yeah.

[00:30:47] Interesting.

[00:30:48] Yeah.

[00:30:48] And I wonder what we're going to see.

[00:30:50] Honestly, even as recently as my time at Twit, the joke would always come up like, oh, AI is going to take your job, Leo.

[00:30:58] You're going to be out of a podcast job because AI is going to do it.

[00:31:00] And we would say that jokingly because any examples that we had at the time of AI reading anything in a human-like voice or whatever was just laughable.

[00:31:12] It was kind of like, oh, yeah, that may happen eventually, but whatever.

[00:31:15] Yeah.

[00:31:16] Far down the line.

[00:31:17] Then Notebook LM came along.

[00:31:19] And it's not like the podcast networks or podcatchers are filled with Notebook LM successful podcasts.

[00:31:26] But dang, it came along and really surprised me, really impressed me with where it's at right now.

[00:31:33] It's another one of those examples, those bellwethers of like, oh, my goodness, this much has happened in the last year or two.

[00:31:39] And here's where we are with that.

[00:31:40] That seems a lot more doable.

[00:31:43] So what happens is that on LinkedIn, I read the other day, our friend Lisa Laporte contributed to one of those year-end blogs of the trends and what's coming up, Podglomerate.

[00:31:55] Jesus, the names.

[00:31:56] Uh-huh, Podglomerate.

[00:31:58] Podglomerate is our new host, actually, our new podcast host.

[00:32:01] Wow.

[00:32:01] God bless.

[00:32:01] I really take my hat off.

[00:32:03] I imagine the beery session where you go through, because I did this for a long time.

[00:32:10] You come up with a name.

[00:32:10] That's a great name.

[00:32:11] That's a great name.

[00:32:11] Oh, it's taken.

[00:32:12] It's everywhere.

[00:32:13] Yeah, every domain is taken.

[00:32:14] So it takes a certain kind of creativity to add up and come up with Podglomerate.

[00:32:20] I salute it.

[00:32:21] So she said for them, and this is part of your job, and frankly, why Twit is a smaller company now, that the best podcast innovation for our network in 2024 was simply adding AI to our workflow, which you did when you were there.

[00:32:36] You do now, right?

[00:32:38] Yeah, definitely.

[00:32:39] Produce clips, show notes, transcripts, et cetera, so much faster.

[00:32:43] It used to take us four to five hours to publish a show to come up with all that information.

[00:32:46] Now it takes us 20 minutes.

[00:32:48] Yeah.

[00:32:49] Does that sound legit to you?

[00:32:51] I mean, I wish anything took me 20 minutes.

[00:32:55] Even with AI, it still takes longer.

[00:32:58] I mean, I think it's a conversation that I've had with myself a lot, and I think we've even talked about it on the show, which is where AI is right now from a creativity standpoint, which is kind of the next thing that we're getting into,

[00:33:12] is if it didn't exist and I needed to do all the things that I do now, no question I would have spent a lot more time doing those things.

[00:33:24] But I probably would have just done less things.

[00:33:26] You know what I mean?

[00:33:27] Now I can use AI to help speed along the process of organizing my show notes or giving me some starting points for some descriptions for a YouTube video or whatever.

[00:33:38] It doesn't mean that the work goes away.

[00:33:41] It just...

[00:33:42] No, true.

[00:33:42] Oh, true.

[00:33:42] At least it makes that point, too.

[00:33:44] It has to be with humans.

[00:33:45] Yes.

[00:33:46] Absolutely.

[00:33:47] And, yeah, so it's a really interesting thing to me because I don't feel like I'm less busy as a result of AI.

[00:33:57] I think the last year, for me, has felt like I'm more busy than I've ever been.

[00:34:06] But yet I'm using AI to simplify things and it just makes me wonder if AI didn't exist, how much more out of my mind busy would I be right now?

[00:34:15] Right.

[00:34:15] And for that, I'm really appreciative of it.

[00:34:18] Yep.

[00:34:19] So I don't know if that answers your question.

[00:34:21] It does.

[00:34:21] That's where my head has been in that regard.

[00:34:24] It is very useful.

[00:34:26] It's very handy.

[00:34:26] And I'm really happy and grateful that I have it.

[00:34:30] And it's still a lot of work.

[00:34:32] You know, it gets you part of the way there.

[00:34:35] It doesn't get you quite all the way there.

[00:34:36] Maybe that's the end game.

[00:34:39] I think that's the forever.

[00:34:41] That really is.

[00:34:42] That goes back to the agent question.

[00:34:43] Would you trust it to do those tasks on its own?

[00:34:45] And right now, you absolutely would not.

[00:34:47] And nor should you.

[00:34:48] Like Opus Clip is something that I use.

[00:34:51] I think maybe Twit uses that for their breakdowns of clips and stuff like that.

[00:34:55] And it's great.

[00:34:56] You can feed it a podcast.

[00:34:57] You could say, come up with 20 moments from the podcast that would make good shorts or good clips or whatever.

[00:35:03] And it's great at identifying them.

[00:35:06] You still need to go in there and you still need to manicure it.

[00:35:09] You still need to take a look at the text that it generates because it'll come up with like social media text.

[00:35:14] Like, you know, give it a title and give it a description.

[00:35:16] And it's always bombastic and you won't believe the blah, blah, blah.

[00:35:22] It's like I can't in my right mind just like give that a thumbs up.

[00:35:26] Like I have to get.

[00:35:27] And so it all takes, you know, it takes time in a different way.

[00:35:30] It takes time in a copy editor way instead of time in a video editor way.
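The copy-editor workflow described here, where an AI tool proposes clip titles and a human pass catches the bombastic ones, could be sketched roughly like this. The phrase list and function names are invented for illustration; they are not part of Opus Clip or any real tool.

```python
# Hypothetical sketch of the review step described above: an AI tool
# proposes clip titles, and a human pass filters them before publishing.

BANNED_PHRASES = ("you won't believe", "shocking", "insane")  # assumed clickbait markers

def needs_human_edit(proposed_title: str) -> bool:
    """Flag AI-proposed titles that read as bombastic clickbait."""
    lowered = proposed_title.lower()
    return any(phrase in lowered for phrase in BANNED_PHRASES)

proposals = [
    "You won't believe what this agent did!",
    "Agents, APIs, and why language may replace both",
]
flagged = [t for t in proposals if needs_human_edit(t)]
print(flagged)  # flagged titles get routed to a human copy editor
```

Even this crude filter makes the trade-off visible: the machine generates, but a human still spends copy-editor time deciding what ships.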

[00:35:34] I don't know.

[00:35:35] So it's interesting.

[00:35:36] Creativity and quality, you know, as we've kind of talked about, really ratcheted it up.

[00:35:40] And I think one of the a couple of the examples that I that I've been following runway has been doing just really impressive things.

[00:35:50] I feel like every time they put out a little update of like, hey, we've got a new feature.

[00:35:54] And runway is like a video generation.

[00:35:57] One of the one of the main players in video generation right now.

[00:36:00] And some of the stuff they're doing is just remarkable.

[00:36:03] Like when I look at it, I'm like, okay, this is going to, and actually through deals,

[00:36:09] I know that it already is, impact how movies are made and everything.

[00:36:13] It's going to influence the tools, the creative tools.

[00:36:17] That's the level of quality that we're at now.

[00:36:20] And Adobe, Adobe, I work with on a regular basis every single day for the work that I do with the podcasts and stuff.

[00:36:27] And yeah, the generative tools totally come in handy.

[00:36:31] Absolutely love it.

[00:36:32] They're doing some really cool things at Adobe.

[00:36:36] Yeah, and it makes me think, too, as you're speaking, Jason, this is the area you know better because you're an actual artist and musician.

[00:36:41] And I don't use the tools that way because I'm not.

[00:36:44] So you have you have a base to judge.

[00:36:46] I think the interesting let me try this out as an idea for I don't like predictions.

[00:36:50] I hate predictions.

[00:36:51] Futurist is the most BS job title on Earth.

[00:36:54] But one thing I want to look for next year, since we are at year end, is that I think there was so much attention at the foundation level, at the Gemini, ChatGPT, GPT-4o, etc.

[00:37:08] I think what's more interesting is at the adaptation, the application level.

[00:37:15] And so you're taking one of those models and making these tools that are tuned enough to a specific task and can do it really well and impressively.

[00:37:26] And you can you can make it better and better and better and better.

[00:37:29] I think that's where we're going to find more interesting things.

[00:37:32] And so I wonder whether the whole force of the ever bigger model, ever faster model with different benchmarks, ever huger server farm with NVIDIA.

[00:37:45] Specialized.

[00:37:46] Yeah, I think we start to calm down that way and say, I have a task.

[00:37:51] Does this do it well or not?

[00:37:52] It does because someone paid attention to it and created a tool for me.
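One way to picture this application layer over a foundation model is a thin wrapper whose prompt encodes one narrow task. This is a minimal sketch under stated assumptions: `call_general_model`, the prompt wording, and `title_tool` are all hypothetical names, with the model call stubbed out rather than any vendor's real API.

```python
# Hypothetical sketch of a task-tuned "application layer" over a general model.
# call_general_model is a stand-in for any hosted foundation-model call.

def call_general_model(prompt: str) -> str:
    # Stub: a real implementation would call a hosted model here.
    return f"[model output for: {prompt!r}]"

def title_tool(transcript_excerpt: str) -> str:
    """A narrow tool: the prompt encodes the task, tone, and constraints."""
    prompt = (
        "Write one plain, non-clickbait YouTube title, under 70 characters, "
        f"for this podcast excerpt:\n{transcript_excerpt}"
    )
    return call_general_model(prompt)

print(title_tool("We discussed whether language is replacing the API."))
```

The design point is that the "tuning" can live entirely in this layer, someone paid attention to one task and wrapped the general model for it, which is exactly the shift described above.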

[00:37:55] And I've talked on the on the show before.

[00:37:58] I wrote a syllabus for a course

[00:38:00] I'm going to teach on AI and creativity.

[00:38:02] It won't be till next fall.

[00:38:03] And I can't imagine what tools the students will have then to express themselves.

[00:38:08] And that's not going to come.

[00:38:09] I don't think at the level of using ChatGPT or Gemini.

[00:38:13] I think it's going to come at this level where you're using RunwayML or Canva or something like that.

[00:38:19] Totally agree.

[00:38:20] Absolutely agree.

[00:38:21] And the ChatGPT and that other kind of layer still has its place because, you know, it's almost like a full service or like a general purpose.

[00:38:34] Well, it is.

[00:38:35] It's exactly that.

[00:38:35] It's a general purpose kind of thing.

[00:38:38] If you don't know where to go, go there.

[00:38:40] But if you know about these things, it's it's really amazing what you can get them to do.

[00:38:46] You just you know, at this point, there are so many of them.

[00:38:49] It's hard to keep track of them.

[00:38:51] You know, it's like you need a Product Hunt just for AI services, you know, or a search engine just for AI services.

[00:39:00] Actually, I'm saying this.

[00:39:01] It probably does exist.

[00:39:02] Speaking of search, this is one that I put in last minute.

[00:39:07] I realized we didn't have it in there is kind of, you know, another trend this year has been the integration of AI into search.

[00:39:14] And, you know, Google's been doing this in its own way with the kind of the generative AI summaries up at the very top of your search results.

[00:39:25] Perplexity, of course, is the service that I tend to use.

[00:39:28] And when I use it as a search engine or when I use it for research, that's when it really that's that's where I find the most power out of a tool like that.

[00:39:38] GPT search being the most I'd say the most recent big name kind of follow on or inclusion into this space.

[00:39:46] But search integrating big time and we've had a year of this kind of developing.

[00:39:51] And I know that, you know, early on you were you were very kind of skeptical of this combination.

[00:39:59] Now that we're at the end of the year and more of this has happened and we've had some time, what do you think at this point?

[00:40:04] I'm still critical and cautious.

[00:40:07] And every time that I use a tool like this on its own, it's not long before I come up with something it gets wrong.

[00:40:17] Yeah.

[00:40:17] Or misses something.

[00:40:18] Right.

[00:40:19] And so I've long said that I don't because when you're searching, you're looking for a fact or you're looking for an answer.

[00:40:26] And I think that this only exposes the weaknesses of these models, and why I wish that they had concentrated first on creativity, our last topic, rather than this.

[00:40:38] However, having said all that, now that Google has its AI answer on the top of the search result page in many, many, many cases, I end up reading that first because it's right there.

[00:40:50] And I am less critical and skeptical of it.

[00:40:57] And I don't know if that's because of exposure, because of experience with it.

[00:41:02] It's not always better.

[00:41:04] It's not always right, but I don't, my odds of hitting an error are lower, I think.

[00:41:10] Because it depends on what I'm looking for.

[00:41:12] Right.

[00:41:13] If I go straight to asking it to give my biography, it'll get something wrong.

[00:41:17] No, I didn't study that.

[00:41:18] Right.

[00:41:18] Whereas if I'm going for something in search, like it is more limited and I think maybe can do a better job.

[00:41:25] So I'm still worried about it.

[00:41:28] And I still think it's a problem.

[00:41:29] You know, I go back to the case, um, that I've talked about with you over the years now, of the schmuck lawyer who used ChatGPT to get case, uh, citations.

[00:41:42] And, and then when I went to cover his case in federal court, he just said, I thought it was a super search engine.

[00:41:49] And I think that's the problem is that especially when it came out and when, when Microsoft put it next to Bing, I think it gave people the impression that it could be as good as a search engine, which is also to say the search engines are pretty damn good.

[00:42:07] Mm-hmm.

[00:42:30] Yeah.

[00:42:30] Back it up.

[00:42:31] They just, they're used to going to Google, putting in the question, getting the link that has the answer, and going to town, you know, going ham, whatever it says.

[00:42:40] And, um, so it really is kind of for, I think a lot of people, the introduction to this type of technology that they've only heard about.

[00:42:49] And it does require a skill set, and you only get that through, you know, uh, interacting with it and running up against these issues. Part of maybe your, um, acceptance or usage of it over time is, you know, that it isn't necessarily informed by it getting

[00:43:09] the question or information right 100% of the time.

[00:43:15] You know, part of that is probably just you getting more comfortable with what it's good at returning and what it's not as good at, or, you know, whether it's giving you what you're looking for at that point.

[00:43:27] And that comes through experience, you know?

[00:43:28] Yep.

[00:43:33] Copyright.

[00:43:34] This was a, this was a biggie.

[00:43:35] And we actually started this iteration of the show, uh, deeply entrenched in kind of the copyright, the, uh, you know, the, the idea of, you know, where does the training data come from?

[00:43:49] Is this fair use?

[00:43:50] You testified in the Senate very early on in the year, which we, uh, which we talked about.

[00:43:57] And there's even video to prove it on YouTube.

[00:44:00] I actually wore a tie.

[00:44:02] You wore a tie.

[00:44:03] Does that not happen very often?

[00:44:05] No, no, no, no, no, no.

[00:44:08] The event where I was last night with Paris was supposed to be black tie.

[00:44:11] No, no, no, no, no, no.

[00:44:12] I had a tie and it was black.

[00:44:14] Yeah.

[00:44:15] That's it.

[00:44:17] That's it.

[00:44:18] We had Rich Skrenta from Common Crawl on, uh, early.

[00:44:22] That was, that was episode one, actually.

[00:44:24] Um, talking all about, of course, you know, the, the open data approach, uh, the information, the data that's fed into, uh, a lot of these, uh, these, uh, models and, and training the models and the importance of, of good, clean data.

[00:44:40] As far as that's concerned.

[00:44:42] And yeah, there's many, many different directions here.

[00:44:44] Yeah, I think that the issue we've got to grapple with, we've got a few to grapple with.

[00:44:49] One is applying copyright, uh, and American copyright law is different from copyright in other nations.

[00:44:56] Here we have a strong doctrine of fair use and of, of whether or not a use is transformative.

[00:45:02] Um, and so I think that, uh, we're going to see many court cases.

[00:45:08] We are seeing many court cases on this.

[00:45:09] We're going to see more and we're a ways away from definitive judgments about this, where the courts are going to have to grapple with very fundamental questions of whether or not training is fair use.

[00:45:20] Whether or not the machine has the right that we all have to learn from something and so on.

[00:45:25] Uh, so there's that, that piece of this.

[00:45:28] But I think there's other questions too, that we've got to grapple with is if we're going to find ourselves using these, uh, generative AI, large language models, uh, just generally machine learning.

[00:45:40] Is it in society's interest for them to be better than worse?

[00:45:45] Is there a level of responsibility that if the entire news industry says, no, you, none of you can use any of our stuff, period.

[00:45:52] And we know that then all the models are going to be outdated and wrong about a good number of things.

[00:45:58] Is that good for society?

[00:46:00] How do we grapple with that?

[00:46:01] Now the problem of course, they're owned by corporations.

[00:46:03] And do we want to enrich those corporations with the work?

[00:46:06] That's another issue that comes in here.

[00:46:08] Where is the fairness that exists?

[00:46:10] And we see various efforts right now, including TollBit and, uh, ProRata.ai and Human Native.

[00:46:18] And I guess another new one now, um, that are trying to, um, um, well, the other one is that, is that news app you showed that we got the video of, um,

[00:46:30] Oh yeah.

[00:46:30] Particle.

[00:46:31] Particle.

[00:46:31] Thank you.

[00:46:32] So these are all efforts to say, okay, we're going to come up with a model where, uh, we're going to be the intermediary or the friend

[00:46:37] of the content creator and the model maker.

[00:46:41] I think right now that's very small scale, uh, but deals being done.

[00:46:45] And then of course we have the licensing deals, but I've made fun of those because they're generally big media companies, just grabbing a bucket of money from the AI company.

[00:46:53] Uh, and buying silence, uh, in litigation and legislation.

[00:46:58] And then we have litigation again, uh, the New York Times and a bunch of authors.

[00:47:03] So this is all coming around the, uh, the questions of how much this new technology can, um, learn from, uh, the collected content of society.

[00:47:20] Yeah.

[00:47:21] And what's right or wrong there and what's best for us or not.

[00:47:23] So I don't think we're, we're, we're just beginning to grapple with this.

[00:47:28] And I don't even know if we, if we rely heavily on a very, uh, literal interpretation of copyright law, which started in 1710, long before, uh, LLMs.

[00:47:40] Uh, is that itself?

[00:47:42] Is that adaptable?

[00:47:43] Is that harmful?

[00:47:44] Um, I don't know.

[00:47:45] So all I'm saying here is, uh, there's a lot of issues, fundamental basic issues, that I think will determine, pardon me, the, um, quality and cost of these models.

[00:48:02] Yeah.

[00:48:03] Yeah.

[00:48:03] There's, there's so much going on here.

[00:48:05] And also at a time when, you know, removed from the influence of AI on the kind of the business of news and media, media.

[00:48:15] As a whole is going through a very important kind of shift and I guess reckoning, or maybe that's the wrong word.

[00:48:23] There's, there's a perception around media being, you know, by some people being ineffective.

[00:48:30] And how does the influence of this emerging technology of AI impact that business when they're trying to kind of find their way anyways, let alone with this new technology that they feel, you know, hey, it's, it's not, it's not okay that it, that it integrates our information to make their product better.

[00:48:51] Or, you know, they're kind of grasping at keeping control, keeping, um, I don't know, keeping its business healthy.

[00:49:01] And, uh, I don't know where that leads, but it's very interesting.

[00:49:05] It's, it's kind of a, it's a culmination of a lot of things at once, I guess is a, is a better way to say that.

[00:49:11] Um, and we had, uh, Sven Størmer Thaulow on early on kind of talking about how, you know, some aspects of news are embracing that.

[00:49:21] And again, you mentioned, uh, particle news as being an, you know, another way to kind of, I don't know, package these things a little bit differently and maybe, maybe work with it instead of against it.

[00:49:31] So I don't know what, it's also interesting timing. Sorry, the Nordic view, I think, is very different.

[00:49:38] I've, I've, I said, I think I said in the Senate, why can't we be more like Norway?

[00:49:41] Uh, because there they collaborated, all the media companies collaborated to create an LLM.

[00:49:47] Um, but it's different in a few ways.

[00:49:49] And they said, we'll deal with the business model later.

[00:49:51] Uh, they're small countries.

[00:49:52] They're highly digitized.

[00:49:53] Um, they, they don't want to have to just deal with English language tools.

[00:49:59] So they were motivated to do something.

[00:50:00] But I spoke today with, uh, an executive of public media in, um, uh, Denmark and he said, uh, yeah, we seem like we're getting along really well right now, but once it comes down to business models, it's going to be very competitive.

[00:50:14] Hmm.

[00:50:15] And, uh, you know, we will be fighting over stuff.

[00:50:19] Uh, so it's interesting to see how, uh, I think, I think cooperation and collaboration gets us a farther along in the development phase.

[00:50:27] If we trusted each other, if we had a structure in which to do that.

[00:50:30] Right.

[00:50:30] And we don't.

[00:50:31] Uh, but I wish we did because I think we have much better development then.

[00:50:34] And I think that then parties like the news industry would be at the table.

[00:50:38] Yeah.

[00:50:39] Um, so.

[00:50:41] Um, fascinating.

[00:50:43] Yep.

[00:50:44] Well, okay.

[00:50:45] Well, okay.

[00:50:46] And now kind of rounding things out, we kind of have this, uh, the area of safety, and a lot falls into safety, security.

[00:50:56] You know, we've got AGI, um, which has been an ongoing conversation in this show as far as whether it's, you know, everyone has a different idea.

[00:51:06] Uh, Sam Altman has, is very, you know, bullish on AGI being right around the corner.

[00:51:13] Um, there's many differing opinions on when or if it will actually happen.

[00:51:18] You put in here, you know, is it around the corner or is it BS?

[00:51:22] And, um, yeah, that's, that's, you know, heading into 2025.

[00:51:27] There, there is no consensus on this.

[00:51:29] It, is it the kind of thing we know it when we see it and maybe we never see it?

[00:51:35] You know what I mean?

[00:51:35] Like, oh yeah.

[00:51:37] Yeah.

[00:51:38] That's the problem.

[00:51:39] I think that, um, these are two definitional areas, uh, safety and AGI.

[00:51:46] And, uh, let's take it in that order.

[00:51:49] Safety, uh, is either the stochastic parrots questions about environment and worker safety and, um, bias, uh, and accountability and so on.

[00:52:05] Or it's the doomsters and the definition of safety, which is we're so powerful we could destroy the earth.

[00:52:11] But if you trust us, we won't.

[00:52:12] So let's, uh, give us power.

[00:52:14] And, and of course I'm being unfair in my expression of it, but screw it.

[00:52:17] Um, and, and so in that it's not even a continuum.

[00:52:21] It's a, it's a drastically different view of what safety is.

[00:52:25] And so when you hear these companies like Anthropic and OpenAI talk about safety, they're talking about the doomer end.

[00:52:30] When you hear researchers like Timnit Gebru and, uh, Rumman Chowdhury talk about safety, they're talking about the practical present-tense ends.

[00:52:39] And it makes it really difficult to have a conversation then in, um, especially in policy.

[00:52:46] And so you now have, there've been stories that Elon Musk is at Trump's ear, uh, unless that ends by the time the show is on.

[00:52:54] And, um, uh, he's going to be talking the doomer thing to Trump and the administration.

[00:53:00] Um, and so that's going to be influential in that way.

[00:53:06] Whereas I'm more on the Timnit Gebru view is that we've got to deal with the present tense issues.

[00:53:12] And so that makes it really, really difficult.

[00:53:14] Now added into that is this whole question of AGI.

[00:53:17] The other definition that doesn't exist.

[00:53:20] And Gary Marcus has tried to push Musk and others to do bets to say, okay, you say AGI is going to arrive by X date.

[00:53:27] Sam Altman most recently said in 2025, it's around the corner folks again.

[00:53:32] Uh, it's a long corner to get around, but we're always at the corner.

[00:53:36] That's a really soon corner.

[00:53:37] Yeah.

[00:53:38] Yeah.

[00:53:38] Um, and so, uh, what Marcus tries to press him on with these bets is, okay, well, for a bet, you've got to have an agreement on what you're betting on.

[00:53:47] And so what's the definition of AGI?

[00:53:49] What can it actually do?

[00:53:50] And there's no good definition of that.

[00:53:53] Uh, no, not at all.

[00:53:54] And I tend to be in the school of Yann LeCun at Meta that says, maybe we're approaching a smart cat, uh, in what these things can do.

[00:54:01] And they still don't have a sense of the real world.

[00:54:04] Um, and so the idea that this goes back to the question to some extent too of agents, agents are a tiny baby step toward what they say is AGI.

[00:54:12] AGI can do any human task.

[00:54:15] Um, well, that means it's got to grapple with and deal with other things.

[00:54:18] Right.

[00:54:20] And, um, we look at the rudimentary level of where that is.

[00:54:23] It's still fascinating.

[00:54:24] It's still great.

[00:54:24] If, if, if this agent can talk to that agent and get you health information for your insurance, cool.

[00:54:30] Right.

[00:54:30] But to think that it's not just, not just human intelligence, but superhuman intelligence, I still say is BS.

[00:54:38] And, and I think that it's used to raise a lot of money and it's gonna, and the higher up that hype cycle goes, the greater the disappointment is going to be.

[00:54:47] And so I think it's somewhat self-defeating, uh, by these companies.

[00:54:52] I think they're setting themselves up for a fall.

[00:54:54] They, they know they have to do it because they want to raise the money now.

[00:54:56] They want the hype now, but I think the disappointment is going to be huge.

[00:55:00] Mm-hmm.

[00:55:01] Yeah.

[00:55:01] I think you're right.

[00:55:02] Um, yeah, it's interesting seeing that. God, when it comes to, uh, AGI being right around the corner, you know,

[00:55:18] I guess what comes to mind for me is I can't help but square that up with the fact that even when I'm using AI right now, you know, again, going back to our conversation earlier, it can do so many things that a human can do, but rarely does it ever do them so well that I'm completely hands off.

[00:55:40] And that's just on a very, you know, one very specific task, let alone, you know, true AGI being all human, you know, capabilities and everything like, oh my goodness.

[00:55:53] Like, I don't even know how or when that's even possible considering where we're at right now.

[00:55:58] Certainly not next year.

[00:55:59] Sorry, Sam Altman to disappoint you.

[00:56:01] And I think that in itself proves that there is no true definition.

[00:56:06] 'Cause if Sam Altman truly believes that AGI next year is capable of everything, he's wrong.

[00:56:14] Like, I'm sorry.

[00:56:14] Like I would, you'd be crazy to put your bet there.

[00:56:18] So his definition must be different than that.

[00:56:20] It just has to be.

[00:56:22] I don't, I don't see how it could be any other way.

[00:56:23] Well, that raises an interesting question then.

[00:56:26] Because they haven't defined it explicitly, that gives them the freedom to diminish the definition as time goes on.

[00:56:35] Yeah.

[00:56:36] Right.

[00:56:36] And say, oh no, we didn't, we never said it was going to do that.

[00:56:40] I don't know.

[00:56:41] And then we do need to round this out, but just real quick, we've got regulation is, is kind of like the final.

[00:56:48] And we've also got open source. We don't have to talk at length about either of these, but just to say that I don't think things change very much as far as where we're at with regulation.

[00:57:00] We've got the EU AI act that's, that's happening.

[00:57:03] It's going to make some big, big impressions.

[00:57:07] Biden's AI executive order here in the U.S., you know, it's looking like the incoming Trump administration intends to overturn that.

[00:57:16] Um, not to mention the influence on AI, at least here in the U.S. and the U.S. government, with Musk so close to Trump.

[00:57:27] That's, this is the part of the regulation conversation that I'm really curious about is like, how does that influence that direct, like in his ear influence impact how AI is regulated, uh, to what degree and everything.

[00:57:42] As we go into 2025, things are going to get really interesting as the, uh, the presidency switch, you know, transitions over.

[00:57:50] That's, that's my hunch anyways.

[00:57:52] Yeah.

[00:57:52] On the regulation front, I think the problem goes back to what we just discussed, which is definition. Statutes have to, they start with definitions.

[00:58:01] And so you've got to say what the issue is and know what you're trying to control.

[00:58:08] And we don't have set definitions of safety.

[00:58:10] We don't have set definitions of the levels of AI.

[00:58:14] And so if you want to say, let's regulate AGI, well, if you can't define it, you can't regulate it.

[00:58:18] Let's regulate safety, right?

[00:58:19] And so on and so forth.

[00:58:20] And so I think that's the challenge for regulation.

[00:58:24] And the other thing I've talked about on the show, but what the hell, it's a year-end, so I'll repeat it:

[00:58:29] I think there's a matrix here of who gets held responsible for the risks of AI.

[00:58:37] Is it the foundational model maker?

[00:58:39] Is it the application layer, an Air Canada putting out a bad bot?

[00:58:43] Or is it the user, the schmuck lawyer who used it wrong or the pornographer who misuses it?

[00:58:49] And we start, as happened in print, with the technologist being held responsible, then the intermediary, the publisher and bookseller, and finally the author.

[00:58:58] And Foucault says that was the birth of the author.

[00:59:00] So I see a similar line here in the regulation of AI.

[00:59:05] And the reflex is to go after the technology.

[00:59:08] And that's going to be futile because it's a general machine.

[00:59:12] It can be made to do anything.

[00:59:13] Its makers cannot anticipate every possible use.

[00:59:16] And it's going to frustrate people, because you think, well, I thought we made this safe.

[00:59:20] We told them to be safe.

[00:59:21] They assured us it was safe.

[00:59:23] Well, then along comes some horrible schmuck who decides to make it do something awful and that they couldn't have anticipated.

[00:59:29] So can we really hold them responsible for not anticipating every bad thing that every bad guy could ever imagine?

[00:59:34] No, I don't think we should.

[00:59:36] So I don't know how we get a saner discussion.

[00:59:40] It's the same thing though with social media, right?

[00:59:43] People want to kill section 230 because they think we must hold somebody responsible.

[00:59:47] We must hold Meta and Instagram and Twitter responsible for bad stuff.

[00:59:51] Instead of the people who make it: us.

[00:59:53] Mm-hmm.

[00:59:55] And now, a rare pitch.

[00:59:57] That's kind of the idea behind my book, The Web We Weave, which is that we've got to look at these things as human networks.

[01:00:03] And I say that about the internet, but I also think it's true about AI because in the end, AI is what we do with it.

[01:00:09] And so if you're going to regulate AI, you've got to regulate our use of it.

[01:00:13] And you've got to say, no, you can't have it make up child porn.

[01:00:17] And if you do, that has to be reported.

[01:00:19] And if you do, you're going to be in trouble.

[01:00:20] Or society says, well, it's not really a child, so you can't ban it.

[01:00:23] Okay, let's grapple with that.

[01:00:24] Let's have that debate in legislatures and decide the definition of what is wrong.

[01:00:31] But then we're all responsible.

[01:00:33] Then we as users are responsible if we ask the machine to do something bad.

[01:00:38] And then other things come out of that.

[01:00:39] Does that mean that same with social media, that we have anonymity or not?

[01:00:43] All kinds of questions that come up.

[01:00:46] So it'll be fascinating.

[01:00:47] But I think people are fooling themselves if they think we can get a regulatory regime that works now.

[01:00:51] And that was the problem with the California legislation that I think Gavin Newsom wisely vetoed because it was trying to do too much too quickly with too little knowledge.

[01:01:00] Because we're all still learning.

[01:01:03] We're all still learning.

[01:01:04] Amen to that.

[01:01:06] All right, so we have reached the end of this, our first of two kind of year in review episodes.

[01:01:13] So we're going to round this out.

[01:01:15] Next week, you will, of course, get the second half, which is more of a look at the winners, the losers, the things that really wowed us specifically.

[01:01:24] Things that we use, products, that sort of stuff.

[01:01:27] It's going to be a whole lot of fun.

[01:01:29] But Jeff, thank you so much for doing this with me today.

[01:01:31] The Web We Weave, as you mentioned, can be found.

[01:01:33] Second plug.

[01:01:34] Thank you, boss.

[01:01:35] It's at jeffjarvis.com.

[01:01:38] I'll get that plug anytime I can.

[01:01:40] In fact, you know, I have it in my hands right now, right next to me.

[01:01:44] So, you know, I back it up, man.

[01:01:47] Love all the work you do in written and podcast form.

[01:01:50] Everybody should go to jeffjarvis.com.

[01:01:52] Where The Gutenberg Parenthesis is now out in paperback as well.

[01:01:55] There you go.

[01:01:57] There it is.

[01:01:57] Gutenberg Parenthesis.

[01:01:59] And also Magazine, the smaller release.

[01:02:02] Smaller release.

[01:02:03] And I mean that just in the size.

[01:02:05] You know, it's a smaller book.

[01:02:08] AIinside.show is where you can go to subscribe to the show in all of the different ways.

[01:02:13] Everything you need to know is there.

[01:02:14] Audio, video versions, the podcatchers, you know, all of our social media presence.

[01:02:19] Everything you need is there.

[01:02:20] And then, of course, the Patreon where you can go and support us each and every month for the work that we do here at AI Inside.

[01:02:29] You can get some bonus episodes.

[01:02:30] You can get ad-free shows.

[01:02:32] All sorts of stuff.

[01:02:33] You can also become an executive producer of the show.

[01:02:37] And you get a free t-shirt when you do that.

[01:02:39] I guess it's not free.

[01:02:40] You're paying for it.

[01:02:41] So it's a t-shirt included with the executive producer tier.

[01:02:44] Jason, Jason, Jason.

[01:02:45] You've got to be a better marketer.

[01:02:46] Free!

[01:02:47] Free!

[01:02:48] Call now.

[01:02:49] Join now.

[01:02:50] And you get a t-shirt.

[01:02:51] All right.

[01:02:52] Dr. Du, Jeffrey Maricini, WPVM 103.7 in Asheville, North Carolina, Paul Lang, and Ryan Newell, who have been our executive producers for quite a while now.

[01:03:02] We really appreciate you and everyone else on Patreon for supporting us each and every week with this show.

[01:03:09] Thank you for allowing us to continue it.

[01:03:11] And thanks for watching and listening.

[01:03:13] We'll see you next time on AI Inside.

[01:03:16] Bye, everybody.

[01:03:17] Bye.

[01:03:17] Bye.