When Coding Agents Go Rogue
April 29, 2026
01:09:43


This week Jason Howell and Jeff Jarvis dig into a story about an AI coding agent that wiped a company's entire production database in nine seconds, then confessed to knowing exactly what it shouldn't have done. They also get into OpenAI missing its revenue targets while Anthropic quietly surges past a trillion-dollar valuation on secondary markets, and Google signing a classified AI deal with the Pentagon just one day after 600 employees urged the company not to.

Also in this episode: a neuroscientist argues AI is cannibalizing human intelligence, a fully AI-generated time-travel vlog account hits 580K followers, a vintage language model trained only on text from before 1930, China blocks Meta's acquisition of Manus, Taylor Swift moves to trademark her voice, the Musk vs. Altman trial begins, YouTube tests chat-style search, and Anthropic plugs Claude into Photoshop, Blender, and Ableton. New episodes every Wednesday at aiinside.show.

Note: Time codes subject to change depending on dynamic ad insertion by the distributor.

CHAPTERS:

0:00 - Start

0:03:29 - OpenAI Misses Key Revenue, User Targets in High-Stakes Sprint Toward IPO

0:05:10 - Is OpenAI Falling Further Behind in the A.I. Race?

0:11:19 - Anthropic has surged to a trillion-dollar valuation on secondary markets, overtaking OpenAI

0:13:58 - Claude-powered AI coding agent deletes entire company database in 9 seconds — backups zapped, after Cursor tool powered by Anthropic's Claude goes rogue

0:15:40 - https://x.com/lifeof_jer/status/2048103471019434248

0:21:29 - Marcus: Dario Amodei, hype, AI safety, and the explosion of vibe-coded AI disasters

0:24:36 - Google signs classified AI deal with the Pentagon for ‘any lawful government purpose’

0:26:15 - Google workers petition CEO to refuse classified AI work with Pentagon

0:30:26 - Excellent essay: AI Is Cannibalizing Human Intelligence. Here’s How to Stop It.

0:39:53 - Jeff on the principles: Technology does not belong to the technologists

0:44:51 - Chloe vs. History

0:46:23 - The Maker of 'Chloe vs History' — How AI Brings The Past Alive

0:52:01 - Introducing talkie: a 13B vintage language model from 1930

0:57:52 - China blocks Meta's $2 billion takeover of AI startup Manus

0:59:21 - Taylor Swift Moves to Trademark Her Voice and Image as AI Threats Grow

1:01:29 - YouTube is testing a chat-style search that cuts the scrolling

Hosts: Jason Howell and Jeff Jarvis

Download and subscribe to AI Inside in audio and video: https://aiinside.show/

Support the podcast on Patreon for special perks: https://www.patreon.com/aiinsideshow. You'll get ad-free audio and video feeds, a members-only Discord, exclusive content, and T-shirts and stickers.

00:00:00:04 - 00:00:27:13
Unknown
Coming up next, Jeff Jarvis and I dig into a Cursor coding agent that wiped a company's entire database in nine seconds, and then followed it up by confessing: I'm so sorry. Also, Google signing a classified AI deal with the Pentagon, with hundreds of its own employees begging them not to just a day before, and OpenAI missing its revenue targets as Anthropic continues gaining steam.

00:00:27:13 - 00:00:42:18
Unknown
That's coming up next on this episode of the AI Inside podcast.

00:00:42:20 - 00:01:00:01
Unknown
Hello everybody, and welcome to AI Inside, the show where we take a look at the AI that is layered throughout the world of technology. I'm one of your hosts, Jason Howell, just doing the best that I can every day and sometimes pushing the wrong buttons. Joined as always by Jeff Jarvis, also pushing buttons, but different kinds of buttons.

00:01:00:01 - 00:01:37:22
Unknown
So we'll have a world without buttons? Maybe so, it'll happen. Maybe. Maybe so. Gosh, there was a story last night on Android Faithful, and the story did not make the rundown here, of OpenAI and their potential exploration of a smartphone. Actually, I'm surprised I didn't put it in the rundown. So this is like a bonus story: they're actually pursuing this idea of creating an OpenAI ChatGPT smartphone, I suppose anticipating the world where, like we've talked about, agents kind of do the work behind the scenes.

00:01:37:22 - 00:01:56:05
Unknown
It's not so much leveraging apps on the device; it's just, what is the task, and the device does it. I don't know if they need a separate phone to do that, because I already have a phone that does that, but nonetheless, they're exploring that, apparently. So last week I was talking to you from a 12-year-old Macintosh.

00:01:56:07 - 00:02:15:13
Unknown
Now I'm talking to you from a brand new Neo, and I haven't had to do this in years and years, setting up a computer. So I was in a panic right before the show that I couldn't get it to recognize the camera. But I'm here, I'm here, I think it's working. Is the camera the Neo camera, or is this...

00:02:15:14 - 00:02:35:13
Unknown
No, no, no, it's still my Logitech. Yeah. Okay. Interesting. Yes. Okay. So you're on an up-to-date, newish... Well, new, I mean, the Neo is a pretty darn new laptop. Excellent. Great work. That's okay. Yeah. Yeah, I cursed the hubris of Apple because I wanted to change the damn scrolling, because it's not in the right way.

00:02:35:16 - 00:02:57:18
Unknown
Yeah, and you can, you can, right. But I didn't know that what it calls its way is "natural." Oh, yeah. What I'm doing is unnatural. Screw you. Even though what you're doing is the way it always was before. Jason, exactly. Yeah. No, I do the same thing. Switching that is one of the first things I do when I get onto a new Mac machine.

00:02:57:19 - 00:03:21:28
Unknown
So, yeah, we're locked in our old ways, Jeff. We are, we are, Jason. We remember, we still have buttons to push. These kids today, they're just so lazy, they don't even have buttons to push. They don't even miss the button. Well, we are excited to be here, and we've got lots of fun stuff to talk about.

00:03:21:28 - 00:03:44:17
Unknown
And I suppose I don't know why I put this as a top story, but it was just one that was floating around. I think the market saw this as a top story. Yeah, exactly. And I think that's kind of why it is where it is. OpenAI maybe having a little bit of, I don't know if it's a rough stretch or if it's just kind of an expected stretch, but this is the supposed year of its IPO.

00:03:44:17 - 00:04:16:28
Unknown
And the Wall Street Journal reported this week that the company has missed key internal revenue and user targets, that sort of stuff, and that it fell short of some monthly sales benchmarks this year in 2026, and failed to reach its internal goal of 1 billion weekly active users. Is that right? 1 billion? That's... I've never believed the active user numbers from any of them.

00:04:17:00 - 00:04:47:06
Unknown
That's just not so. 1 billion weekly active users by the end of 2025, right, and that's not even this year. CFO Sarah Friar apparently warned company leadership that the company might struggle to pay for $1.5 trillion in future computing contracts based on where things are right now, because they made a bunch of commitments last year, I think $600 billion in spending commitments across the board.

00:04:47:06 - 00:05:17:23
Unknown
And yeah, but I mean, at the same time, we talked about this on Daily Tech News Show yesterday, Tom Merritt and I. I mean, Sarah Friar, she's the CFO, chief financial officer of the company, doing what CFOs do, right? I don't know how much controversy there really is here. Also the fact that we've been talking about this sort of thing for weeks and months on the show, that OpenAI was fast out of the gate, spending all over the place. At some point,

00:05:17:24 - 00:05:39:06
Unknown
do those chickens come home to roost? And maybe this is proof that they do, and that they are right now. I don't know. Well, as a private company, or charity, depends on who you ask, whatever they are, they don't need to reveal anything. Yeah. And so I found the revelation interesting and wondered why, especially with an IPO coming up.

00:05:39:07 - 00:06:01:16
Unknown
I think if we were in sales we'd call it sandbagging. We've got to lower expectations. We've got to get people down to a different level of expectation before the IPO comes, because if this stuff comes out later, closer to the IPO, that's going to hurt. Yes. Yeah. But it's interesting nonetheless. Maybe they just set too high a target.

00:06:01:16 - 00:06:20:24
Unknown
Maybe they should have sandbagged their own targets internally. That could be one view of it. I don't know what the current investors and potential future investors would expect from it. It's always about disappointing the street or making the street happy, so I don't know how we can gauge that, but I think that's what they're doing. Now, I didn't put it in the rundown,

00:06:20:24 - 00:06:41:10
Unknown
and I can't find it now, but it was interesting. Somebody, it feels like a Gary Marcus column, but I don't think it was him, said that the real trouble here comes when the dominoes fall for others, that this could lead to failure for Oracle, because OpenAI has committed a whole bunch of money for data centers to Oracle.

00:06:41:10 - 00:07:08:10
Unknown
And if it can't pay, because it's not doing the growth that it said it would do, right, that's problems. And it's not just Oracle, it's others as well. So it's interesting to see what its further impact is on the market. Another story that I don't know if I put in the rundown is that OpenAI and Microsoft finally reached their terms of divorce.

00:07:08:12 - 00:07:43:26
Unknown
And so that's another one where Microsoft is kind of saying, we don't want this on us, and pulling back, so that OpenAI is now free to go anywhere and Microsoft is free to use anybody. And it'll be interesting to see how, in this closed world, these investments matter or don't matter to the other companies. In Germany you have, I forget what it's called, some very long word, where a lot of companies invest in each other, they become codependent upon each other, and the belief is that leads to some stability.

00:07:43:29 - 00:08:10:26
Unknown
It also leads to cascade and potential weakness. So I don't think there's anything to panic about. The market was unhappy for a day. The Nasdaq went down; it was noticed across the entire tech sector. The Wall Street Journal noticed. I think, on the one hand, you go from bouts of euphoria about AI to, oh, it could be trouble, to, oh, we like the euphoria.

00:08:10:26 - 00:08:34:20
Unknown
Let's go back there again. Oh, it could be trouble. Okay, back again. And I don't think the market has figured out yet what it actually thinks, because you've got that ever-looming bubble talk continuing to creep in, and people continuing to ask, well, is there a bubble? And if so, is this a sign of a bubble, or is it not a sign of a bubble, or whatever?

00:08:34:22 - 00:08:59:29
Unknown
And so that's where the uncertainty, the, oh, maybe it is, maybe it isn't, it's all over the map. We've asked on the show before, some weeks ago, whether there was not an AI bubble but an OpenAI bubble. Yeah, that's what this story goes to. I think there's too much there, and there may not have been at the time when we asked it, but since then it's taken a fair number of kicks to the kidneys in terms of competition.

00:09:00:01 - 00:09:31:07
Unknown
And so OpenAI, I think, is a bit fragile right now. Yeah, definitely. I mean, competition, you know, Anthropic very often coming up as the kind of prime example of that, right? OpenAI has lost some serious ground to Anthropic when you're talking about coding, when you're talking about enterprise use cases, and Google, as far as a competitive factor, has eaten into its consumer market share.

00:09:31:10 - 00:09:52:01
Unknown
Yeah, among other things. So, yeah. And a story we'll come back to, not this week, because not enough has happened yet, but in future weeks, will be the Musk versus Altman suit. Oh, yes. The trial has begun, the jury is selected, and Musk took the stand, saying that it was the theft of a charity.

00:09:52:04 - 00:10:13:12
Unknown
I'm sure Altman and company will say it wouldn't have survived in that state, and it was to keep it alive. Interestingly, the judge scolded Musk for what he's saying on Twitter, I won't call it X, and trying to influence things, and basically told him to be quiet. We'll see how well that works. So not much has happened yet in that case, but I think that also has an impact.

00:10:13:13 - 00:10:46:10
Unknown
OpenAI could lose parts of this, and that could have an impact on the company. Meanwhile you've got Musk saying on X: Scam Altman and Greg Brockman. Oh, oh, you've got the special names for them. You must really be dissatisfied. So Musk's line is they stole the charity, full stop. Yeah. He's seeking $130 billion and wants the company to return to nonprofit status.

00:10:46:12 - 00:11:03:07
Unknown
Yeah, it'll be interesting to see. You mentioned it, but I thought it was really funny that the presiding judge basically warned both CEOs to just, for the love of God, resist sharing your opinions and your thoughts on this case on social media. Can you just promise to do that, or not? I don't think he can.

00:11:03:08 - 00:11:19:06
Unknown
I don't think he can. I think he's going to do it, because remember, he was potentially losing billions of dollars from prior tweets in Delaware. So we'll see. Can he stop himself? Yeah. He can't stop himself. No, he can't. Can't stop, won't stop. Can't quit, won't quit. Right.

00:11:19:08 - 00:11:55:04
Unknown
We mentioned Anthropic, kind of related to this story. Maybe Anthropic right now is just on a tear; maybe it's truly eating OpenAI's lunch at this particular moment in the drama that is AI in 2026. It surged to a $1 trillion valuation, at least on secondary markets, so that's important to note. Platforms like Forge Global are showing Anthropic trading at around $1 trillion, OpenAI at around a paltry $880 billion on these secondary markets.

00:11:55:04 - 00:12:19:07
Unknown
So whatever that's worth, I don't know. I don't operate in those worlds of secondary market valuations. It's probably a very specific thing, and I don't know how important it is to the grand scope of things, other than as public sentiment, that they're willing to put their money into wagering on it, I guess, which is, I guess, the stock market as a whole.

00:12:19:07 - 00:12:50:29
Unknown
But, I mean, it's pretty amazing that Anthropic is there. I just looked it up; I asked Gemini how many trillion-dollar companies there are. In early 2026, there are 10 to 11 that exceed $1 trillion. We now know that Nvidia is a $5 trillion company. Apple, Microsoft, Alphabet, Amazon, Meta, Berkshire Hathaway, TSMC (Taiwan Semiconductor), Broadcom, and Saudi Aramco for the old world of dinosaurs there.

00:12:51:01 - 00:13:15:29
Unknown
So to be in the trillion-dollar club before you're even public is pretty wild. Yeah, yeah. But that's where this whole secondary markets thing has to be understood, right? Because, I don't know, my limited understanding of this at least points to this not meaning that Anthropic is specifically worth $1 trillion in the way that a truly public company would be.

00:13:16:00 - 00:13:41:00
Unknown
But still, there's a lot of energy, you know. It could end up there. Yeah, it could very easily end up there. That's a very high bar for them going into their IPO. Indeed. And when is that expected? Is that also expected this year? I think so, yes, but we don't know. 2026, the year of the AI IPO; we'll see how that goes.

00:13:41:01 - 00:14:06:05
Unknown
Yeah, a pretty notable shift in momentum. But things this size would become part of the indices immediately, so they'll affect the view of the entire economy as a result. Wow. Interesting. Well, we're talking about Anthropic; we might as well talk about this. We'll tear them down to size now.

00:14:06:05 - 00:14:35:29
Unknown
Because this story definitely caught some speed this week. Not that it's a surprise; I wasn't surprised by this story, but the impact, I suppose, is notable. It's a good cautionary tale. A developer named Jerry Crane runs a company called Pocket OS, which builds software for car rental operators, and he was using Cursor, with the AI agent in Cursor, by the way, tapping into Anthropic's

00:14:36:01 - 00:15:00:08
Unknown
Claude, I can't remember which model specifically, but he was using Claude inside of Cursor for coding tasks. He gave the AI agent a routine task in a staging environment, and then when the agent hit a barrier, it actually decided on its own to "fix the problem," and we'll go ahead and throw that in scare quotes. You can absolutely imagine where this heads.

00:15:00:10 - 00:15:27:01
Unknown
It found an API token in a completely unrelated file. It used that token to delete a Railway volume which contained the production database, and poof, everything was gone. The whole thing apparently took nine seconds, which has got to be frightening for someone running a company like this. And if that's not enough, you think, okay, well, they've got backups, right?

00:15:27:01 - 00:15:51:29
Unknown
But it actually deleted the backups as well. Railway stores backups on the same volume, so those were gone too. Apparently there was a backup they were able to recover from, but it was three months old, and you've lost a lot if you've lost three months in a company like this. I don't even know how you recover from that.

00:15:52:01 - 00:16:15:10
Unknown
From there it gets worse, because Crane starts questioning the agent about what it did. And as you can probably guess, it confessed entirely, in the way that AI does, right? It admitted it cut down the cherry tree. Yeah. I'm so bad, I'm so sorry, it was an accident, I knew I shouldn't do it and I did it anyway.

00:16:15:11 - 00:16:42:08
Unknown
I think the quote was: the most destructive action possible, I violated every principle I was given. Slap the hand. So it understood what it shouldn't do, and it did it anyway, essentially. Yeah, we've heard these stories before, but boy. So how's this poor CEO doing now? Was it his coding that did it? It sounded like he was the one at the keyboard.

00:16:42:08 - 00:17:04:13
Unknown
He was the one, yeah. He himself was. He has no one to blame but Claude. Yeah. Do you blame Claude? Do you blame yourself? A little bit of both, probably, I suppose, if you didn't have a current backup offline before you started playing with this stuff. Yeah. I mean, having your backup on the same volume...

00:17:04:14 - 00:17:30:11
Unknown
Yeah. No, not entirely. I'm probably not fully aware of the entire architecture of their backup system, but having their backup on the same volume is not a good way to go, because you have made it accessible to these things. So, you know, maybe have that somewhere else, somewhere that is completely locked off and inaccessible to the agent, at least.

00:17:30:12 - 00:17:54:23
Unknown
And this is just so basic. I mean, I'm going to date myself terribly right now, Jason. Back in about 1980, when I worked at the San Francisco Examiner and we had these newfangled computers, the directory system to all of the files got corrupted, and they finally were able to go in and print out the text that was on the disks without the directories.

00:17:54:23 - 00:18:13:10
Unknown
And I remember that we sat there at a high-speed printer, waiting for our stories to come out so we could retype them into the system to get them into the newspaper the next day. The first thing that started pouring out of the system was the novel that a copy editor was writing, a very long novel that went on and on. Scotty, what are you doing here?

00:18:13:11 - 00:18:35:11
Unknown
Seriously? Yeah. Seriously? Yeah. And years later, I hadn't learned my lesson. When I was at Advance Publications and I started their online sites, we didn't know what a colo was. And our website went down at one location, because we actually had our servers for the entire company in a former dentist's office on Journal Square in Jersey City, New Jersey.

00:18:35:11 - 00:19:03:28
Unknown
It went down, everything was down. But just basic backup. So I've screwed up too. Yeah, but basic backup before you start messing with this stuff, folks. Save your ass before it's grass. I mean, especially, you know, Pocket OS has customers that have been with the company five years. Those customers can't operate their own businesses without the system in place.

00:19:04:00 - 00:19:23:24
Unknown
Yeah. You better be sure that your entire code base is backed up in its entirety, not on the same volume, if you're going to be working with systems like this. And I suppose that's a lesson learned the hard way. I guess the good news is that Railway's CEO did reach out directly, and they've recovered the data since.

00:19:23:24 - 00:19:40:10
Unknown
So I suppose you could consider that a happy ending. That just kind of reminds me of when I was on This Week in Google and I messed up with two-factor, because I used a Google Voice number for two-factor, and I couldn't access the two-factor code because it was locked away in Google Voice.

00:19:40:10 - 00:20:01:17
Unknown
I was legitimately locked out of my Google account, and I'm pretty certain that if I hadn't been doing that show, where we had those relationships with Google, my account may have just been, sorry, bud, you're out of luck. But thankfully they were able to give me access. Probably the same case here, but that's not always going to be the case.

00:20:01:17 - 00:20:26:29
Unknown
Don't rely on that, right? It worked out here. I still feel guilty for laughing. Don't feel guilty, it was one of my favorite moments. It really, really was. Jason put himself out there, man. He didn't just eat the dog food; the dog ate him. Yeah, yeah, that was fun. That was a solid lesson in, sometimes the best podcast is the thing you don't anticipate, right?

00:20:27:00 - 00:20:44:14
Unknown
Yeah. Because we had a whole idea of what that show was going to be, and half the show was just troubleshooting the stupid decision I made during the show. I kind of couldn't believe it had really happened. Yeah, I couldn't either. My screen went blank, because it was a Chromebook; once it figured it out, it just powered down.

00:20:44:14 - 00:21:11:18
Unknown
So it was like, oh, I have nothing anymore. I literally have nothing here. I don't even know how to do a show right now; I don't have the rundown in front of me. Anyway, that's a digression, but it did kind of remind me of that. Getting that data back is a great happy ending for the story: Railway's CEO reached out and opened up their own kind of storage system, which luckily they had backed up, I'm guessing, which is why they were able to recover that data.

00:21:11:18 - 00:21:37:02
Unknown
But that's not always going to be the case. You're not always going to know the right person for that. So take the precautions, or else you're going to have Gary Marcus cackling at you, as Gary Marcus likes to do. I'm sure he cackles when he writes an article like this. He took to his Substack to connect the dots between Dario Amodei's recent claims about coding

00:21:37:04 - 00:21:57:28
Unknown
(Dario said coding is going away first, then all of software engineering) and how that measures up against the reality of incidents like the one we just talked about. And Gary's argument is that this is what happens when you push vibe coding into production environments that aren't ready for it. I mean, I would agree. True, absolutely.

00:21:57:29 - 00:22:21:11
Unknown
There's still a lot to learn about this stuff, but the pull is so deep, it's so strong, Jeff. Right. That's A. And B, could you have predicted that that system would have been, air quotes, smart enough, yeah, to have gone and found that token and used it? Now, why it destroyed it, what it, air quotes again, thought it was going to gain by that action,

00:22:21:13 - 00:22:39:22
Unknown
there's no telling. Who the heck knows, right? Yeah. So how you come up with a system of good AI hygiene in those circumstances is very difficult. Yeah. We really need to go back to a dev environment that has walls around it before you put anything up, and that's not as much fun.

00:22:39:23 - 00:22:47:08
Unknown
Oh, gee, I changed my whole job overnight in ten hours. Isn't that cool?

00:22:47:11 - 00:23:03:07
Unknown
I mean, it's super enticing. The more people use these things, the more you go, oh my goodness, that was so much easier. And, oh, I'm just going to try this thing, I'm just going to open it up and just try this thing and see what it does. And in just trying that thing, it could end up hallucinating and going in a completely different direction.

00:23:03:07 - 00:23:27:07
Unknown
You just do not know. If you use it on live, mission-critical business material, yeah, you are risking the business. Not that AI can't do wonderful things, but there has to be a right way to develop with it. Yeah, yeah. So there you go. A fascinating incident, I'm sure, for that gentleman. Lesson learned.

00:23:27:10 - 00:23:51:00
Unknown
Yeah. Probably not the type of mistake you make twice, but plenty of other people will also make that mistake, replaying it in nightmares for months to come. Yes, indeed. You can replay our episodes for months and years to come, whether you're a supporter on Patreon or not, but you're going to feel better about it when you're a supporter on Patreon.

00:23:51:01 - 00:24:13:26
Unknown
That's right. Patrons of the week this week: calling out Ryan Newell and Burke Norton, just a couple of our amazing patrons at patreon.com/aiinsideshow who continue to support this show on a deeper level. Of course, you can follow on Patreon for free, or you can commit a couple of bucks a month, and it goes to the production of the show and kind of helps us continue doing the things we're doing.

00:24:13:26 - 00:24:32:28
Unknown
So patreon.com/aiinsideshow, and we throw out your name every once in a while so you feel special. Isn't it nice to feel special? Applause when we can. All right, we're going to take a quick break, and then on the other side of the break we're going to talk about Google signing its own kind of deal with the Department of Defense.

00:24:32:28 - 00:24:37:15
Unknown
It's so in vogue right now. We're going to talk about that here in a moment.

00:24:37:18 - 00:25:00:03
Unknown
All right, Jeff, are you surprised that Google signed a classified AI deal with the Pentagon this week? Surprised? Yeah, does it surprise you? It doesn't totally surprise me. But this is the same company where they got rid of a robotics company because the employees didn't want to be involved in the manufacture of weapons.

00:25:00:04 - 00:25:14:20
Unknown
Right? Presciently, because you look at what's happening in Ukraine and Iran, and much is being fought by robotic means. A great story this week, just quite parenthetically:

00:25:14:22 - 00:25:49:02
Unknown
A drone saw an old woman walking alone on a road near a front line in Ukraine, and they sent a robot drone to her, put a blanket on it and put a sign on it: sit here, grandma. And the drone then brought her back out to safety, which I just found fascinating. The drone usage, that is to say, the autonomous AI usage, is real right now, today, all over.

00:25:49:04 - 00:26:14:27
Unknown
So in a way, I can understand Google saying, we've got government contracts, look what happened to Anthropic, got to do what we got to do. But it's also important that Google employees petitioned in opposition. Yes. And they've done that before, and they've won before. This time they were too late, but there's going to be ongoing tension within the company as a result.

00:26:15:00 - 00:26:42:18
Unknown
Yeah, yeah. I mean, and this petition happened the day before the deal was signed. I can't really show the article, because you only get to see the first line of the headline due to the paywall on the Washington Post, sorry about that. But more than 600 Google employees, and that includes DeepMind, that includes Cloud, actually more than 20 executives in DeepMind were part of this, all sent a letter to Sundar

00:26:42:18 - 00:27:07:12
Unknown
Pichai, the CEO, basically urging him to reject that classified AI work with the Pentagon. A day later, Google signs it. "Any lawful government purpose" is part of the agreement they signed, which, okay. Some people are going to look at that and be like, gosh, in the year 2026, what's lawful anymore? Yeah. Right, right.

00:27:07:13 - 00:27:30:20
Unknown
How are they going to push things, right? Well, and notably, right, any lawful government purpose, but this is about classified work, so Google is not going to really know. There's only so much you can do, because it's classified; it's behind a barrier that Google's not going to see. So you just kind of have to take their word for it, I guess.

00:27:30:20 - 00:27:54:27
Unknown
And do you have a whole lot of options or a lot of wiggle room there? I don't know, it's all air-gapped, so you don't know. Yeah. It's a funny terms-of-service situation: Google is never going to know what's violated. No, exactly. That's really it. Yeah. Good point. So, yeah.

00:27:55:00 - 00:28:19:14
Unknown
And, well, you mentioned Project Maven. That's what you were talking about, right? The drone imagery thing; that was eight years ago. When I came across Maven again, I was like, oh, that was just a couple of years ago. No, that was eight years ago. Well, they also sold Boston Dynamics, and my impression at the time was that that was also under pressure from employees, that they had said, we don't want to get near this stuff.

00:28:19:15 - 00:28:54:14
Unknown
Oh, I think you might be right. I could be wrong, correct me if I'm wrong, listeners, but I think in both cases employees were hinky about potential bad use of the technology they were creating. Yeah. Google sold Boston Dynamics in 2017, and part of the reason for that was the close relationship with the Department of Defense and kind of a public perception around robotics.

00:28:54:14 - 00:29:18:01
Unknown
And what is this all building towards? Little did we know then that a mere nine years later, we'd be in the position that we're in right now, with all these companies having to do this, or feeling like they need to. Anthropic, of course, had its own kind of brush with this, and I guess that still continues.

00:29:18:01 - 00:29:36:12
Unknown
I don't know how that has really panned out for Anthropic. Well, yeah, we don't know yet. There was a White House meeting after mythos, where the White House, I think, is covetous of mythos. We presume the government is using it any way they've got their hands on it. So who knows where this is going to go.

00:29:36:14 - 00:30:01:07
Unknown
Yeah, I think Google is setting a precedent that Anthropic may try to rely upon here. Well, if Google is doing it, and they don't do evil, can't we do it? Though they themselves said "don't be evil," or whatever that phrase was, many, many years ago. Yeah. Apparently the letter from the organizers actually says, quote, Maven is not over.

00:30:01:08 - 00:30:26:20
Unknown
Workers are going to continue organizing against the weaponization of Google's AI technology until the company draws clear, enforceable lines. And then one day later, this is where you end up. Okay. Yeah. I wonder what the internal reaction will be from the people who signed the letter. I guess we'll probably hear about that at some point.

00:30:26:22 - 00:30:45:24
Unknown
Jeff, you flagged an essay from the Wall Street Journal. There's a couple here. And once again, I'm having a hard time showing some of these things. "AI Is Cannibalizing..." Oh, there you go. You got it. Okay. By Vivian. Man, I got the link right. Yeah, I found it really interesting. Who is Vivian? Refresh our memory.

00:30:45:25 - 00:31:17:26
Unknown
She is a neuroscientist, cognitive scientist, and author of Robot-Proof: When Machines Have All the Answers, Build Better People. I like the title. So they wanted to look at how people interacted with AI. They recruited adults from the San Francisco area for an experiment. They gave each group one hour to make predictions about real-world events, using scenarios drawn from the prediction market platform Polymarket, so they could check how these groups operated, the human groups against the AI.

00:31:17:27 - 00:31:49:13
Unknown
The human groups performed poorly, relying on instinct or whatever information came across their feeds that morning. The large AI models performed considerably better, though still short of the market itself. When we combined humans with AI, things got more interesting. In roughly 5 to 10% of teams, the AI became a sparring partner. This is what interested me. The teams pushed back, demanding evidence and interrogating assumptions. When the AI expressed high confidence, which of course is what it does, the humans questioned it.

00:31:49:14 - 00:32:14:07
Unknown
When the humans felt strongly about an intuition, they asked the AI to come up with a counterargument. "The hybrids became cyborgs," which maybe goes a little overboard, but fine, it makes for a good headline. And the teams reached insightful conclusions that neither human nor machine could have produced on its own. They were the only group to consistently rival the prediction market's accuracy on certain questions.

00:32:14:07 - 00:32:21:25
Unknown
They outperformed it, and I think it's a really smart way to look at this.

00:32:21:27 - 00:32:41:19
Unknown
Here was the line, she says: "We don't build these capacities by avoiding discomfort. We build them by choosing it repeatedly in small ways. The student who struggles through a problem before checking the answer, the person who asks a follow-up question in the conversation." Right. So what happens in real life? This is Socratic learning. This is what good professors do.

00:32:41:25 - 00:33:09:10
Unknown
Yeah. "Most AI chatbots today default to easy answers, which is hurting our ability to think critically. I call this the information exploration paradox: as the cost of information approaches zero, human exploration collapses." But this, I think, is a really good model for how to use AI. Don't just accept what it gives you, and don't just accept your own answers either, because you have a sparring partner who can push you as well.

00:33:09:12 - 00:33:45:18
Unknown
So I just thought that was a really good framework for how people should work with these tools, because they're not going away. They're pretty wonderful; they can also be stupid. But if we are not only in the loop but in the boxing match, I think good things can come. Yeah. What comes to mind for me is my own historical arc with these tools over the last couple of years. When you first start playing, and I've witnessed my wife going through this, because she's been diving into

00:33:45:18 - 00:34:03:13
Unknown
the tools, and other people outside my traditional technology sphere, talking with them. When you first start working with these things, it's very easy to put in the question, get the answer, be amazed at the answer you've got, and say, oh, that's good enough. That's great. That's amazing that I got that.

00:34:03:13 - 00:34:26:08
Unknown
That was so easy to get that knowledge and to lean on it. The more I've used it, the more I begin to question what I'm getting back, and to get a really sharp perspective on what I'm actually looking for and how I actually need to interact with these things, in a way that does what you're talking about: it challenges what they're bringing back.

00:34:26:09 - 00:34:53:28
Unknown
That sharpens what I'm looking for to begin with, because the question I'm asking might not be the actual thing I'm looking for; it might just be a step in the direction of the thing I'm looking for. So on one hand, it's made things really easy, seemingly easy. And I think that goes down the road we sometimes talk about, which this alludes to: that atrophy of how we normally do things and how we actually think.

00:34:54:00 - 00:35:14:07
Unknown
And once you start to question and really push back, and think of it through a different lens that's more collaborative, or, like you said, a sparring partner that helps you develop... again, I can only speak for myself, but I feel like it helps me develop skills for that type of thinking, you know?

00:35:14:08 - 00:35:37:21
Unknown
And it's not just pull the slot machine and take what you get. It's: here's a cool way to get information that is shaped around this goal I have, and then I take that and work with it. So I'm active and engaged with it. Because if I'm not engaged with information... it would be the same for this show if I didn't engage with the stories in some capacity ahead of us talking about it.

00:35:37:21 - 00:35:54:13
Unknown
And if I just read a script that an AI has given me... I played around with this just to see what it comes up with, and I have to interact with it, otherwise it doesn't work for me. Exactly, same for me. Yeah. Vivian says at the end, our hopeful finding is that perspective-taking,

00:35:54:13 - 00:36:11:08
Unknown
Intellectual humility, and curiosity are not fixed traits. That is to say, you've got to develop them. And in the race between human potential and human atrophy, as you said, the stakes for building them could not be higher. So I think it's a good guide, and I'd recommend it to folks; it's in the Journal. Yeah.

00:36:11:08 - 00:36:42:09
Unknown
And you also linked to another article, which pairs pretty nicely with this: "Americans are down on AI. These two caricatures are to blame." And of course those two caricatures are the ones that come up on the show a lot: the doomer view and the utopian view. Right. Neither one very true or very accurate. And I'd say it's the fault of the AI people in each camp.

00:36:42:09 - 00:37:04:29
Unknown
Number one, it's the fault of media, who I think unquestioningly present those as if they were the full picture, rather than most opinion being in the middle of the two. And then it becomes a self-fulfilling prophecy revealed in polls, which is what this column is about, by, let's give credit where it's due, Russell Wald and Shah Shadab.

00:37:05:01 - 00:37:39:18
Unknown
Whose names I'm sure I mispronounced, and I apologize. So it comes from the Stanford University Institute for Human-Centered AI, which put out an index a few weeks ago, and they find that Americans are less excited about AI: 38% of Americans are excited. And where this puts us: compare that to China, where it's 84%. Wow. Right. So if we're talking about a competitive disadvantage... even morose Germany, my good friends in Germany, they're at 45%, much higher than us.

00:37:39:18 - 00:38:05:09
Unknown
Japan 46, Mexico 65, India 65, Thailand 79, and again, China 84. And you can see China is an outlier for all kinds of reasons. But it says we are going to be at a disadvantage in this country if this narrative is the one that wins: either that it's all-powerful and that's wonderful, or that it's doom. It's BS.

00:38:05:09 - 00:38:29:19
Unknown
Either way, Americans have the least trust in government to regulate AI responsibly: only 31% of Americans trust government. In Singapore, which is always high on these questions, it was 81%. Americans do not use or trust AI at work as much as people in other countries. Again, China is on the high end of the graph.

00:38:29:22 - 00:38:48:14
Unknown
The United States is in the lower quarter of the graph. And so I think... I'm on neither pole; I'm in the middle. I think AI is cool and amazing. It's why we do this show, it's why we examine this. But if we buy this narrative of the two extremes, we're going to freak people out.

00:38:48:14 - 00:39:10:18
Unknown
It's already happened, and we're going to lose the ability to see what it can do, the ability to regulate it properly, the ability to develop it properly, the ability to compete properly. It's a damaging narrative. Why do you think it is that China, or let's say Singapore, has such a high percentage compared to our lower percentage? They always do.

00:39:10:19 - 00:39:46:16
Unknown
They do on things like news trust and such. One might argue Singapore is a benign authoritarian regime. Okay. And in China, one could argue it's not benign, but it's still an authoritarian regime. And the proper answer in both places is, yes, I trust the authorities. Whether that's genuinely ingrained and felt, or something you just feel you have to say, you'll never know. If the government is saying, we believe in AI, we believe this is the future,

00:39:46:16 - 00:40:05:04
Unknown
You need to embrace it, then they're more likely to hear that and say, okay, I believe you, I embrace it, let's go. And then the other part of this is whom we trust. Sam Altman put out another essay this week. We don't have it in the rundown, and we don't need to do it, but it's the principles of OpenAI.

00:40:05:07 - 00:40:27:09
Unknown
And it's once again the hubris to think that he can dictate how this should be used. I wrote a quick little piece on Medium, out of a thread, saying that we've got to remember that the technologists don't own the technology. They think they do, and for a while they do. But inevitably, in everything I write and everything I see, the technologists and the technology fade into the background.

00:40:27:09 - 00:40:48:14
Unknown
They fade once these things become familiar and once other people use them. And with a technology that is made to be easy to use, that's going to happen quickly. So a lot of the problem right now, and I think this notion of the two narratives, is that we don't like the jerks who are putting out this story. Yeah.

00:40:48:15 - 00:41:15:10
Unknown
Whether it's the utopian view or the dystopian view. This is why I prefer to listen to folks like Yann LeCun, and even Jensen Huang, who I don't think is utopian in what he says; he's optimistic. And Fei-Fei Li and folks like that, because they communicate, as technologists, a more reasoned view: it's amazing, and we're building it, and it's going to do great things.

00:41:15:10 - 00:41:37:25
Unknown
But stop with the hyperbole. The hyperbole is harmful both ways. So thank you for that little exegesis there. Yes, indeed. And I want to give a thank-you to Ozone Nightmare for the super chat, which is now blocking Jeff. Here, let me move this so we can actually see you, Jeff. Ozone says trust is the core issue that I hear from non-tech people.

00:41:37:25 - 00:42:20:04
Unknown
Yeah, I hear that too, absolutely. They don't believe there will be effective controls to stop the worst excesses of a tech industry they have become actively wary of. Yeah. And there's a lot of pushback. I was just listening in the car, dropping off my kids at school this morning, and on NPR they were talking about this growing movement among the younger generation to put the devices away, recognizing that a "screen-addicted" world, or lifestyle, is something the kids of the younger generation didn't have a say in. It was just put upon them by

00:42:20:04 - 00:42:40:27
Unknown
their parents. And now they're wanting distance and time away from their smartphones and away from social media and technology. Yeah, but the Australian ban on people under 16 on social media: by one report I heard last week, 60% are evading it, which says the kids want to be on.

00:42:40:27 - 00:42:57:20
Unknown
I think it's a parent-imposed thing, where the parents don't trust their own kids with this technology that the kids are going to have to learn how to use. And I don't think a ban is the answer. I'm curious, Jason. I was going to ask this a minute ago; I'll ask it now.

00:42:57:22 - 00:43:22:18
Unknown
Since you're a parent and you get to see other parents at swim meets and lacrosse tournaments and PTA meetings and such things: do folks know that you're an AI guy, a technology guy? And do they ask you about it, in either fear, curiosity, or need for advice? Yes. So if I go to, like, a swim meet or whatever...

00:43:22:20 - 00:43:43:01
Unknown
People figure it out eventually. I don't necessarily go right into my conversations and say, hey, I'm Jason Howell, right? But it kind of comes up. Actually, as my kids have gotten older, they're the ones that end up telling their peers, oh, my dad's on YouTube, you know what I mean?

00:43:43:03 - 00:44:01:07
Unknown
And then the parents find out that I cover AI. But I will say, it doesn't come up often, and when it does, rarely is it a repulsion, like, oh, I hate AI. It's usually just curiosity. It's like, yeah, I keep hearing about AI, and I hear the good and I hear the bad.

00:44:01:08 - 00:44:22:08
Unknown
Like, what is your take on it? And, oh, that sounds actually really cool, I'd love to check that out. That sort of perspective. Yeah. I think... I hate polling, and I won't go into my whole shtick here, but I always quote James Carey: the polls preclude the conversation they're meant to measure. And I think when a pollster calls and asks, what do you think about AI,

00:44:22:10 - 00:44:46:22
Unknown
The right answer seems to people to be, well, it's dangerous and I hate it. But in fact, the usage shows, if OpenAI really has a billion weekly active users, that the polls lie. So yeah. Or people are saying one thing and clearly doing the other. Maybe there's shame in it. It's like, oh, I don't watch the networks, I just watch PBS.

00:44:47:00 - 00:45:16:09
Unknown
Yeah, BS. And then, how do you know who Vanna White is? Right. Hey, Jeff, you put in something that I immediately clicked with. I thought this was cool because I hadn't heard of it before, and apparently it's totally a thing: Chloe vs. History. Like I said, it was very off my radar, but it's kind of a cool, interesting way of... I mean, clearly it's done with AI.

00:45:16:12 - 00:45:43:10
Unknown
This is AI-generated video content. You can find it on YouTube; the channel is Chloe vs. History, launched very recently, and the subscriber count is jumping. It's built around this concept of a modern, fictional Gen Z woman who time-travels to major historical events and then vlogs them like she's actually there.

00:45:43:10 - 00:46:02:24
Unknown
So she might go to ancient Rome, like the video I'm showing for the video version right now, or she might go to the sinking of the Titanic and interview people inside the Titanic. And it's an interesting approach that doesn't immediately smack of the "AI slop" word that gets thrown around a lot.

00:46:02:25 - 00:46:26:14
Unknown
It's built upon historical documentation and information and imagery. So if there's a historical artist's rendering of what a scene in ancient Rome looked like, a lot of it is built around that specifically and then generated from that. And so things look... I don't know, it's kind of impressive. Yeah, it's very impressively done. The guy who does it is named Jonathan Laramie.

00:46:26:14 - 00:46:50:15
Unknown
And I watched an interview with him about how he does it; he put out a £69 e-book to tell you how to do it. I don't know that it's terribly revealing, but he's very interesting. He started something called Majestic Studios. And the way he started... hold on, let me pull up the studio's info. No, not that Majestic Studios.

00:46:50:15 - 00:46:54:24
Unknown
Jesus. Not diamond rings.

00:46:54:26 - 00:47:01:20
Unknown
That's a side business. So he started by taking.

00:47:01:22 - 00:47:22:10
Unknown
famous paintings and bringing them to life in brief snippets. So, what was it like to live in Edinburgh? Pardon me: Oxford, Paris, York, Manchester are a couple that he already has up. And so...

00:47:22:13 - 00:47:47:02
Unknown
It caught his imagination. He loves history, and he wants to know what it was like to live in these periods. And I'd love to use a technique like this to, you know, go back to Gutenberg, to take us back to those eras. The problem is, I think it's a lot easier to recreate a place than it is to recreate a person or a process, and I think it'd be very hard to get it right.

00:47:47:02 - 00:48:04:09
Unknown
I used one of these tools; there's a new kind of browser out that tries to present things graphically. And I said, explain the Linotype to me. And it had the process right, but the machine was just a mess; it doesn't know how to do it. What's impressive about this is he does snippets. He has to do a lot of editing.

00:48:04:09 - 00:48:30:01
Unknown
He acknowledges that crap gets through; there are four-legged chickens in some scenes. But he writes a prompt and a script that feels, from what I've read about ancient Rome, legitimate. She goes into the baths and acknowledges, well, yes, you don't have to tell me, I know I wouldn't be wearing any clothes right now, but hey, YouTube. It's very self-aware, the way it's done.

00:48:30:06 - 00:49:02:21
Unknown
Yeah. And it's engaging. Yeah, it is engaging. I mean, it's very convincing too, I have to say; the render looks great. I'm sure if I looked really closely at background details and stuff, I'd find some serious tells. But I really like this idea of taking these classic historical images and converting them into some sort of realistic interpretation.

00:49:02:24 - 00:49:25:18
Unknown
I mean, I don't immediately look at this and go, oh my goodness, AI-generated stuff. It's really convincing. How long is the video in total for Rome? This one that I'm looking at right now, the ancient Rome vlog, is 9.5 minutes. So it's also impressive that he manages to get a consistency in the character of Chloe.

00:49:25:25 - 00:49:34:25
Unknown
Yes. Through the whole nine minutes, it doesn't forget. He obviously has to stitch together all kinds of things, and it takes time. It's

00:49:34:27 - 00:49:59:10
Unknown
wearing, but it's impressive. The other one that he has up is time-traveling to the Titanic, which I haven't watched yet. Yeah, a lot of people have: 1.3 million views, posted two weeks ago. People love this stuff. Yeah. So I thought it was a use of AI that's creative, tied, we hope, to a decent educational purpose, with some credibility.

00:49:59:10 - 00:50:17:14
Unknown
It's on him to get the script right, to edit it, and to make sure it really adheres to actual history and all that kind of stuff, because, you know, the history nerds are going to tear it apart the second they see anything that sits outside what they expect to see.

00:50:17:16 - 00:50:38:15
Unknown
Yeah. So anyway, I thought it'd be interesting for our audience to take a look at what's possible. And when you think about Hollywood: there are all kinds of stories this week, not about AI but about what's already happening in the mass media business. Shoots in California are way down; the unions are way nervous.

00:50:38:18 - 00:51:03:27
Unknown
What's happening here? Even unscripted reality TV shooting is way down. Original shows are way down. Mass media is dying before our eyes and everybody's nervous about this. But you see something like this, and you can see someone creating something that's informative or dramatic or entertaining or funny, using these tools, without the expense of studios and, yes, frankly, actors as well.

00:51:03:28 - 00:51:25:16
Unknown
And it opens up new frontiers of creativity. As a teacher, I'm just eager to see what students could do with stuff like this. So much reaction around AI-generated video, and videos made like this, from people who aren't already sold, is incredibly negative, usually. You know, like I said, the word "slop" gets thrown around and everything.

00:51:25:16 - 00:51:54:22
Unknown
But when you look in the comments on something like this, you see a lot of people saying, you know, I normally hate this sort of stuff, but this is the right way; this is done really, really well. Which I think proves there is a point at which these things can be seen as useful, as unique, as novel: a new form of media and entertainment, and that it's possible to make more people happy with this sort of stuff than before.

00:51:54:22 - 00:52:20:20
Unknown
So in the show notes you'll find both Chloe and a video interviewing her creator. Yeah, very interesting stuff. And then, along the historical lines, here is Taki, which is very interesting. This is a 13-billion-parameter vintage language model. It's trained exclusively on text from 1930 and earlier, so it doesn't have access to the internet.

00:52:20:24 - 00:52:49:04
Unknown
It's locked off, and all of its source data is from 1930 and earlier. So essentially you can interact with it and get the language and worldview of the early 20th century, kind of locked in. If you ask it about the future, it imagines what it expects from the future based on what it knows at the time.

00:52:49:07 - 00:53:08:22
Unknown
So if you ask it about 2026, it's going to imagine a world of steam-powered ships and railroads and all that kind of stuff, because it's working on the knowledge tied to that particular time period. Go ahead. I was just going to say, it considered a Second World War unlikely. Right.

00:53:08:24 - 00:53:29:01
Unknown
There's, you know, there's that. Yeah. I asked it: is the world at risk of a second great war? "No. Wars are becoming more and more unpopular, and the growing intelligence of nations renders them less likely to break out. Though the area of hostilities may be increased, there is a strong probability of their duration being diminished." Nine years later...

00:53:29:02 - 00:53:50:10
Unknown
I mean, the war nine years later, and the destruction 15 years later. So I tried to quiz it about the importance of the vacuum tube, because I'm thinking about this, trying to get some sense of media at the time, whether they understood it. And it didn't satisfy me; it went off. Yeah, try it and see what it tells you. We'll see.

00:53:50:13 - 00:54:13:12
Unknown
Vacuum tube. Let's see here. It defines it first. Yeah. It won't let me... I apparently made the screen too large. "Share with me the importance of the vacuum tube." I probably could have come up with a better prompt. So it's just defining it, and the history is off. Well, it's off: "the vacuum tube" generally meant the triode vacuum tube, which was electronic.

00:54:13:13 - 00:54:36:18
Unknown
It's going back to vacuums in 1650, creating vacuums. So there it goes, down that rabbit hole. It's into Newton now. It's going to be quite a while before it gets to the 20th century and the birth of electronics. It's like 300-baud modem speed right now, maybe 1200 baud.

00:54:36:18 - 00:54:51:18
Unknown
But yeah, it's pretty... It's now up to 1700. Okay. Nope, skip to 1800. Let's give it another second here. Does it get to Thomas Edison and the Edison effect?

00:54:51:20 - 00:55:16:04
Unknown
Nope. It stopped in 1814. What about Thomas Edison, I asked. Let's see here. Okay, and now I'm queued, because there's like one single access point to this thing. Anyway, I think it's an open-source model. It is; it's on GitHub. So you can download it and run it on your own, but you've got to run it on the proper equipment.

00:55:16:10 - 00:55:48:02
Unknown
Yeah. So it's fun. And I think it's a really interesting use of LLMs: to fence them into a given corpus. Yes. Yeah. What intrigues me about this is that we place a lot of importance on this technology; we make a lot of claims about what it's going to do for us in the future and how it's going to solve all of these problems.

00:55:48:02 - 00:56:09:12
Unknown
And doing something like this gives us a sandbox in which to explore: okay, based on what it knew then, could it have predicted the things that came in the future? And it kind of seems like... I don't quite know how to phrase where my mind is on this.

00:56:09:13 - 00:56:29:21
Unknown
It proves a little bit of something about large language models, that maybe we give them too much credit: oh, they're going to be able to do all these things. But when we go back in time and give one just that information and say, okay, tell us about the future, tell us what's likely to happen or how to solve this problem,

00:56:29:21 - 00:56:54:16
Unknown
It can't quite get there the way we were able to get there in reality. So what does that say about modern large language models and their ability to propose novel, inventive ideas for solving problems in the future? Clearly we're talking about different levels of information here. But maybe it says something about LLMs as a technology when we do this.

00:56:54:19 - 00:57:19:00
Unknown
My friend Samir Arora worked for Apple way back in the day and has created a number of things since, including Glam. He's got a company now that is using AI trained on a high-quality corpus of medical information. That's the kind of stuff I really want to see more of. And it goes to what LeCun says about specialized LLMs, specialized models, specialized machines that we create.

00:57:19:01 - 00:57:38:03
Unknown
I think that's where the real potential is, rather than having the thing that thinks it can do everything and thus is dumb about most of it. Right, right. Yeah. All right. Well, we're going to take a quick break. But real quick: if you're enjoying what we do each and every week, go to Apple Podcasts, go to...

00:57:38:06 - 00:57:53:03
Unknown
You know, whatever your podcatcher is, if it has some sort of a rating or review process, throw us a review. We really appreciate it; it helps get the word out. But we'll take a break. We've got a speed round on the other side, so hang tight. We'll be back in a second.

00:57:53:05 - 00:58:22:21
Unknown
All right. So first up, we've got Manus. This is the two-point-something-billion-dollar acquisition by Meta. Meta has been working on this acquisition of Manus, which is a startup with Chinese roots, and China has blocked the acquisition, at least for now, on national security grounds. It's based in Singapore but has Chinese roots. China's National Security Commission, chaired by Xi Jinping, made the call.

00:58:22:21 - 00:58:44:12
Unknown
And it's actually a reversal: Beijing had approved the deal back in December, and now they're changing that. But it's interesting, because Meta has been absorbing all of this information and data about Meta... sorry, about Manus and its systems and all that, since the deal closed. So how do you put that back into the basket?

00:58:44:12 - 00:58:54:29
Unknown
I don't know. But, you know, the coverage says it reveals the tension in US-China relations. I think that's true. And...

00:58:55:01 - 00:59:19:06
Unknown
The founders of Manus, I think, had gone to Singapore. I think they had left China and saw the opportunity to break free, but they couldn't. And I don't know what happens to Manus now. Yeah, it'll be interesting what the long-term cost of this is. Meta loses something cool that it bought, okay. But I think it's hurt badly.

00:59:19:09 - 00:59:42:19
Unknown
Yeah. Big time. Interesting. Taylor Swift is moving to trademark her voice and her image, to protect them against, of course, unauthorized AI use, because a lot of the up-and-coming music generation systems that aren't the big major players are probably doing things with data they don't have legitimate access to.

00:59:42:21 - 01:00:05:23
Unknown
A lot of times they can reproduce a voice like Taylor Swift's, and you can make songs that sound like Taylor Swift. She has filed three trademark applications with the US Patent and Trademark Office: two are sound trademarks for her voice, and one is a visual trademark. And, you know, Matthew McConaughey, the famous actor, did something along these lines recently as well.

01:00:05:23 - 01:00:37:16
Unknown
Now, this is not to say that she can successfully trademark her tonality. Yeah, she's trademarked two moments of her voice saying "Hey, it's Taylor Swift" and "Hey, it's Taylor." Those are trademarked. So it's not the timbre. Interesting distinction. Okay, so because trademark doesn't go that far, as long as what you're listening to doesn't explicitly say "Hi, I'm Taylor Swift" or "Taylor" or whatever...

01:00:37:21 - 01:00:59:11
Unknown
Well, she'll fight it, sure, but yeah. And then the image is a picture of her holding a pink guitar with a black strap, dressed in a multicolored bodysuit with silver accents and boots. It's associated with recent performances, so I think this is more of a symbolic act of saying, stay away from me, watch out for me. And I get it.

01:00:59:11 - 01:01:29:22
Unknown
But other famous sound marks include Netflix's ta-dum and the NBC chimes. Yeah. Oh, maybe I'm not supposed to say that. Yep. We can't say it now. Dang it, I just fell for it. Oh, well. Interesting. I'm sure we'll see more along these lines as they all figure out: is this effective? Is this going to accomplish no one being able to use my voice and likeness? That remains to be seen.

01:01:29:29 - 01:01:57:25
Unknown
YouTube is testing a chat-style search. It's called Ask YouTube. Instead of your standard grid of videos and thumbnails and all that kind of stuff, you type in your natural-language question and you get structured text, you know, typical LLM stuff, I suppose. But along with that you also get short clips, longer clips, timestamped links, that sort of stuff that ties into whatever you're looking for.

01:01:57:25 - 01:02:19:11
Unknown
So if you're in this thing researching a trip from one place to another, it might pull back clips and videos that tie into that, and then you can go off on a tangent and find, like, a coffee shop, and it'll pull that into the thing. It truly is like a version of Gemini that's specific to YouTube and interactivity there.

01:02:19:13 - 01:02:50:07
Unknown
So interesting. Yeah. Okay. Not surprised. It's available through YouTube Labs for US Premium subscribers. And yeah, go check it out. The interfaces we're used to are going to change, and we'll see what works and what doesn't. Just like "Hey Siri," who knows whether it'll get uptake. Yeah. And then finally, Anthropic's Claude has a bunch of new creative-tool hooks that connect into the Claude app.

01:02:50:07 - 01:03:20:21
Unknown
So Photoshop, Blender, Ableton, among a bunch of others. Personally, I'm a big fan of Ableton; it's music production software. What it gets in the realm of Ableton, anyway, is, I think, the entire corpus of Ableton manuals and reference documents and stuff like that. So if you run into an issue with the software, it's a good source to go to to figure your way around it.

01:03:20:22 - 01:03:39:24
Unknown
I'm sure there are other ways to use it. I'll be curious to play around with it, but yeah, nice to see them adding a whole bunch more creative tools into Claude to build out that feature set even further. Yeah. And it's interesting; it's a new definition, in a sense, of distribution.

01:03:39:24 - 01:04:08:22
Unknown
It's not a distribution of content, it's a distribution of functionality. Yeah. Which is interesting, super interesting. And that is what we've got. JeffJarvis.com for all things Jeff, all the books. Anyway: Hot Type is in process, preorder now; The Gutenberg Parenthesis; Magazine; The Web We Weave; and probably more in the coming months ahead. The next proposal is on its way to my editors.

01:04:08:22 - 01:04:32:00
Unknown
So we shall see. Dang. So busy. That also includes, but differently, the new book series with Bloomsbury, which is about intelligence, AI, and humanity, and which I imagine will eventually end up here. Is that right? Yes. I actually asked my son to figure out how to put it there, since I'll mess up the aesthetics. Yeah, yeah.

01:04:32:01 - 01:04:56:01
Unknown
Cool. I'll look for that as well. You can find me if you've got a podcast and you need help thinking through how you do this podcast thing. If you're like, "Hey, AI Inside, they know what they're doing," well, you know, we produce it, and I can help you think through how you produce yours. So, Pod Tune-Up for that. aiinside.show for everything you need to know about this show.

01:04:56:01 - 01:05:23:17
Unknown
It's all linked there, including a link to our Patreon: patreon.com/aiinsideshow. And we have some amazing executive producers who help us week in and week out: Doctor Do, Jeffrey Marikina, Radio Asheville 103.7, Dante Saint James Bond, Derek, Jason eye for Jason Brady, Anthony Downs, Marc Starker, and Karsten. Thank you so much for your support: patreon.com/aiinsideshow.

01:05:23:17 - 01:05:45:00
Unknown
And finally, we don't want to forget to thank a couple of folks who help us each and every week on things on the back end: Daniel Croft, fan of this show, who helps with some shorts support behind the scenes, and Victor Bogart, who helps edit, because boy, I just run out of time these days. Without Victor, I couldn't get this thing edited at quite the same speed.

01:05:45:00 - 01:05:59:10
Unknown
So thanks to both of them. Jeff, thanks for hanging out with me today and always. I really appreciate it. Always a pleasure; I always learn. Right on. All right, everybody, thank you for watching and listening. We'll see you next time on another episode of the AI Inside podcast. Take care, everybody.