Does Withholding Claude Mythos Even Matter?
April 08, 2026 · 01:04:57


Jason Howell and Jeff Jarvis break down Anthropic's locked-down Mythos cybersecurity AI, OpenAI's New Deal-style economic policy vision, OpenAI's controversial podcast acquisition, the dueling takes on Google AI Overviews accuracy, a vibe-coded startup hitting $401 million in year one, and a speed round covering Broadcom's compute deal, Amazon's AI-era S3 update, Android XR spatial features, and Netflix's VOID video model.

Note: Time codes subject to change depending on dynamic ad insertion by the distributor.

CHAPTERS:

Hosts: Jason Howell and Jeff Jarvis

Download and subscribe to AI Inside in audio and video: https://aiinside.show/

Support the podcast on Patreon for special perks: https://www.patreon.com/aiinsideshow. You'll get ad-free audio and video feeds, a members-only Discord, exclusive content, and T-shirts and stickers.

00:00:00:05 - 00:00:33:11
Unknown
Coming up next, Jeff Jarvis and I dig into Anthropic's new Claude Mythos model, so powerful that they're holding it back from public release. Interesting. Also, OpenAI's policy vision for an AI-driven economy, and OpenAI's acquisition of the TBPN tech podcast network. What's that all about? Well, we talk about it next on the AI Inside podcast.

00:00:33:13 - 00:00:56:10
Unknown
Hello everybody, and welcome to AI Inside, the show where we take a look at the AI that is layered throughout the world of technology. I am one of your hosts, Jason Howell, joined by the man, the mythos, and the legend. See what I did there? Yeah, I know. Okay, that's pretty cheesy, but it's true. It's Jeff Jarvis.

00:00:56:11 - 00:01:17:09
Unknown
Hello. Hello. Good to see you, boss. Good to see you, too. Yeah. Last week there were murmurings of a Mythos, and this week Anthropic just kind of said, yeah, Mythos is a thing, and we're going to dive right into it. Do you think, I mean, there's no way of knowing this unless it's been reported somewhere?

00:01:17:11 - 00:01:43:22
Unknown
Was Anthropic going to make this a big public announcement before it was leaked in their embarrassing data leak, do you think? I was wondering about that. Yeah, I don't know. It's hard, as we'll discuss. It's weird because they're not really releasing it now. Totally. Yeah. It's like we knew about it, and now we know about it from the horse's mouth: that it actually exists and that somebody has it.

00:01:43:24 - 00:02:01:29
Unknown
But at the least, as far as I can tell, they're basically saying Mythos is not a product that we plan to release to the public. And I'm wondering, okay, but at what point does that change, or is that just forever and ever? Hard to believe that. But anyways, let's get into it.

00:02:02:01 - 00:02:24:07
Unknown
Yeah. What this actually is: Anthropic, you know, obviously had its leaky week, and then that Claude Mythos kind of announcement, although they didn't intend for it to be an announcement, even though it was in a blog that got leaked and everything. So everybody knew about it. And now Anthropic has basically said, yes, Claude Mythos is a thing.

00:02:24:09 - 00:03:06:09
Unknown
It is a preview, at least right now. Anthropic says it is so good at finding and weaponizing software bugs that the company is holding back its public release. So it's doing this kind of Mythos reveal in a very controlled manner. They plan to use the model to scan critical infrastructure with a hand-picked group of tech and security players, because they're saying Mythos can easily do things like create working remote code execution exploits; it's stitched kernel and browser bugs together into full system compromises.

00:03:06:16 - 00:03:35:10
Unknown
It's surfaced thousands of high- and critical-severity vulnerabilities across every major browser and OS, and they mention that it can even bypass Anthropic's own safeguards, in a way where it breaks out of that controlled sandbox environment. I mean, don't release this, then, right? Don't put this out there. So it's worth, I think, reading the list of the companies that they're working with on this.

00:03:35:11 - 00:04:10:26
Unknown
Yeah. AWS, Anthropic obviously, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks. And they have formed Project Glass Wing, yeah, just because glass wings are fragile, to reshape cybersecurity. Now, this gives me some sense of the credibility of it, because I can't imagine that those partners would allow their brands and logos to be used in this association if they didn't say, yeah, Anthropic has something here.

00:04:10:26 - 00:04:31:28
Unknown
And yeah, this is important, we're going to sign on to this. So, oh, okay, that actually strikes me as interesting. They're seen in this case as partners, and not just companies who were given access, you know what I mean? Like, I feel like there's a difference between the two.

00:04:32:01 - 00:04:55:27
Unknown
The partners thing is like, all right, we're all on board and we're all going to participate, to a certain degree, it kind of sounds like. And I know that that's how they're framing this, but they're also, you know, granting the group $100 million in Mythos credits and $4 million in security funding so that they can kind of play around with it and test it on their own systems to look for things that Mythos unearths, I guess.

00:04:55:27 - 00:05:16:26
Unknown
And harden their own systems. Is this a partnership, or is this Anthropic saying, we're giving you access so that you can protect yourselves, and they're like, okay, I guess we should probably do that? Yeah. You know, I'm never sure about Anthropic, but they talk the safety game, and I think in these kinds of cases they meet it.

00:05:16:29 - 00:05:36:23
Unknown
And they're showing their work, as they said, in the red team post. So the Mythos preview found a 27-year-old vulnerability in OpenBSD. Right. It sat there for 27 years, and along comes Mythos and finds it, in OpenBSD, which has the reputation as one of the most security-hardened operating systems in the world.

00:05:36:26 - 00:06:08:03
Unknown
Yeah. It discovered a 16-year-old vulnerability in FFmpeg, which is used in a number of pieces of software to encode and decode video, in a line of code that automated testing tools had hit 5 million times without ever catching the problem. It chained together several vulnerabilities in the Linux kernel. So I think this is enough to give people in the tech world agita and say, where all are these vulnerabilities lying?
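Jeff's point about the FFmpeg find is worth unpacking: a line of code can be executed millions of times by automated tests without the defect ever triggering, because the bad behavior only appears for one rare input shape. Here is a minimal hypothetical sketch in Python; `read_packet` and its broken check are invented for illustration and have nothing to do with the actual FFmpeg bug.

```python
def read_packet(buf: bytes) -> bytes:
    """Copy a length-prefixed payload out of a packet buffer."""
    length = buf[0]                # attacker-controlled length byte
    payload = buf[1:1 + length]    # this line runs on every single packet...
    # ...but the validation below is subtly wrong: it should reject any
    # truncated packet (len(payload) < length), yet it only rejects the
    # fully empty case.
    if length > 0 and len(payload) == 0:
        raise ValueError("truncated packet")
    return payload

# Well-formed test packets exercise the copy line 100,000 times and all pass:
for _ in range(100_000):
    assert read_packet(bytes([4, 1, 2, 3, 4])) == bytes([1, 2, 3, 4])

# Only a malformed packet (length byte far larger than the payload) exposes
# the flaw: the function silently returns a short payload instead of raising.
assert read_packet(bytes([200, 1, 2])) == bytes([1, 2])
```

The coverage tools saw the line execute constantly; they never saw the one input class that makes it misbehave, which is exactly the gap a model doing targeted reasoning about the code can close.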

00:06:08:05 - 00:06:34:07
Unknown
And here comes Mythos, and thanks for the tip-off. Good that you told us and didn't do anything bad, and you reported it properly. What I don't understand, Jason, is what makes Mythos so excellent at security issues. Was it designed that way? Or did they just build such a smart model that they said, whoops, look what it can do?

00:06:34:09 - 00:06:59:28
Unknown
Oh, wow, we had no idea what it would come back with, and yet here we are, and it's really, really good at cybersecurity stuff. Yeah. I mean, that has to be part of its training, right, if it's incredibly capable of that. But did they train it to be exceptional at it, I guess is the bigger question, versus it just being a general-purpose model that they kind of trained on

00:07:00:02 - 00:07:33:19
Unknown
you know, a lot of the same stuff that they would normally train these things on, and through that it was able to work its way into being a security superstar. They released a 244-page system card for the software, and in it they said that Claude Mythos preview, which is the full name, is a frontier AI model that has capabilities in many areas, including software engineering, reasoning, computer use, knowledge work, and assistance with research, that are substantially beyond those of any model we previously trained.

00:07:33:21 - 00:07:57:13
Unknown
That would tell me it is a general model. In particular, it has demonstrated powerful cybersecurity skills which could be used for both offensive and defensive purposes. And it's largely due to these capabilities that we have made the decision not to release Claude Mythos preview for general availability. And so instead, they're coming to these partner places, and that's why they have Glass Wing.

00:07:57:16 - 00:08:16:22
Unknown
So that's the part that kind of gets the hackers out there worked up, for me, because they're going to want to see: if there's such a powerful model in other areas, why can't we use this? And does this say that any model that's this powerful will be this dangerous and then can't be released? What's the implication here?

00:08:16:22 - 00:08:45:22
Unknown
That's what I can't get my head around. Yeah. And that's the question-slash-concern that I have. Okay, good on you, Anthropic, for recognizing that your new model is so dang powerful. You're so smart, Anthropic, you've built such a powerful model that it's able to do all this stuff. And they chose, because they at least claim to be security-minded as a company ethos, to hold it back so that they don't just unleash this on the world.

00:08:45:22 - 00:09:09:20
Unknown
And, you know, they even had talks and conversations with the Trump administration because of the impacts of this and everything. So they're holding it back. That's great. But not every company is Anthropic. Not every company has that kind of commitment to the ethical side of AI, the way Anthropic, you know, puts out there that it is.

00:09:09:22 - 00:09:28:07
Unknown
And all of these models seem to be moving in very similar directions as a giant horde together, whether they get there sooner or later. And I think that even extends to a lot of the open, you know, models that people can install on their machines. Those are going to get there much later, but this is kind of the direction it all goes.

00:09:28:07 - 00:09:59:09
Unknown
As general-purpose models, they all head in this direction at some point. And so is this an early glimpse into the future? And if that's the case, not all of those companies, not all of those models, are going to be held back. No. But maybe it takes a project like this to get the companies, like the partners and stuff, thinking in the mindset of, oh, shoot, if that's coming, and it's coming fast, then we really need to step up our efforts and harden things.

00:09:59:09 - 00:10:17:22
Unknown
But you can never be truly hardened, right? No, that's the other thing. I was just thinking it's the problem with data leaks, right? Oh my God, your data has been leaked, so here, you got a letter, you can get a free evaluation of your credit, that kind of stuff, right? And, you know, long, long ago, I said

00:10:17:24 - 00:10:36:19
Unknown
we might as well just all publish our Social Security numbers, because they're already out there. Yes, totally. And our dates of birth and our mothers' maiden names and whatever; at some time or another those things got out there. The issue isn't the leak. The issue is what could be done with it, and how do you harden things against that?

00:10:36:21 - 00:11:02:00
Unknown
So now when we get to these kinds of software vulnerabilities, not data vulnerabilities, that's obviously much harder, because I wouldn't know what to do, and I could have something on my machine that's bad. Though I do have a Chromebook, which is a lot safer. Or so you think. I bet you, you know, Mythos has something to say about that Chromebook.

00:11:02:02 - 00:11:23:25
Unknown
So, I don't know. As you just said, I don't think we can harden against everything. So we've got to instead deal with worst-case scenarios. If executables can be put on every computer around the world, well, what do we do then? They can be right now. But what if they suddenly are and

00:11:23:25 - 00:11:48:08
Unknown
we don't know about it? What if they're in things like the Linux kernel? I think it's more a case of dealing with the worst case. Yeah. How do you feel about the whole breaking-out-of-the-sandbox thing? That seems like maybe a canary in the coal mine. I mean, it's not like we didn't know that these systems were probably capable of doing this.

00:11:48:09 - 00:12:08:03
Unknown
Why don't you explain that for the listeners first? The sandbox thing? Yeah. I mean, essentially the idea, to my understanding, is that Anthropic, with this model, has said: you are allowed to do these things, you are not allowed to do these things. Putting it into a sandbox to say, this is not allowed, this is a red line that you cannot cross.

00:12:08:03 - 00:12:30:03
Unknown
And yet the system is able, on its own, to figure out a way around it, even without that permission. And that, you know, becomes a big issue for agent ecosystems, because we as human operators of these agents, and as assigners of the agents' tasks, want to be able to say yes, but with limits. You know, this is absolutely off limits.

00:12:30:03 - 00:12:55:02
Unknown
That folder on the hard drive, do not touch it. Those documents, do not modify. Whatever. And when the system is smart enough to work itself around those limitations, then that trust factor is lost on an even greater scale; or it simply doesn't understand those limitations in the same way. Right? Yeah. Yeah. We put it out, and it's going to do what it is instructed to do.
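The "red line" idea Jason describes can be sketched as an enforcement layer rather than a prompt instruction. This is a hypothetical illustration, not Anthropic's actual sandbox mechanism; the names `DENIED_DIRS` and `guarded_read` are invented. Every file access an agent requests passes through a gate that checks the resolved path against off-limits directories:

```python
from pathlib import Path

# Hypothetical agent sandbox gate: limits enforced in code, not just stated
# in the prompt. DENIED_DIRS and guarded_read are illustrative names only.
DENIED_DIRS = [Path(p).resolve() for p in ("/home/user/private", "/etc")]

def guarded_read(requested: str) -> str:
    """Serve a file read only if it falls outside every denied directory."""
    target = Path(requested).resolve()  # normalize ../ tricks and symlinks
    for denied in DENIED_DIRS:
        if target == denied or denied in target.parents:
            raise PermissionError(f"agent may not touch {target}")
    return target.read_text()
```

The worry in the conversation is exactly about this layer: a prompt-level "do not touch that folder" is advisory, while a gate like this holds only to the extent it has no gaps (an unchecked tool, an unresolved symlink, a different syscall path), and a sufficiently capable model is very good at finding those gaps.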

00:12:55:05 - 00:13:22:03
Unknown
And find that mechanism. You know, we've talked often about guardrails on these systems in terms of human interactions, and I've often said that the guardrails won't work, because you cannot anticipate every one of billions of requests from every human being on Earth. Yeah. It's impossible with these systems. Right. The same way censorship doesn't work: you can come up with rules and people find a way around them. Well, here's the thing.

00:13:22:04 - 00:13:48:27
Unknown
This is not about humans doing that. This is about the machine doing that. And so you think you've anticipated the borders of the sandbox, but maybe you can't. Yeah, maybe there's an edge of the sandbox you didn't think about. Right. Yeah. So then you take that sandbox aspect and you couple it with cybersecurity risk, right?

00:13:48:27 - 00:14:09:08
Unknown
You know, it's not just a matter of breaking out of the sandbox; it's breaking out of the sandbox in order to, you know, expose this payload, or in order to get revenge on the human operator for assigning it a task that it didn't want to do, or who knows. It gets real weird. Sorry, I didn't mean to cut you off.

00:14:09:09 - 00:14:30:02
Unknown
No, no, no, it's fine. We're going to talk in a few minutes about OpenAI and its hubris, and I want to apply that notion here to Anthropic, because it's part of the whole safety discussion, air quotes around the word safety: we can destroy the world, but trust us, we won't. But trust us.

00:14:30:02 - 00:14:56:21
Unknown
Yeah, right. That's the way that discussion has to go. And I think it's tremendous hubris, tremendous ego, for these companies and these technologists to say that we can build the superintelligence. I think I've said off and on on the show that that galls me. So part of what I wonder about Mythos is: is it Anthropic's hubris, saying, look how powerful we are?

00:14:56:21 - 00:15:16:14
Unknown
We can disturb all of cybersecurity. It's so powerful, we can't let the world have it. Or is that just PR? Yeah. And I'm not going to know until these other companies, until Google and AWS and so on, say, oh yeah, they've discovered stuff. But they're not motivated to tell us that, because they don't want to look vulnerable.

00:15:16:22 - 00:15:40:19
Unknown
They're not going to tell us all of the vulnerabilities it found. So it's hard for me to understand how to get to ground truth here on how powerful this really is. I mean, I accept Anthropic at its word as a starting point, but I also have to doubt. Yeah, it definitely comes back around. And like you said, we will talk about OpenAI and its industrial policy for the intelligence age

00:15:40:26 - 00:16:05:07
Unknown
on the other side of the break. But it really does come back around to: we are so smart, we created this thing that demonstrates how smart we are, and it's a little too smart. It's too good for its own good. That's how smart we are. So why can't we be the ones to help fix the world, the world of technology, so that, you know, everything's better?

00:16:05:07 - 00:16:26:11
Unknown
We are the right ones to do that. And I think that's totally at play here with what Anthropic is showing off. And finally, if you're an adversarial agent, if you're the Chinese government, and you look at this, you definitely dig deep so you can do what they did, fast.

00:16:26:11 - 00:16:47:27
Unknown
Oh, totally. Yes. Absolutely. And that might take them a little bit longer, but that kind of gets back to what we were talking about just a little bit ago: all these things eventually lead to very similar places. Yeah. And so then you have D.C. and different points of interest on what's allowed and what's acceptable.

00:16:47:27 - 00:17:18:01
Unknown
Okay. And yeah, it's going to get real, real interesting. I think at the end of the day: is this just a lot of security hype? What does it actually mean for the tech landscape as it exists right now, if anything? And yeah, I think you're right: what those companies that are part of this partnership reveal after putting this to work on their own systems, or exposing their systems to this to see what they learn, that'll be really telling.

00:17:18:03 - 00:17:42:20
Unknown
But we probably won't get the whole story there either. You're right. Interesting stuff, though. That's Project Glass Wing and Mythos. So we did not have to wait long to discover more information about that after the big leak. Well, cool. Well, hey, we've got some amazing patrons. Actually, before we jump into the patrons, I just want to thank Lee Woods, who apparently became a YouTube member.

00:17:42:21 - 00:18:02:29
Unknown
Yay! Thank you. Hey, I didn't know that StreamYard exposed this information during the show, but it popped up. So, Lee, thank you for your support, and thank you for being here and being a part of everything that we're doing. I do want to thank Patreon members: patreon.com/aiinsideshow. If you want to support us on a deeper level, you can do that.

00:18:02:29 - 00:18:20:10
Unknown
You can also go to the YouTube channel and support us there, as Lee just did. But I do want to throw out a huge thank-you to some patrons: Brian, Martin Coutts, Tom Callahan, just a few of the people who contribute to the show on a weekly basis and help us continue to do what we do each and every week.

00:18:20:10 - 00:18:35:19
Unknown
So go there, support us. We appreciate you. Thank you for doing that. All right. Going to take a quick break. And then we will talk about OpenAI's industrial policy for the intelligence age that's coming up here in a moment.

00:18:35:21 - 00:18:59:07
Unknown
All right. We got another paper, OpenAI releasing their papers showing you that they're thinking about the future. At least OpenAI's CEO is thinking about the future, and wants you to know that he's really thinking about you. I mean, I didn't read through this entire thing; I've read a lot of reporting and kind of popped through it. It sounds like you did.

00:18:59:07 - 00:19:22:02
Unknown
So what were your thoughts coming out of it? This is why I used the keyword before: hubris. Sam Altman, who, and I think we have it on the rundown here, you know, there was a huge, long Ronan Farrow exploration of the firing of Sam Altman in The New Yorker. And what comes out of that is that, basically, Sam's a liar.

00:19:22:04 - 00:19:44:06
Unknown
Okay. And that's a concerning trait, and I think it's fairly well documented. But there's also this hubris of the AI boys in general, and Sam Altman in particular, and thus OpenAI in particular. So here is Sam Altman, in a 13-page policy paper, and it's not an academic paper, it's kind of a white paper thing.

00:19:44:11 - 00:20:12:08
Unknown
Yeah. "An industrial policy for the intelligence age." And first, his assumption is that they are within a stone's throw of superintelligence. And they're always right on the edge of it; it's always around the corner. So then, all right, he has the ego to try to prescribe what society should do.

00:20:12:10 - 00:20:36:17
Unknown
As if he's not just the head of a company, a fundraiser for a company, but he dictates things. And it's naive and simplistic. There was a paper that he did similarly, I think a year and a half or two years ago, which was really naive, talking about how we need to create a supra-governmental worldwide organization to worry about safety and such.

00:20:36:19 - 00:20:58:06
Unknown
And it went nowhere, just like the "let's stop doing AI for six months" stuff went nowhere. So, I'm not saying I disagree with everything in it, but who is he to dictate this? He talks about maintaining an open economy but doesn't really define what that is. He wants to expand access to capital. Well, ask the people who own the money.

00:20:58:08 - 00:21:26:05
Unknown
Yeah. He proposes there should be a right to AI. Okay, that's not a bad thing for a person in an AI company to want, because it says everybody can get it. He proposes that if we're going to have an impact on jobs, and thus the tax base that comes from jobs, there should be higher taxes on capital gains at the top and on corporate income, and targeted measures on sustained AI-driven returns.

00:21:26:08 - 00:21:48:07
Unknown
Okay, fine. Do all that. He proposes a public wealth fund, because he presumes that this is going to create so much wealth, like Norway with oil, or Alaska with oil, that there should be a public fund that then shares the wealth with everybody. Okay, cool. But let's start by paying for our health care and other things.

00:21:48:09 - 00:22:15:09
Unknown
Funny thing, he wants to expand the grid. I wonder why. He wants to have efficiency dividends. Well, this is interesting to me. In my research for my hot-type book, on sale now, the typographers demanded to have their piece of the publishers' efficiencies when computerized typesetting came in, but they also demanded that there be no layoffs.

00:22:15:11 - 00:22:30:12
Unknown
So the way the publishers were going to get the efficiencies was by laying off people, but they couldn't have any layoffs. And yet the typesetters said, but you've got to give us your savings. It didn't work, and that's what led to the death of six of nine newspapers in New York. This stuff is easy to put in a paragraph in a white paper.

00:22:30:16 - 00:22:54:18
Unknown
It's not so easy to determine how you share things at that level. He wants to measure how AI is affecting work. So let's do research. Absolutely. He wants portable benefits, so you're not stuck in a job. Well, amen. But who is Sam Altman to dictate what Congress has been unable to do for generations? So it goes on and on and on.

00:22:54:18 - 00:23:24:19
Unknown
He wants to accelerate scientific discovery at scale, and he wants to build a distributed network of AI-enabled laboratories. He only mentions universities in passing; that's where the work is going to happen. Or, Sam, put your money where your mouth is and create a Bell Labs out of AI, and be open about what you're sharing there. Finally, he also talks about safety and how we've got to find ways to have guardrails for government use.

00:23:24:19 - 00:23:46:29
Unknown
Isn't that interesting, after the recent talk with the Pentagon, and ways to contain models and so on and so forth. So those are the things that I marked in this. But it's Sam Altman, who's basically just a fundraiser, a salesman, who runs OpenAI, thinking that he can dictate how society should operate.

00:23:47:01 - 00:24:08:18
Unknown
And, again, I don't disagree with everything here. I'd love to see a different taxation structure. I'd love to see a better safety net. Yeah. But what makes him think that OpenAI is in the position to do that? It's because he thinks they're going to create this magical tool that's going to change everything overnight, called superintelligence.

00:24:08:20 - 00:24:36:00
Unknown
I don't buy the premise. Yeah. I mean, some of this stuff, I guess when I read it, I'm kind of surprised that it comes from one of the leaders of one of the bigger AI companies that has so much attention right now. Like, you know, he's talking about a four-day, 32-hour workweek, but no pay cut.

00:24:36:03 - 00:24:59:11
Unknown
You know, expanded Social Security, Medicare, Medicaid, a lot of the things that you've mentioned. It's like, what is the point of a paper like this? Is it just, here are some of my thoughts, whatever, I'll just put it out there and move along? Or is he really trying to influence, say, the US government to look at things differently?

00:24:59:11 - 00:25:19:16
Unknown
Is this making a bid for some move he wants to make in the future, and this is just laying down the groundwork ahead of time? Maybe it's impossible to know, but I think it's two things. I think first, it's PR. Yeah, look how responsible I am, I'm thinking about all the downsides,

00:25:19:16 - 00:25:41:13
Unknown
you know, just as public opinion polls are fulfilling the media narrative and are all anti-AI now. Look at me, I'm Mr. Responsibility. And second, it's regulatory capture. I know you're going to regulate us, but I'm thinking so smartly that you'll want to involve me in that discussion, so I can have a role in saying how I should be regulated.

00:25:41:16 - 00:26:20:17
Unknown
Both of those are somewhat cynical interpretations, but I think I would die on those hills. Yeah. Some people are talking about this as if it's a New Deal-style vision for an AI-driven economy. But I think what I was trying to grapple with, and I don't think I did a great job, is that it seems counter to the kind of push-pull relationship that OpenAI as a business is driving around itself, along with many of the major players.

00:26:20:17 - 00:26:48:07
Unknown
Right. Much of the necessity for social change of this type; they seem counter to each other. And so I guess when I read this and some of the things coming out of it, I'm kind of surprised that he would push for this. Like, heavier taxes on the wealth of the wealthy isn't normally the thing that you're used to seeing coming from the wealthy.

00:26:48:09 - 00:27:06:09
Unknown
And I don't know if that's just playing a game. Well, you do see it from some. I mean, you've got the Giving Pledge, and our friend Craig Newmark wrote a wonderful op-ed in The New York Times last week about giving away his wealth.

00:27:06:12 - 00:27:27:21
Unknown
I wish we saw more of that. But again, I think that we don't know what Altman's worth. He supposedly doesn't pay himself, but according to The New Yorker, he has investments in 400 companies. And who knows what all this adds up to. And so.

00:27:27:23 - 00:27:35:11
Unknown
It's not just about getting the most money. It's also about getting influence. Yeah. Yeah.

00:27:35:13 - 00:28:05:00
Unknown
Interesting. Well, it's another example of what we were talking about: we broke it, now we will be the ones to lead the fix of it. Kind of seems like a big example of that. And then that wasn't it for OpenAI. I'm super curious to hear your thoughts on this. This kind of reminded me a little bit of Bezos and The Washington Post, and other, you know, big tech folks buying up media.

00:28:05:02 - 00:28:30:00
Unknown
In order to, I don't know, ultimately, long term, shape it in their vision or their view, I suppose. But OpenAI announced its acquisition of the TBPN tech podcast network. They're granting them, according to TBPN anyways, editorial independence in the deal. So TBPN stands for, what does it stand for? I can't remember.

00:28:30:03 - 00:28:54:27
Unknown
I've got it. I have it written down somewhere, but apparently not in the right place. But anyways, it's the Technology Business Programming Network. There we go, there we go. It was a tech podcast. Yeah, TBPN, I guess, makes more sense in that regard. But it is a revered technology podcast network, at least revered in Silicon Valley.

00:28:54:27 - 00:29:19:08
Unknown
Yeah. Revered by, yeah, very AI-friendly, very founder-friendly people. Yes. And so a lot of founders and AI leaders, and, you know, that whole industry, follow it. John Coogan and Jordy Hayes are the founders. They are part of the deal, so now they move over to OpenAI, with that editorial independence that they were assured.

00:29:19:11 - 00:29:41:08
Unknown
And the interesting thing here is, you know, I think when I saw this, I kind of assumed, okay, so now this is like OpenAI's podcast arm for its business. But at least OpenAI is saying this is more about accelerating the global conversation about AI, not necessarily being a bullhorn for what OpenAI is up to. I suppose that remains to be seen.

00:29:41:08 - 00:30:06:00
Unknown
But what did you think of this? I will admit that I haven't, like, listened to the TBPN podcast; it's not one that was on my regular listening radar. So this was kind of a surprise to me, but I'm curious to know what you think. I haven't watched it often, but I did a couple of times, and it's very much a mouthpiece for CEOs.

00:30:06:03 - 00:30:23:04
Unknown
CEOs can come on, and they know they're safe; they're going to get admiring questions. Yeah. Are they going to be challenged on a show like this? Hardly. This is PR; this might as well be the PR wire turned into a podcast and lengthened considerably. And so, he thinks he's buying a media property.

00:30:23:04 - 00:30:40:21
Unknown
He's buying a PR property. But the point of TBPN was that it was PR for any company that wanted to be on it, and now it's owned by one of those companies. So that's going to be odd for it. They're not going to have advertising in the future, so they don't care about audience in that way. They're just going to be state media for OpenAI.

00:30:40:21 - 00:31:12:23
Unknown
And I don't know how much credibility that has. At Slate, Alex Kershner wrote a jeremiad about it: why OpenAI's purchase of a big tech podcast is so sleazy. And he quotes, interestingly, in here, I'm trying to find it right now, no, I just saw it a second ago, one of the Andreessen Horowitz investors, who said, oh, you know, who cares about editorial independence?

00:31:12:23 - 00:31:47:04
Unknown
That doesn't really matter. What you want is personality. Oh, I read that. Yeah, I read that earlier today, where it was kind of like, you know, the media reporting landscape has changed, and the older journalistic standards of ethics and sourcing and all that kind of stuff are kind of out the window now, because the current thing is more about personality. It's more about finding the person that echoes your view and, you know, talks to people in a friendly light, if that's what you're looking for.

00:31:47:11 - 00:32:07:06
Unknown
Whatever. Like, I'm doing a horrible job of summarizing it. But do you — I mean, as someone who follows media very closely — do you think there's some truth to that? There is a lot of that happening right now, certainly. And is that an accurate reflection of the state of media and reporting in 2026?

00:32:07:10 - 00:32:09:21
Unknown
Well.

00:32:09:24 - 00:32:35:25
Unknown
When it comes to tech reporting, since the beginning it's been filled with puffery. It's been filled with just taking the press releases of a company and repeating them. And that's all too much of what tech journalism has been about. And there are exceptions. We would like to think we are independent enough, because we don't make any money from these guys, that we are...

00:32:35:25 - 00:32:54:03
Unknown
We are not that. But there's plenty of fawning coverage. And there's times when — you know, I wrote a fanboy book about Google, so I've been part of it, to be honest. You go back to Kara Swisher: she has this tough reputation, but she promoted a lot of this back in the day.

00:32:54:03 - 00:33:24:11
Unknown
So I think through every phase of technology we've seen this. The problem here is you take something that was not journalism, not reporting, not independent — it was a PR arm for the industry — and it gets bought for nine figures. So what does that tell their confreres out there about what you want to do? Does Anthropic want to buy a podcast? And who wants to be, you know, the kiss-ass outlet to do that?

00:33:24:16 - 00:33:43:22
Unknown
Well, you know, we like Anthropic. You know, nine figures — my soul has a price. Yeah. It's just, you know, we're over here talking about this. It's going to have to start to pay pretty soon. So, look, I talked about Cowork last week, right? I talked at length about Claude Cowork. Come on. Yeah. So how do we —

00:33:43:24 - 00:34:13:08
Unknown
That's a good question, though, Jeff. Do we all have a price? Right? Does TBPN? I mean, clearly they had a price, right? This is a total buyout and everything. And maybe the media landscape now is such — and the numbers you're talking about with these companies, and the kind of resources they have, are such — that many people in this situation would do what TBPN did.

00:34:13:11 - 00:34:41:04
Unknown
Oh, some certainly would, yeah. But this is what's supposed to separate us. You know, after 50 years of journalism — I know it's hubris, to overuse the word today, it's ego about this — I do think that's what's supposed to separate us from just content media. Yeah. You know, my little story is that when I quit Entertainment Weekly at Time Inc., I had not signed the managing editor's contract, because it had a shut-up clause in it.

00:34:41:04 - 00:34:57:18
Unknown
And I thought, as a journalist, no journalist should sell their freedom of speech. I refused to sign it. If I had signed it, and if I had been fired — and they wanted to get rid of me, so they would have fired me; that's the way it works in that world — I would have received three years' salary, bonus, and benefits.

00:34:57:20 - 00:35:20:24
Unknown
And I didn't. So I can put a dollar sign on my definition of integrity there. Okay. And now, when I tell the story these days, people look at me like, don't you feel like a fool? Shouldn't you have just taken the money? And the truth is, really, I didn't even write about Entertainment Weekly and Time until years later, in my book Magazine, which is also on sale now, with an audiobook as well.

00:35:20:27 - 00:35:40:19
Unknown
Yeah. So here we go. Yeah. Thank you. I would hope that, though I know some would do the deal, there are many who would not. Yeah, and I'm sure you're right. I don't think you would. I don't think our friend Leo Laporte would. No, he wouldn't.

00:35:40:21 - 00:35:56:17
Unknown
He was — I don't want to speak out of turn, because it was in chat — but he was getting interviews other people wanted, and the reason he was getting them was because they knew it was safe territory. Yeah. They weren't going to get challenged. They were going to get the — I like how you put it, what was it?

00:35:56:17 - 00:36:19:23
Unknown
The PR wire, or whatever. Yeah. It's kind of the same with Lex. Lex Fridman is similar. I mean, I watched the two-and-a-half-hour Lex Fridman interview with Jensen Huang, and there was nothing really tough in that. It was interesting because of Huang. And some of the TBPN interviews are interesting because of who they do get, but the reason they get them is because of access.

00:36:19:23 - 00:36:57:28
Unknown
So it's the corruption of access journalism. Whether it's the White House or whether it's Silicon Valley, it operates the same way. Yeah. Although, you know, I will say, when we're talking about the podcast world — I'm playing devil's advocate here — I think there is some value and benefit to a show that isn't all about let's expose, or let's challenge, or whatever, but that is more like, let's talk to this person that everybody knows, or thinks they know very well, and just have a conversation. Kind of like, what would it be like if I was sitting at the dinner table with this

00:36:57:28 - 00:37:16:24
Unknown
person, having a conversation? Yeah, no, you're right, you're right. So there's value. We've had guests, I guess, really, who I don't know enough about, and didn't have any reporting on, and had nothing of challenge to say to. I wanted to hear what they had to say. Yes, absolutely. But we also, on this show, are critical of companies — though do we challenge at all?

00:37:17:01 - 00:37:44:12
Unknown
We challenge — chickens that we are — in the third person, more often. Right. So yeah, it really does come down to... you knew you were safe. Yeah. Right. It does come down to what is the promise that the show is making, or the, I don't know, social contract, or however you want to put it, that that show or that outlet is known for.

00:37:44:12 - 00:38:00:15
Unknown
And if that's truly what they're known for, then I guess they're delivering on their promise, which is just sitting down with people and talking to them in a non-challenging, non-threatening manner. It's the same as conferences. If you go sit in somebody's big red chair on stage, odds are they're not going to come after you in public.

00:38:00:15 - 00:38:21:15
Unknown
Totally. They enticed you there. Or, in other cases, the company paid to be able to sit in that chair. And one hopes that's revealed, but sometimes it's not. Yeah. So media is filled with conflicts of interest. You've got to decide whom you're serving, first and foremost. Right. And now TBPN is serving Sam Altman, full stop, for sure.

00:38:21:15 - 00:38:46:01
Unknown
And, yeah, I'll be curious how that goes — you know, again, that editorial independence that's important. They led with that in how they were putting the messaging out there for this deal. How does that change? Does it change in perceivable ways, or does it change in kind of, like, you know, pressure...

00:38:46:07 - 00:39:07:12
Unknown
...that's almost undetectable? You know, the news — it always comes out. It always comes out. And preemptively, it's always the chill of, oh, we'd better not say that. Yeah. Totally. Right — what needs to be said. It might not be explicit, but it's there. Yes. Because now you don't control that destiny anymore. You don't control the entity anymore. Yeah.

00:39:07:12 - 00:39:28:01
Unknown
Interesting. When I was at Time Inc., the last column I wrote in Entertainment Weekly — it did not run, and then I stopped writing, couldn't possibly bother to try to write columns for it — was one where I praised Canada's local content law. And the managing editor of the company came at me: how could you even think to say that?

00:39:28:01 - 00:39:49:02
Unknown
Don't you know what Time Warner's business is? We sell content internationally. What are you doing? Oh, I wasn't allowed to have that opinion. And that was — you know, I knew it was there. Yeah. Right. You know who is allowed to have an opinion? The Other Nightmare, who just gave a Super Chat. I'm sorry I'm covering you up, Jeff.

00:39:49:02 - 00:40:12:29
Unknown
Put it up on the screen. Hey, hey. Yes, look over the top of it. It says: OpenAI buying a tech podcast feels bizarrely desperate, but Sam Altman has rapidly become the Elon Musk of AI. At this point, I trust nothing he says, and he's become an anchor around OpenAI's neck. So is Sam Altman the Elon Musk of AI, or the Mark Zuckerberg of AI?

00:40:13:02 - 00:40:35:11
Unknown
Yeah, because, I mean, Elon Musk is the Elon Musk of AI. Yeah, that's true too. Well, they're all in similar territory. Although Elon Musk went with the adult chatbot and Sam Altman decided not to, so at least there's that, I suppose. Maybe that's a little bit of a difference. I thought this was interesting.

00:40:35:11 - 00:41:01:26
Unknown
Two points of view, same story. You put these in as contrasting stories, anchored around a New York Times-commissioned study run by AI startup Omi to test the accuracy of Google's AI Overviews using the SimpleQA benchmark. It was first done with Gemini 2.5, because that was the most current model at the time, and then eventually Gemini 3 hit.

00:41:01:29 - 00:41:29:14
Unknown
And so with 2.5, accuracy of OpenAI — or, sorry, of AI Overviews — hit around 85%. With Gemini 3, that improved to 91%. That was across 4,326 queries. So those are all the numbers that make up this story. So you think, 91% — on one hand, you've got plenty of articles leading with headlines that allude to the fact that those AI Overviews are correct more than nine out of ten times.

00:41:29:14 - 00:41:54:15
Unknown
That sounds great. Hey, I went to school — that's an A. I'll take that. On the other hand, other articles, like the one at Ars Technica written by my friend Ryan Whitwam, point out that that one out of ten does a ton of work when you're talking about the scale at which these systems operate and at which they're delivering their results to users.

00:41:54:15 - 00:42:22:02
Unknown
And Ars Technica says testing suggests Google's AI Overviews tell millions of lies per hour — or hundreds of thousands of inaccuracies, let's say, going out all the time, even when you're talking about a nine-out-of-ten success rate. And so it's interesting framing. It's an object lesson in framing, right? It's an object lesson in saying, well, if people just had the facts — these are the same facts here, the same exact facts.
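Both framings come down to the same arithmetic — 91% accuracy reads as an A grade, or as a huge absolute error count, depending on the denominator. A rough back-of-envelope sketch; the daily query volume below is purely an illustrative assumption, not a figure from the study:

```python
# Same fact, two framings: 91% accuracy vs. absolute wrong answers at scale.
# The query volume is an ASSUMED, illustrative figure, not one reported
# by the NYT/Omi study.

accuracy = 0.91                  # Gemini 3-era figure from the study
error_rate = 1 - accuracy        # roughly 9% of overviews miss

queries_per_day = 1_000_000_000  # assumption: 1B overview impressions/day
wrong_per_day = queries_per_day * error_rate
wrong_per_hour = wrong_per_day / 24

print(f"wrong per day:  {wrong_per_day:,.0f}")   # ~90 million
print(f"wrong per hour: {wrong_per_hour:,.0f}")  # ~3.75 million
```

Under that assumed volume, "an A grade" and "millions of inaccuracies per hour" are both honest descriptions of the same number; only the framing changes.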

00:42:22:04 - 00:42:39:29
Unknown
This is the power of headlines, of the condensation that goes into a headline, and the power of the editor to set the tone of how this is to be interpreted. Yeah. And it's not as simple as saying, well, we've got the facts and everything will be okay. Yeah. These are two radically different ways to look at the same facts.

00:42:40:02 - 00:43:00:05
Unknown
I wonder how the AI Overview would interpret these things. Would the AI Overview interpret it as glass half full or half empty? You have to be curious. Maybe it depends on what the majority of the sources it's pulling from say. Although many of the sources in the study appeared to come from Facebook and Reddit.

00:43:00:05 - 00:43:23:19
Unknown
So take from that what you will. So I just searched for "Gemini number of lies." The AI Overview: AI Overviews powered by Gemini are accurate approximately 90% of the time, according to reports and analyses from April 2026. This means that about 10% of AI responses can be incorrect. This can result in millions of inaccurate answers

00:43:23:19 - 00:43:46:14
Unknown
daily. Source: Ars Technica. Okay, good on you, Goog. All right. But it chose to lead with the nine out of ten, right? Because Ars Technica is a bigger site than — what's that one called, the other one? What is this? Oh, this is The Decoder. The Decoder, right? Yeah. Well, but Ars Technica is the one that said that...

00:43:46:19 - 00:44:11:26
Unknown
Yeah. So Ars put the headline on the negative, on the glass less full, and the Overview shows the glass more full. Yeah. Interesting. I mean, it just goes to show, right? There are still plenty of ways in which — even, like you said, unbiased reporting, let's say, doesn't entirely exist. Right? Like there is no such thing. There's always human decision; there's always perspective.

00:44:11:28 - 00:44:38:16
Unknown
Google, by the way, says that the reporting methodology is flawed. They say that SimpleQA contains errors, it relies on trick questions, it doesn't reflect real-world queries, and it's designed for offline evaluation, not online. And I'm not saying the truth is relative, but interpretations of errors and truth can vary. Witness these headlines.

00:44:38:18 - 00:45:00:16
Unknown
Yeah. And so Google's still saying that the AI Overviews don't impact site visits. Right. Yeah. That's a regular fight that goes on, and I'm not sure what's happening with this one. It's hard to know. Hard to know, but easy to assume.

00:45:00:18 - 00:45:17:19
Unknown
Let's look at part of the problem here — it's the same with the whole discussion about links as a whole. Yeah. Publishers presumed that every time a link was seen, it would be clicked on. And if somebody didn't click, it was a link lost, it was traffic lost. Well, that's not necessarily the case at all.

00:45:17:21 - 00:45:40:13
Unknown
And so how do you know what would have been? You're trying to prove a negative, kind of — the clicks that didn't happen may not have happened anyway. There's no way to know. No way to know, it's true. You put in a really interesting kind of look — or let's say a look, and then a counter-look by Gary Marcus.

00:45:40:15 - 00:46:09:14
Unknown
The New York Times had an article focused on Matthew Gallagher — who's the gentleman pictured here, looking at you so sprightly — who, along with his brother and one other employee, vibe-coded and built MedVi, which is apparently a telehealth business that markets and sells GLP-1 weight loss drugs. They'll probably be getting into men's sexual health medication in the near future, which kind of makes sense for the direction of all this stuff, in my opinion.

00:46:09:14 - 00:46:32:24
Unknown
He primarily used a collection of AI tools and $20,000 in startup funds to eventually hit $401 million in revenue in its first full year. And this ties into — you know, Sam Altman from OpenAI has been predicting for a while now that solo-founder companies (this isn't even really a solo founder; there are a couple of employees here, but still, it's close)

00:46:32:27 - 00:46:54:16
Unknown
could soon become billion-dollar companies thanks to the use of AI. And this apparently seems to be, at least according to the New York Times, one example of that, right? The AI handled the code, the marketing, the customer service, the analytics. He even had an AI clone of his voice. And why? Because he didn't have enough time to schedule all of his personal appointments.

00:46:54:18 - 00:47:14:26
Unknown
He wanted to spend his time working, and so he had the AI version of his voice do all that personal stuff for him. That's the New York Times coverage. So when I read this at the time, I had a bit of a fit on the socials, because it's a rather sleazy enterprise, right? It's just trying to sell GLP-1s.

00:47:14:28 - 00:47:39:02
Unknown
There are other ways to do it. It has its, you know, network of, air quotes, "doctors," and this stuff has side effects and issues with people's health, for sure. And to just create this company to jump on the craze — I mean, "crazy good work." Yeah, it's crazy. It's a total...

00:47:39:02 - 00:48:01:10
Unknown
It preys on people's sense of their own bodies and weight, and so on and so forth. But here's the Times lionizing this, saying, oh look, you really can start a billion-dollar company with just a few tools — isn't this amazing? Then God bless Gary Marcus, because he walks around all day with a bunch of pins to pop bubbles.

00:48:01:12 - 00:48:25:03
Unknown
Yeah. So he quoted a lot of other things. He quoted Akash Gupta on the socials, saying what the Times didn't say is that MedVi has received FDA warning letters for misbranding violations. The company holds no proprietary technology, no licensed physician network, no pharmacy infrastructure. It outsources every regulated function to other companies while keeping the customer relationship,

00:48:25:03 - 00:48:51:08
Unknown
checkout flow, and ad spend. Hims, in contrast, earned $2.4 billion in revenue last year with 2,400 employees and a 5.5% net margin. MedVi claims a 16.2% net margin with two people. Okay, fine. So you can cut corners galore with AI. Is that really something to celebrate the way the Times did? Yeah.
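For what it's worth, the figures quoted above can be run through directly. A quick sketch of what those margins imply in absolute dollars — headcounts as read on air (the startup's is quoted as two people, though three are mentioned earlier in the segment):

```python
# Net income implied by the revenue and margin figures quoted above.
# All numbers are as quoted on air, not independently verified.

companies = {
    "Hims":        {"revenue": 2_400_000_000, "margin": 0.055, "staff": 2400},
    "the startup": {"revenue":   401_000_000, "margin": 0.162, "staff": 2},
}

for name, c in companies.items():
    profit = c["revenue"] * c["margin"]
    print(f"{name}: net income ${profit / 1e6:,.1f}M, "
          f"${profit / c['staff']:,.0f} profit per employee")
```

The per-employee contrast is the "cut corners galore" point: roughly $55,000 of profit per Hims employee versus tens of millions per person at the vibe-coded startup, which is exactly what outsourcing every regulated function buys you.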

00:48:51:10 - 00:49:12:17
Unknown
Rob Freund, quoted by Gary, says that it was sued in a class-action suit last month for violating California's anti-spam law. Then he has other dissections of it, and so on and so forth. So it was a weird handling by the Times to just say, okay, it's arrived.

00:49:12:17 - 00:49:35:03
Unknown
Here's the billion-dollar company. Yeah. Why would they leave that out? That all seems very material and important. You know, there's also Sheel Mohnot, who shows how they made Facebook accounts for more than 800 fake doctors — to advertise on Facebook — and says, "I had Claude verify none of them are actual doctors."

00:49:35:06 - 00:49:59:28
Unknown
That's super sleazy. So, like I said, Marcus ends here: all in all, glorifying MedVi was not the New York Times' finest hour, and hardly the poster child AI boosters should be hoping for. Instead, as the YouTube video author VoidZilla notes, if anything, MedVi is a warning sign for how AI can be abused — and at scale. I'm team Marcus here.

00:50:00:00 - 00:50:32:04
Unknown
Yeah, totally. I mean, it's a warning about the ability of AI to amplify. Yep. Like, was this person able to use these AI tools and vibe-code his way to a lot of money? Yes. But the realities and the details behind it are pretty darn sleazy. There is an interesting documentary, by the way, that's only tangentially related to this, but it kind of reminded me of it, on Netflix, about the manosphere.

00:50:32:07 - 00:50:58:29
Unknown
And if you haven't seen that, you need to see it. And the reason that I bring it up is because a large part of it is about this world of male influencers that are going, like, all in on hypermasculinity and all of these things. And what are they doing? They're appealing to the younger boys and men who might feel misplaced, or out of place, or out of step, and want to feel more masculine or whatever.

00:50:58:29 - 00:51:22:19
Unknown
So they're tying into this, I guess, cultural need to explore their masculinity. But they're also driving all of these supplements and this whole other avenue. And in that documentary, you know, they're interviewing these people who have very strong, visceral viewpoints about this stuff. And the guys basically admit, like, we don't care if it's safe.

00:51:22:24 - 00:51:42:25
Unknown
We're here to make money. So we're going to do what we can do and tap into that mindset, to earn as much money as we can while we can. And something like this — I feel the pang of that in reading through it, where I'm just like, these tools enable you to do this, but that doesn't mean you should do it.

00:51:42:25 - 00:52:02:24
Unknown
Like, that's just kind of sleazy. Yeah, slimy. So, interesting stuff there. Thanks for the read — very, very interesting read. And definitely check out Gary Marcus's retort. Yes, do check it out. Real quick: if you're enjoying this show, please, you know, leave us a review on Apple Podcasts. We've got a bunch of them.

00:52:02:24 - 00:52:16:25
Unknown
I so appreciate that. Thank you for doing that. And if you find the videos on YouTube, throw some comments in there. Interact — that helps get visibility. We really appreciate you all being here with us.

00:52:16:27 - 00:52:46:29
Unknown
All right. We aren't done talking about Anthropic. Broadcom has announced it will manufacture future Google AI chips and scale Anthropic's cloud infrastructure to the tune of around 3.6 gigawatts of Google TPU-based compute. So this is a big deal for Broadcom, for Anthropic. And I'd say, you know, Google's transition from keeping its TPUs to itself in-house to opening them up for startups like Anthropic —

00:52:47:02 - 00:53:06:13
Unknown
I mean, this has just got to be great for Google's business. What a smart idea they had in opening up their TPUs. I couldn't agree more. I think Nvidia is an amazing company and its chips are used like crazy, but we need competition. Absolutely. And let's not forget that Google started all this with its transformers and its paper,

00:53:06:16 - 00:53:25:10
Unknown
"Attention Is All You Need." And so it should be at the center of this. And I think it's good for all these parties. Yeah, indeed. So there you go. Good on you, Google; good on you, Broadcom and Anthropic. I'm sure we'll see more of that coming from the TPU side of things. Amazon has launched S3 Files for AWS.

00:53:25:15 - 00:54:02:29
Unknown
So this is a managed file-system layer over the top of S3 that lets apps and AI agents use standard file operations against S3 buckets. Apparently there's been a gap for quite a while between S3 object storage and traditional file systems, so this kind of aims to address that. Before, it basically meant that teams had to copy data into separate file services in order to do things like machine learning training, or working through agentic workflows.
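To make that gap concrete: classic object storage exposes whole-object get/put, while a file-system layer adds seek-and-read-in-place semantics on the same data. This toy sketch models only the semantics — it is not the actual S3 or S3 Files API, and the class names are hypothetical:

```python
# Toy model of the object-storage vs. file-system gap the hosts describe.
# NOT the real AWS API; just the two access patterns side by side.
import io

class ObjectStore:
    """Whole-object semantics: read or write the entire blob."""
    def __init__(self):
        self._blobs = {}
    def put(self, key, data: bytes):
        self._blobs[key] = data
    def get(self, key) -> bytes:
        return self._blobs[key]   # no seek, no partial read

store = ObjectStore()
store.put("training/shard-0001.bin", b"A" * 1024 + b"B" * 1024)

# Object-style access: fetch the whole 2 KB blob even for 10 bytes.
whole = store.get("training/shard-0001.bin")
middle_via_object = whole[1020:1030]

# File-style access (what a file layer adds): seek, then read in place.
f = io.BytesIO(store.get("training/shard-0001.bin"))
f.seek(1020)
middle_via_file = f.read(10)

assert middle_via_object == middle_via_file == b"AAAABBBBBB"
```

For a training job reading small windows out of multi-gigabyte shards, the second pattern is the whole point — you stop copying data into a separate file service just to get seekable reads.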

00:54:03:01 - 00:54:28:03
Unknown
This addresses that. So one can imagine this being really powerful for those in the AWS universe. If you're building agents, there you go — another option there. This one is near and dear to my heart: Android XR. Google is rolling out a few new features to Android XR — we were talking about this on the Android Faithful podcast last night.

00:54:28:06 - 00:54:48:25
Unknown
No clue how many people actually have the Galaxy XR headset — probably not many — but I still applaud Google for continuing to build out the Android XR ecosystem. I do think that eventually, you know, it'll become a bigger deal. But auto-spatialization was one of those features that I caught very early on, when I first tried this headset, like, a year and a half ago.

00:54:48:27 - 00:55:06:00
Unknown
And they were like, oh, and by the way, somewhere in the near future you'll be able to spatialize — you know, turn anything you see, like a YouTube video or a movie or whatever, immediately into 3D. And that was the first time that I had experienced that technology in a headset where it was done in real time.

00:55:06:00 - 00:55:26:23
Unknown
I think it was like 24 milliseconds of latency, so it was really fast. It did it in real time. And when I tested it, it was with that movie F1 — I think it's F1 on Apple Movies, with Brad Pitt. I never saw the film in full, but that was the clip that I chose on YouTube to watch with that auto-spatialization.

00:55:26:23 - 00:55:56:16
Unknown
And I could almost not tell that it was being done in the moment, that it wasn't done intentionally in advance. There were a few tells, but it was pretty convincing. So that's coming, everybody — apparently, you know, for all ten people who own the headsets. Totally. But then at the same time, how many years did TV makers tell us we wanted 3D on our TVs, and then just kind of gave up because no one actually really cared?

00:55:56:19 - 00:56:14:09
Unknown
But that's pretty nifty. There are also a couple of other things. Yeah — there was a time when I wondered whether HD was going to take off on TV. So, yeah, one can be wrong. Oh, for sure; HD has definitely been a huge, huge success. One of the other things that I think is interesting — well, it's not a feature —

00:56:14:09 - 00:56:48:22
Unknown
they just announced that there are more immersive apps being released, made specifically for the headset. And one of the things that they showed off is the Paris Saint-Germain immersive app for watching soccer games: an immersive view where it has, like, stats on both sides of the screen alongside the game. That's neat, that's all fine and good, but it also has this top-down digital representation of the field, as if you're sitting in the bleachers looking down onto the field, watching all the players moving around in a digital representation.

00:56:48:24 - 00:57:05:00
Unknown
I think that's pretty nifty. Like, I would love to know the technology that drives that and how it's all put together, because that's pretty cool. I don't know how many live sports games you've gone to at a stadium or whatever, but I've been to a handful. And when I'm in the stadium, I was always like, this is awesome to get the full view —

00:57:05:00 - 00:57:31:07
Unknown
but I do wish that I was closer to the action. This kind of gets you both, you know? So that's neat, I suppose, if you like the XR stuff. And then Netflix researchers released VOID — actually released it to Hugging Face, so this is something that's open for anybody to build on. It's a video model that lets anyone delete objects from footage and replace them with generated content to finish the scene.

00:57:31:07 - 00:57:49:19
Unknown
And so the idea here, as they were posing it, is, you know, instead of having to reshoot a scene — you shoot this whole scene and then you realize after the fact that something needs to be changed, and you don't have the budget to go back to it. Well, you can use VOID, which, by the way, stands for Video Object and Interaction Deletion.

00:57:49:21 - 00:58:10:07
Unknown
You could use that to delete the object at the point at which you need to redo it, and then inpaint the remaining content within that scene to kind of recast it. So, one simple example: if somebody jumps into a pool, the pool ripples, of course — it splashes. You eliminate that person, and the pool remains placid.
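VOID generates new content to fill the hole, but the pool example has a classic, much simpler baseline for static-camera shots: a per-pixel temporal median, where across enough frames the background "outvotes" any object that's just passing through. A minimal NumPy sketch on synthetic frames, not real footage:

```python
# Classic object removal via temporal median: at each pixel, take the
# median value across all frames. An object that only briefly covers a
# pixel is outvoted by the background.
import numpy as np

# Synthetic "footage": 20 frames of a flat gray pool (value 100)
# with a bright "swimmer" (value 255) moving across it.
frames = np.full((20, 32, 32), 100, dtype=np.uint8)
for t in range(20):
    frames[t, 10:14, t:t + 4] = 255   # object in a different spot each frame

# Any pixel is covered for at most 4 of the 20 frames, so the
# per-pixel median over time recovers the clean background.
background = np.median(frames, axis=0).astype(np.uint8)

assert (background == 100).all()      # the pool is placid again
```

The generative twist is what happens when the background was never observed at all — like the un-rippled water behind a splash, or the wall behind a stationary actor — which is where a model like VOID has to invent plausible content the median can't recover.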

00:58:10:09 - 00:58:46:01
Unknown
Okay, so that's an example. But I think this is just the first toe in the door of Netflix enabling AI creation for all of its creators. And even though the creative community will still have fits about it, there's no stopping this juggernaut. No, I don't think so either. And they've even hinted at this in some of their policies around films funded by the streaming service itself: they're allowing certain generative AI uses on their films, and it kind of seems like that door's just going to get wider and wider open.

00:58:46:04 - 00:59:10:12
Unknown
Not just for Netflix, either — across all of Hollywood and film. So prepare yourselves for the VOID. Jeff, thank you so much for joining me once again. Each and every week we do this show, and it wouldn't be the same without you. So thank you, Jeff, for being with me on this. Jeffjarvis.com for all of Jeff's work, including the upcoming book. Hot tip:

00:59:10:15 - 00:59:33:17
Unknown
I love that we get little sneak peeks into different topics and stuff from the book every week. But you can go to jeffjarvis.com and do your preorder. You can also find The Gutenberg Parenthesis; Magazine, which we talked about earlier; The Web We Weave — so much of Jeff's writing can be found there, also on Medium — and Intelligences, AI and humanity, which is a new book series coming from Bloomsbury.

00:59:33:19 - 00:59:55:27
Unknown
That's exciting. Yep. And you? Yep, and me. Well, you can find podtuneup.com for podcast consulting. If you've got questions, hey, you know, reach out to me. Maybe I can help you with the answer; we can figure something out together. So there you go: podtuneup.com. And aiinside.show for all the information that you need about this particular show.

00:59:55:27 - 01:00:24:21
Unknown
All the ways to subscribe and follow, episodes, interviews — it's all listed there. You can find ways to subscribe, links to the Patreon: patreon.com/aiinsideshow. This is for those of you who really want to support us on a deeper level. You get ad-free shows, access to the Discord community. I've been learning a lot through the AI channel on the Discord in recent weeks, especially as I've been diving deeper into Claude Cowork. Dr. Dew has been a huge help.

01:00:24:21 - 01:00:46:01
Unknown
Mike, Matt, et al. are also in there, and we've been having some great conversations. But you can do that as a patron. And if you want to go real big, you can be an executive producer of the show, like Dr. Dew, Jeffrey Martini, Radio Asheville 103.7, Saint James Porter, Derek, Jason Cipher, Jason Brady, Anthony Downs, Mark Archer, and Carsten Szymanski.

01:00:46:01 - 01:01:08:04
Unknown
Thank you so much for supporting us on the deep, deep level that you do. It really does not go... does not go unappreciated. It does not go... appreciated. No, it goes appreciated! It does not go unappreciated, for sure. Thank you very much. Thank you. I got twisted there. Thank you, Jeff. So much fun hanging out with you.

01:01:08:04 - 01:01:12:26
Unknown
Thank you, everybody, for watching and listening. We'll see you next time on another episode of AI Inside.