Is Google Handing AI Startups the Keys?
September 03, 2025
01:20:52


This week, Jeff Jarvis and I look at Google’s antitrust ruling and how it shakes up the AI landscape, wonder if Tesla’s $25T Optimus robot forecast is just a diversion, debate if Netflix’s algorithm-driven movies are killing creativity, and go deep on OpenAI’s new ChatGPT mental health "safeguards."

Enjoying the AI Inside podcast? Please rate us ⭐⭐⭐⭐⭐ in your podcatcher of choice!

Note: Time codes subject to change depending on dynamic ad insertion by the distributor.

CHAPTERS:

0:00:00 - Podcast Begins

0:02:05 - Pixel 10 Pro, Pro Res Zoom, and the Trinity Alps experiment

0:05:23 - ⁠Google stock jumps 8% after search giant avoids worst-case penalties in antitrust case⁠

0:10:18 - ⁠Read our statement on today’s decision in the case involving Google Search⁠

0:18:17 - ⁠The Fever Dream of Imminent ‘Superintelligence’ Is Finally Breaking⁠

0:29:47 - ⁠Related: Zuckerberg’s AI hires disrupt Meta with swift exits and threats to leave⁠

0:36:15 - ⁠OpenAI to safeguard ChatGPT for teens and people in crisis⁠

0:39:36 - My mom and Dr. DeepSeek

0:43:13 - 'Sliding into an abyss': experts warn over rising use of AI for mental health support

0:45:32 - ⁠Bland, easy to follow, for fans of everything: what has the Netflix algorithm done to our films?⁠

0:50:12 - ⁠New Yorker: A.I. Is Coming for Culture⁠

0:57:51 - ⁠Musk looks past Tesla sales slump, says 80% of value will come from Optimus⁠

1:00:05 - ⁠Deep Hype in Artificial General Intelligence: Uncertainty, Sociotechnical Fictions and the Governance of AI Futures⁠

1:00:54 - ⁠Rethinking How AI Embeds and Adapts to Human Values: Challenges and Opportunities⁠

1:01:44 - ⁠BirdRecorder’s AI on Sky: Safeguarding birds of prey by detection and classification of tiny objects around wind turbines⁠

1:03:37 - ⁠Amazon’s Lens Live AI shops for anything you can see⁠

1:05:21 - ⁠Doctors develop AI stethoscope that can detect major heart conditions in 15 seconds⁠

1:08:33 - ⁠Introducing gpt-realtime and Realtime API updates for production voice agents⁠

1:10:11 - ⁠WordPress shows off Telex, its experimental AI development tool⁠

1:11:50 - ⁠Anthropic launches a Claude AI agent that lives in Chrome⁠

1:14:01 - ⁠Microsoft releases its own model (OpenAI independence)

On this episode, Jeff Jarvis and I break down Google's new legal reality in the face of a major ruling in its search monopoly case and what it actually means for competition in the AI space. Turns out it actually means a lot. Plus new ChatGPT safeguards around mental distress, Netflix's data-driven movies and Tesla's robot dreams. Are they just a distraction? Well, we talk about that on the AI Inside podcast coming up next. Hello everybody and welcome to another episode of AI Inside, the show where we take a look at the AI that is layered throughout so much of the world of technology, so layered in fact, that it is layered throughout my garage, which is where I'm podcasting from today. It's a converted garage. There's a lot of furniture in here to soak up the reflections and everything, so hopefully it sounds okay. Also, I've got the Heil PR-40 mic, so it does a pretty great job. It's a fantastic mic. If you're thinking about getting into podcasting, this is the mic to get. I'm one of your hosts, Jason Howell. Joining me, not from his garage, is Jeff Jarvis. All right, 'fess up, Jason. Did you do something wrong or did you get kicked out of the house? What's going on? This is my new home. My new home is the garage. You're the dog. Yes, well, that's true. My dog is like four feet over there. No, I did nothing wrong. It's just we have a bathroom remodel happening. I think I mentioned it maybe last week or a few weeks ago or something, and today they're installing a door and they're doing other noisy things, and so I've relegated myself to the garage. It's discombobulating. Yeah, you know, it throws everything for a loop. And then my mom is coming to town, so we've got like all this stuff happening all at once. I really don't think that the remodel's gonna be done before my mom gets here, which is a total bummer because she's gonna come into a house of chaos, but whatever, she'll have the garage. Moms deal. Yeah, she'll be fine. We'll do lots of fun things with her anyway. So good to see you, sir. I'm not... I don't know that I'm prepared to talk about it yet on the show, but I was just backpacking in the Trinity Alps for like five days. We did lots of hiking and everything, but I did bring with me the Pixel 10 that we talked about last week, the 10 Pro, and I used the 100x Pro Res Zoom a bunch. That's the feature that uses AI to kind of sharpen really crunchy digital zoom images. And that was like the ideal place for it, 'cause the model is tuned for landscapes and kind of natural environments. And I got some really cool output out of it. And so when I'm ready, I'm not ready today, but I think I might write an article about it for ZDNet and then we could talk about it and I could show off some examples. I'm curious too, if you shoot something that far away and then you go to that point and see whether it was legitimate. Well, yeah. And I think that's the bigger question. Like the whole weekend, the guys that I was with, they were like, ooh, do a hundred X of that, do a hundred X of that. They thought it was the coolest thing ever. Right. And the whole time I was just asking myself, you know, the easy go-to thing is, yeah, but it's not reality because AI is sharpening it. And, you know, how do you know that that looks exactly like that when you get up close? And my question was kind of like, yeah, but does that really matter? Like, I'm not going to hike to the top of that peak for that one tree just to check it out.
Oh, come on, Jason, do your just think it looks cool from here and I want to take a picture of it and good enough, you know? It's an interesting question about like what reality is and doesn't matter. We've talked about this before, the age of close enough for uh AI. Things are approximate. They're not exact. Yeah. Yeah. And it is fiction. It is guessing. It's filling in information based on what the model tells it to. And it didn't get it right 100 % of the time. But most of the time when I was taking a picture of something that was really, really far away, I was pretty satisfied with the picture it gave me. Am I gonna frame it on my wall? No, probably not, but it satisfied the reason that I took the picture in the first place. So. It'd be interesting to try to fool it sometimes and see if you could have it fill it. Oh, you thought that's what it was, you silly Google. Yeah, well, I did have one really weird, see now I feel like I should have just prepped this for today, Sean. I did have one really. You don't have your usual command center. It's a little, yes, I don't have my, my control center for sure. I'm on a little laptop, but I did have one where I took a picture of the moon. And of course it knows exactly how to interpret the moon, right? Based on the phase and everything. So it looks amazing when it does the AI thing on it. But one of the times that it did it, it totally got it wrong. And it had like all of these like weird like lines and scribbles through it. Like it got confused in some way. and the output is totally messed up. And that was like the only time that it really messed up though. So. Cool. Yeah. Anyways, I can show off examples another time when I have my control center, when I'm at the helm of the spaceship that runs this show. uh But it's good to see you got lots of stuff to talk about and boy, do we have a top story, right? Like Google, uh got a pretty big ruling, Judge Amit Mehta, ruled against some of the most, I'd say, consequences ah that the Department of Justice had floated as solutions on the other side of the search monopoly case Google lost one year ago. So it's been about a year since that loss. Now we're finally seeing what these penalties actually are. There was the possibility that Google was going to Maybe lose the Chrome browser though. I think you and I both agreed that's probably not gonna happen. We don't and it is not going to happen apparently um Also, Google can keep its Android division Google can also keep its search deal with Apple and others and uh Mozilla very importantly and Mozilla right, right? um But Google cannot make Exclusivity a part of those arrangements. Is that right? So yeah, I'm confused. I'm a little confused about this. I think the exclusive thing that bothered the judge were phone deals. Um, in the case of Apple and Mozilla, I was, I was on chat with our friend Leo Laporte earlier today about this cause I was very confused and he said, well, uh, it's never been exclusive with Apple. You can always go to Apple and choose your search. It was just a preeminent one. Um, and if it were not premier, let's put it that way. that it's worth less to Google. So Apple doesn't want to get less than $20 billion. So it changes nothing, I think. think the judge was very smart about it. I was very surprised and actually happy about the ruling. I think it was quite smart in two ways. One, he realized that to take away those fees to Mozilla and Apple would have harmed those companies and in turn their customers. Right. 
It isn't just about Google. It isn't just about punishing Google at that point. Cause I don't think either of them was saying, yeah, you should totally make Google not pay us money. no. So Google stock is now up uh as we're talking 8.94%. And Apple stock is up 3.54%. And the rest of the NASDAQ is up just a squanch, not much. So those were two direct reactions to this. The other thing, uh that is so important that is relevant to the show, of course, is that it was about AI. Yeah. That the judge recognized that AI is presenting competition to Google. And even from a year ago, the notion that Google had a monopoly in this sector is wrong because AI presents lots of competition to uh search, including within Google, but also certainly from without. The other point of the decision that I haven't got my head around yet. Jason, I wonder whether you saw anything about this part is that Google has to share data with others. But I'm not sure what that is. And the interesting to me is, and I want to go back to our friend Rich Greta at Common Crawl. I wonder whether that isn't so much helping other search competitors, but does Google sharing some measure of its crawl help uh AI companies with their training data? I think I think 100 % it does. Actually, Dr. Do in our discord, I, know, big thanks to him for posing some of the questions around exactly this that kind of got my mind turning, you know, and he's, basically saying like, this is an open gift to companies like perplexity, to open AI to AI startups. I mean, I don't know that we know the depth or the dimension of how much of Google's data is required to be shared around its search product. And that's, that's the thing that I think is still confusing. Yeah. And I think it's like, here's everything. No, no, no, no, no. No matter what it's Google's secrets that, it holds very close to its chest. And it's been the thing that's given its search product so much value for so long. And now unless they appeal if they can and it's overturned, they would be compelled to share a lot of those secrets. And yeah, I have to imagine that's very good news for companies like Perplexity, OpenAI, all of them that could potentially benefit from it. Google has said that they're studying this. And what they've mentioned in this context is privacy. So I'm not sure whether the presumption was that they would get people's But I think it's more about what they crawl. uh that that crawling has knowing what to crawl itself. has benefit because Google has gone through that. So we'll see where that goes. think that's, that's, um, it'd be really interesting to see. Now the question you just raised, will Google appeal this? think there's one school of thought that says, no, you kind of won. Just shot, right? Just leave well enough alone. But on the other hand, they do have a monopoly judgment against them, which may have impact in other countries. Um, and there are, you know, there are punishments here nonetheless that have I think minor impact on the business. I'm not sure. I can imagine that when the decision came out, there were a lot of pop champagne corks in Mountain View. This was a, this was spectacular for Google. It was really a win and a major loss for the prosecutors in their efforts to, uh to smash Google. then the discussion of a lot worse. Oh, really was selling Chrome was just idiotic. Yeah. The, um And I said that I thought it was a fine decision and I was happy about it. And I thought people would come after me. Oh, Google, Google. 
I didn't even get much of that. think it just makes sense. Judgement is right. But the discussion among Google lawyers has to be fascinating now because it's a strategic decision as to whether they're better off or not to go ahead. Uh, let the sleeping dog lie. They do have some other cases upon them. Uh, and, uh, they have an ad case, which I think we're going to, I think they're much more vulnerable. and all Europe's always on their tail. So I'm not sure what their strategy is gonna be here, but again, this was a big win for Google for having lost. Yeah. mean, I guess I'd be kind of surprised if Google didn't appeal. Cause I just always expect that is like, Oh, well there's, you know, something more that we can get out of this, but you know, do you want to really open that Pandora's box again? I mean, if you do open that Pandora's box, there's always the risk that the next judgment is, well, actually, you're right, we should force you to sell off Chrome, right? Like, again, I don't think that would happen. And I guess that's part of Google's decision making is like, oh, that's pretty unlikely that that would happen. Do we stand to lose more than we've already lost if we choose to open it up again through appeal? the market's happy. Yeah, that's what they care about. Okay. And so, okay. So Mountain View, they're popping champagne bottles. Do you think companies like perplexity, open AI, other AI startups, are they also popping champagne bottles? Well, perpore perplexity doesn't get to buy Chrome. Yeah. was never going to happen. I don't think they really truly believe that they would at all. Yeah. I think so because I think, I think it recognizes that they are legitimate competition. Yeah. Right. That's true. giant is challenged by them. And that includes Microsoft. includes, you know, which is always trying to poke Google in the eye uh and open AI, certainly in perplexity to an extent and uh an anthropic and company. I think that, you know, the thing that I hope for so much in the AI world is that it's not controlled by those companies we just listed that open source and small models and new paradigms come in to take away from this scale for scale sake that exists now in the AI world, but that's what it is. So it's one giant versus others. Tokyo Bay just got more crowded. Tokyo Bay, say that again. Tokyo Bay, that's Godzilla versus Mothra. Oh. Fighting in Tokyo Bay, sorry, was a reference. I was. None of those companies were Japanese, Jeff, what are you talking about? I you were referencing something specific on this that I just didn't read up on. I'm like, do I pretend like I know what he's talking about or? As I usually have to when he goes off on these odd things. Mothra. Um, yeah, I mean, they have to, this has to be some sort of recognition of the fact that the dynamics of all of this have changed for the first time, probably since Google really got into and became so dominant in search to begin with, you know, no, no company stays on top forever. And you know, Google's been, been so incredibly dominant for so long in these few key areas, advertising search. you know, really wants to be that dominant in AI because that's the new wave, but they got a lot of competition and this is a, this is an acknowledgement that Google isn't quite in that dominant position. It's kind of a catch 22 or a double edged sword because on one hand it's good for Google because it went this way and they didn't have, you know, to suffer dramatic consequences, but through it. 
they recognize that, boy, we aren't as strong as we used to be. know? So let me turn your question around. Who's, besides the prosecutors and government, both administrations, who's unhappy about this decision? Yeah, that's a good question. I guess anyone who was hoping to acquire Chrome would be unhappy, whether they truly believed it or not. um Yeah, I don't know. That's a really good question. There's the folks who just generally are Google haters. Yeah. Yes. Putting companies that think that Google has an unfair advantage, blah, blah, blah, blah. They would have liked to have seen Google cut down to size. But I think there was any specific thing they would have gotten out of any of decisions. No. would have Speaking of Chrome, it wasn't going benefit anybody. uh And now, by the way, the other thing is Google is now fully free to integrate AI into Chrome. I think they had to hold back a little bit before, but now all bets are off and they can change the nature of search and the nature of uh browsers and the nature of their OS. OSs. Yeah. OSI. All right. Uh, curious to see what that all leads to. Yeah. Very, very interesting. Um, yeah. Any other thoughts? No, I think it was a big news day and I think it's going to get chewed around a lot, I, uh, I'm interested that I haven't really seen any negative. I saw one column in the New York times that said that Google got clobbered. only person who's thinking that then they also in the New York times in their news columns had a, uh, explainer on it and their graphic for it. was Mr. Moneybags from Monopoly carrying the Google logo. Which struck me as very much an editorial comment on this. think the New York Times and certain journalists are unhappy because they like to demonize Google. And by the way, I wondered whether using Mr. Moneybags was a fair use, but we'll put that off to an end. Yeah, good question. I don't know, how long has Monopoly been around? But I looked it up and you can't copyright the game. but the characters are copyrightable. So take that. Hasbro, go after the times. It'll be funny to watch. Well, we want to thank our patrons for helping us and supporting us in everything that we do. Patreon.com/AIinsideshow. Gonna throw out a couple of names today. Burke Norton. ah Who is awesome? Thank you a recent patron good to have you on and then also Steve Remington who upgraded his membership There was something about what we're doing that made Steve go. You know what? Thank you Steve love to see that so I think each week I'm probably gonna thank more than just one person because there's a lot of you and uh We'll just kind of cycle through so you'll get your name read. There you go patreon.com/AIinsideshow Thank you so much for supporting what Jeff and I are doing here We appreciate you. Gonna take a quick break and then we got a bunch of really interesting conversations to have on the other side of it. So don't go anywhere. All right, Gary Marcus wrote an opinion piece. You linked me to it. So thank you for that. It's very interesting. And the New York Times where he says the fever dream of imminent super intelligence is finally breaking, which is really in line with what Gary's been saying for quite a while is y'all are crazy. Super intelligence, AGI not happening anytime soon. This idea of scaling up these models, you know, just throwing tons of money and tons of compute in order to continue to see massive amounts of gains. His argument is that we're kind of reaching the end of those gains. 
And that GPT-5, as one example, was set to be a milestone. It ended up disappointing experts. It didn't evolve beyond its old issues. It still hallucinates, although maybe less so. Still provides unreliable answers. All these kind of long-term, in the scale of LLM technology over the last couple of years, issues that continue to crop up, even though, you know, the companies are so bullish on the drive to superintelligence, the drive to AGI, the idea that all of this compute can actually make these models exponentially better. And Gary's kind of pointing once again to say, well, look where we are right now. Things are not exponentially better, maybe, and things seem to be changing in the opposite direction. So I suspect that Gary asked the Times to carry as a headline over his column, yeah, yeah, yeah, yeah, because that's kind of been his attitude toward all of this. Yes. And I read Karen Hao's book, Empire of AI, which is very good. And it's very clear how much they cannot stand Gary Marcus inside OpenAI. They are his punching bag and he is their punching bag and it goes back and forth. So I thought this was actually, for Gary... and I don't say this mocking Gary, but Gary loves to be very blunt in social media and in his newsletter and such. And he was very restrained here. I thought he had a few digs at OpenAI, but it was very restrained and very sensible and very clearly explained, where he's been saying for some time that scale alone won't get us there, as you explained, Jason. And I think that makes sense. And I also think it makes sense just because it's expensive, and it's expensive for the environment, it's expensive otherwise. And if we just think more, it's a very American way to think, just more, bigger. More power. What was the sitcom? Home Improvement. Is that, tell me you're not too young for it. Tim Allen, Tim Allen, okay. Arrrr. That scared... And then the guy on the other side of the fence, the one you only ever saw the eyes of. I remember pieces of it, it wasn't my favorite show. More power, more power. And that's kind of the American male macho nerdy way. So I think that we do hit a wall there, and he's been saying that for some time. But he also explains the other things that are needed. The other things to explore. Yes. And I think that's what's most useful about this column, is that the first and most important thing that he mentions is what Yann LeCun mentioned when we interviewed him here, please look it up if you haven't watched it. It's a great chat. It's that you need world models. You need to know how the real world operates. And Jensen Huang talks about that too, in terms of robotics and cars, because that's the real value back to the AI. If you put the AI into the real world of robotics, it has to learn lessons that will in turn inform the models and how they operate. So I think that's critical. He then talks about how the field of machine learning likes to task AI systems to learn absolutely everything from scratch. And he argues in here that the human mind is born with some core knowledge. That's Steven Pinker, you could debate that. But his point is that maybe you do program in certain things to give it a head start, rather than making everything learned. And then it can get a better handle on the world from there. And then there is the question of symbolic logic here. And that's been the fight that's been going on in AI for quite some time. And there's still a strong symbolic force out there that believes in that.
And Gary is one of them who thinks that there's some combination. And he talks about Daniel Kahneman and the question of neuro-symbolic AI, which, to quote Gary, bridges statistically-driven neural networks from which large language models are drawn with some older ideas from symbolic AI. Symbolic AI is a more abstract and deliberative by nature. It processes information by taking cues from logic, algebra, and computer programming. And this is very Stephen Wolfram as well. So there's a different school of AI. And what's happened in the last two years is that every dollar and every uh chip at every uh square foot and data centers and every megawatt has gone to scaling large language models as they've been and scaling the transformer model. And yes, it's not a lot of stuff. It has been pretty amazing. The progress is there, but it won't take us so far. My only other point I want to make is that this discussion still occurs around the goal of AGI. And folks who've watched the show know that that's what I always say, AGI equals bullshit. And I don't think that's the goal. I think that the goal should be to see, what can this stuff do? How else can we do it? What's possible here? And that's where I think we've got to free up the thinking because right now it's in a rut. It's in a very expensive, big rut. a fast moving rut, but a rut nonetheless. And so I welcomed Gary Marcus's column here, because I think it gave a very clear, sane view of saying there are other paths we should explore. Yeah. Well, and he also pointed out that there needs to be legislation on the side of the fact that these companies are passing the buck to the users in a lot of sense. many senses, passing the cost, passing the harm onto the public, that sort of stuff. that there might, you know, there might be benefit to, and we will definitely talk about that in the next story a little bit, but you know, that these companies will need to address these things um so that they aren't, so that they aren't just passing the harm or the, or the cost. I think the question remains, how do you do that? How do you do that for sure? And, and, and how safe can guardrails be and again, I've said it before on the show, I don't think they can be very safe and so we've got to, we also have to reckon with that. That just as it's approximate and it's good enough for AI and that tree might not be a tree off of the distance of your picture. Similarly, somebody can come in and ask these machines to do things that weren't intended or weren't anticipated and there will still be harm. So when it comes to something like symbolic reasoning and building world models and everything, are those, And I'm not asking as if you automatically know the answer, but it's just a question that comes up for me is are those problems that scaling actually can provide answers for? How do you get to those answers? Because when I think, because when I do think about, you know, compute, you know, the compute needed to figure out a certain problem or whatever, not being a developer, not being someone who ever works with these things. So in my limited knowledge of the technical aspects of this, I would sort of automatically assume that more compute means get me to the solution faster. And so does that scaling actually get you there? I don't know. um And as we always say on the show, we're here to learn. So we don't, there's we don't know. uh But I'm not sure that's the case. I think it's a different paradigm. It's a different way to look at the problem. 
uh scale came along at the right time. The chips that were capable, the size of the capital available, the desire was here, the progress that was made that then encouraged that investment, that all kind of came together to make scale the monster. And it also fits in with the culture of Silicon Valley. I hope that's not the answer, Jason, because then we little guys get left out. Then universities get left out. Startups get left out. And the hope that I have for this future with AI, and it's not ruled by AI, it doesn't take over the whole world, but it's a factor in the world. uh The only thing that gives me some hope is the fact that the tools are made simple enough for any of us who are not technologists to command, A, and B, that open source and small scale models uh can operate such that we can have innovation there. If we don't have those two things, we're screwed. But I think that if we do have those two things, the question is, what do we do with that? You know, I'm doing a lot of research now, as I mentioned before, on the beginnings of the amplifier and broadcast and so on, the decisions that were made then. And interestingly, as the amplifier was invented and things occurred at that time, the inventor, Lee DeForest, in the midst of one of his many bankruptcies and divorces, sold the rights to the a triode vacuum tube to AT &T so they could use it as a relay to make a promise that they could get phone calls from coast to coast. Wasn't for broadcast, right? Wasn't for all the things that followed. And uh DeForest held on to the rights to sell tubes to amateurs because AT &T said, ugh, that's nothing. And all of the radio sets, as we used to call them, well, not my day, that was before my day, uh were built in the beginning by amateurs. until RCA realized, hell, that's the business, is building them. And DeForest was feeding uh that tremendous burgeoning industry, but then it got taken over by huge companies, right? And that was that. Similarly, uh this month is the 50th anniversary of Byte Magazine. And Byte Magazine was there for homebrew computing. Right. were computers that people built similarly to the kids radio sets that they built back in the day. Those were amateur movements that started it off. Right. Then comes the internet. What do we have? We have blogging and that was an amateur movement. And the question is, is it inevitable? Is it determinative? The big corporations are always going to take this stuff over and steal it from us or can we hold onto it this time? That's the question I have. And if it is only scale that wins, then we can't, we've already lost. But I hope that's not the case. Yeah, I hope it's not the case too. Perhaps a chunk of this, I don't know, I grouped it, but now I'm realizing like maybe it has very little to do with the Gary Marcus story, but it's proximity um is Meta's TBD, that's its super intelligence lab. And I'm thinking about Gary Marcus's kind of point of, know, scale. Scale ain't working. These things aren't getting better at the rate that you know, a lot of these companies imagine throwing massive amounts of money into compute to translate into, you know, super intelligence and AGI doesn't seem to be working. And yet you have Meta and their TBD lab where they're doing exactly that. All the money in the world flowing into this one department or this reorganization of Meta's business around super intelligence. 
And I think it's interesting and kind of part of, I think plays into Gary's point, just the fact that, you know, these top minds are defecting from meta, even after a few days, in one case, like made it through the onboarding process and decided after onboarding, like, eh, you know, I don't think I'm good at this. This is not for me. This is not for me. And I wonder like what drives that? Is it culture? Is it, okay, this is just not. Doable like is it a disbelief in the in the mission? it a boss you don't like? Is the foods bad? The other guy But but again, this goes back I think of the discussion a few minutes ago is that I think that Zuckerberg has put this goal AGI and all these companies put this goal That's what we got to do and that's how they judge themselves compare that to pardon me. I'm gonna stand up for a moment So I have my, I've shown this on the show before, my deforest tube that I mentioned a minute ago, right? I also- It's just a light bulb. the purposes of teaching a class. I went to Greenbrook Electronics, which is an amazing, it's like a used bookstore for electrons. It's got all this amazing stuff. Oh, here's scissors, good. So I went there because I wanted to show students when I speak to a class next week what happened next. And this is what happened next. That's the transistor. It's one transistor. It's one valve, right? And of course, there are trillions of transistors in the data center that uh OpenAI is building. This is one transistor. Where did this come from? AT &T Bell Labs. And it was there because Bell Labs did pure research. Oh, we're going to hire the smart people. going to do the same thing that meds do. We're going to hire really smart people. Not going to pay them that way, but it was a different world. And we're going to let them do stuff. We're going to let them explore things. That's what we need more of. And that's what universities do, but that's being cut off there. the research in these companies is being focused into scale AI and where are we gonna get the next transistor? Where is that gonna come from? So yeah, I think it is related in terms of how we grow and what happens. Does this, how the TBD kind of story is evolving in such a short amount of time, does that say that Zuckerberg is out of touch with something that he thinks he's That's I've been wondering. What do you think? I've been asking myself that too. I don't know. mean, it seems to point that way that maybe, you know, and also, and I say that because like he's been out of touch before, like, yes, he's one of the longer standing Silicon Valley, you know, CEOs from back in the day at this point still looks like a kid to me, but, and not just a CEO, but a founder who's still a founder. Yes. That's actually what I meant to say is founder CEO. Um, but he's made, you know, When they changed the company to Metta from Facebook, it was like we're going all in on the next thing. We think it's the Metaverse. I don't know, maybe that's a long, long game that still hasn't proven itself, but it still kind of feels like maybe it was a decision that was not, hasn't proven itself over time. I think you're right. Yeah. Yeah. So, so I do think that maybe there's a little bit of, of just kind of out of touch. And the thing that he has, the thing that Metta has is boatloads of money to throw at these problems. And so that seems, you know, it seems like the right approach to them. Yeah. If you look back at the, at the history of Zuckerberg, when he started newsfeed, people smashed him. 
They thought this was awful. It was a violation of privacy. This is wrong. And he hung stubbornly on and said, this is the essence of Facebook. And he was right. He was absolutely right. So I think that spoiled him as to his own judgment, that when he stood stubbornly, you know, like an old fart, back in the day when he said newsfeed is the thing for us, then by God, you know, then he tried to do it again with the ridiculous goggles. Didn't turn out the same way, and the legless characters didn't turn out the same way. The Meta glasses are a success, but it's a minor success. It's a minor success. Maybe it's the beginning of a bigger success. I think it is. They're going to announce something else in what is it, two weeks? And I think I'm looking forward to that. I think it'll be really interesting. I think that has more potential than all of the VR. Yeah, I do too. But yeah, I've been saying from the beginning that Meta feels desperate when it comes to AI. And yet it's rather like the early days of Google, because Google won a lot with Android. Then Google won a lot in the early days of AI, but didn't get the credit for it. And Meta has done really well with Llama. Llama is good. Llama is open source. I think they could have built more on that model. Meanwhile, this report that we were just talking about from the Financial Times says that Meta is not going to release its flagship Llama Behemoth model because of poor performance gains. So even they're not that satisfied with what they're getting out of throwing all that compute at the thing, which is what Gary Marcus was saying about OpenAI and ChatGPT, because what he says in the end is they've hit the wall. Interesting. Well, shifting gears a little bit, a friend of both of ours, Megan Morrone. She has been writing at Axios for a few years now and writes a lot about AI. We should probably get her on sometime to just talk about AI, if she wants to re-enter the podcast world. I don't know. She might be happy just writing at this point. Last time I talked to her I think I kind of got that impression. She's just kind of like, eh, it's cool. But she wrote an article about new safeguards, and I think every time we say safeguards, right, take a drink. Coming to ChatGPT by way of OpenAI, where they are renewing a commitment to protecting, or maybe just stating a commitment to protecting, teens and people in emotional distress, making some changes that are coming by the end of the year to address this. Because as we've talked about in previous shows, there are a lot of lawsuits, a lot of reports of younger users or emotionally distressed people faced with information that they gain from platforms like ChatGPT that do things like encouraging self-harm or giving really poor emotional advice, a lack of intervention when a bad situation is actually detected. That would be part of some of these changes apparently: parental notifications where parents can link their accounts with their teens' accounts and then have the ability to monitor for signs of acute distress, disable history and memory from the kid's account, that sort of stuff. Also having human reviewers on staff to, I guess, intervene or step in when certain patterns are detected that signal some of these things. I know. I think that's, I don't know. I don't think that's automatically bad. No, what do you think?
A lot of this comes out of the furor over the last two weeks that a young person talked to ChatGPT and then committed suicide. And let's make sure that we mention that if you or anyone you know is in distress, please don't go to ChatGPT. Call 988 and find a therapist and expert help. Absolutely. So it's an understandable furor around this, but it is an edge case. And you have to recognize, I don't say that coldly. I say that because it's really hard to manage to edge cases. It's the rare moment. And how do you know that rare moment's gonna occur? And I think there's a double-edged question here. On the one hand, okay, if somebody seems to be in distress, cut them off. This is a dangerous place, don't let them go any further. That makes some sense. On the other hand, Facebook found that its algorithms got really good at recognizing people who were in distress and intervening. And if you allow people to interact sufficiently such that you have the opportunity to intervene, is that life saving? By cutting them off, are you harming them? It's not an easy discussion either way. There's a story I put in here that I don't want to go into detail on; it's a really well done, well-reported story from Rest of World, where a woman now in the US but from China talks about her mother and her ailments, and she gets to see a doctor for three to five minutes once every two months and that's it. So she goes to DeepSeek and she gets advice from DeepSeek, and DeepSeek is empathetic and takes time with her and understands. But then the reporter took the advice from DeepSeek to nephrologists, it's a kidney disease question, here in the US, who said, this is nonsense and dangerous and wrong. But the mother still wants to feel like she's heard. And that says a lot more about medical institutions, both in China and here, and the time that they have to deal with things and insurance and whatever else. But this is not as easy as it seems, I think. Well, this happened in one case. This is clearly awful. They must do something. And we're going to hold them responsible. But there's all kinds of other factors in people's lives and all kinds of other questions about how you operate. I think I agree with you, Jason. Is this worth trying to come up with these safeguards? Is it worth doing these things? Absolutely. I don't question that for a second. But once again, we cannot rest thinking, well, we've dealt with that now, and there you have it. Yeah, totally. Yeah, and I think the other point that comes up often, and I've seen this in some of the comments from our conversations on this in the past, is there is also just a question of accessibility, of access. Yes, yes. And not everyone has the ability, not everyone has the insurance or the money to spend out of pocket to... Insurance doesn't cover... I haven't come across a therapist who takes insurance. And there's a responsibility that exists there too. So only the privileged can afford therapy. And that's a problem. So that drives people to these tools. Yeah, I think you're right. The solution should not be, well then it's absolutely impossible to do this sort of thing, because then that person is left feeling even more isolated, without any sort of option or opportunity or place to turn in a way that's discreet, at least seemingly discreet, and confidential. Again, is that true? Is it actually discreet and confidential?
I think that's another question, but they feel that they have somewhere that they can turn with a question that they don't might make might not have another place to get some sort of answer. And if that sends a signal to say, hey, okay, we've got someone here that needs further help. And then that, you know, kicks off to a human reviewer to, you know, point them in the right direction. I think that's a win. Yeah, I'm very cautious about especially young people. Yeah. Yeah. Well, there's a lot of vulnerability. Yeah. I think it's vital that'd be supervised, but you raise an important point too. If you're, if you're a trans kid in a community that's not open to it and you have no one you can go to, but you can go to this, your privacy matters too. So saying these places should reveal these things so that they can be dealt with. Well, maybe they send them into greater danger. Humanity is hard. It's complicated. We are. Yeah. Yeah. And you also included an article uh from the guardian that writes about a number of therapists and experts who are also warning about this reliance on AI chat bots, you know, saying that, um, you know, for emotional help anyways, just saying that they, at the AI often amplifies delusions. It reinforces negative cycles. It does all the, know, it increases, uh, anxiety, self diagnosis, emotional dependencies, obviously therapists, people who are skilled and, and, you know, uh, Certified and all the things to do this on a professional level are not it just like doctors doctors I imagine I'm not a doctor, but I play one on TV Hate it when when people come in and they have self diagnosed. what's going on? they're like, well, don't you think you should it's like no, dude I've got the education. I've got the experience to know what what should be done here. So I'm not surprised You know that therapists really probably do not like this at all Right, but I wonder but I do wonder Like I can get where they're coming from 100%. But I also wonder like, but do they see the potential positive impact of like we were just talking of having at least some solution or not even a solution, but some place to go when you have that, when you're faced with a situation where you truly have no other option, you know, would they still say absolutely not? Well, when your computer breaks, you have to go through that before you can get a warranty call or whatever. You have to go through this process where you, did you try this? Did you go to this helpful thing? Right. Yeah. And that's going to be more and more AI where you say you're, you're, you're required to describe all of the symptoms of your sick computer so it can try to focus you better. Or were you talking to a phone mail jail? Tell us more so we can correct you the right place. Right. And what's that? That's AI doing the same thing at some point with your body. Um, it's going to be helpful to more completely describe the symptoms. And um maybe people get into a new habit of doing that, I don't know. Yeah. Don't know. Interesting stuff. The Guardian writes about the algorithmic movie or the algorithm movie, I don't know if it's a genre necessarily. It may become that, yeah. It kinda, yeah, it is in its own right. Oh, sorry, this is the wrong article. Hold on. There we go. Also on the Guardian, the title is Bland Easy to Follow for Fans of Everything. What Has the Netflix Algorithm Done to Our Films? And it is a very long read. 
It's a really long read, but I found it fascinating because, to me, this is the last gasp of mass media, where Netflix wants to find every way that they can to guarantee success by going after previous successes. Yes. And coming up with rules to do that. Yeah. One of the rules in here, by the way, is that people don't necessarily watch the screen when they watch something. So you can't just show things. You have to have a character say, well, we're going to rob the bank now. Because they're looking at their phones. Their eyes are somewhere else. Right. Oh, it's so true. And what this is doing to our culture and our art. But I think this is kind of the last gasp of the old ways. They're trying to eke out every penny they can, and there's a dearth of creativity and imagination. It's all sequels and formula. Yeah, yeah. Generic, formulaic, like you said, forgettable art, as it says in the article. Netflix, of course, denies making its movies by strict algorithmic choice. Insiders, according to the article, say that the data has a very strong influence over what gets made. And I think, you know... I found, I can't remember if it was this article or a different place, but in 2017 alone, 700 billion data events were collected by Netflix about what its viewers and subscribers were watching, were clicking, were not clicking, were interested in, all these things, probably within the movies themselves. Are there points at which people get distracted and click out and decide, I don't want to watch this anymore? All those signals in a data-driven society are indicators of what a wide swath of people are interested in or not. And then you've got these AI systems and algorithms that can take that data and you go, give me a concept for the perfect film that will appeal to everyone and keep them locked in and engaged. Or it's not even everyone now. It's for this target audience. For this target audience who like buddy movies, we're going to give them the buddy movie to beat all buddy movies, but it's going to look like every buddy movie that came before. You can have all the indicators of an excellent buddy movie as classified and defined by every buddy movie you've ever seen. And there's this desire to slice things up. I complain all the time about the mass. That's what I've written about a lot. And in 1964, I think it was, Daniel Yankelovich, a market researcher, began to slice people up. Trying to advertise to everyone was inefficient. So we started to slice people up not just by demographics, but also by psychographics. And this is not one of his examples, but the example always used is Joe Sixpack. What does Joe Sixpack want? What does Sally Whatever want, right? And what struck me about this Netflix story was that rather than slicing us up as a population, they turned it around a little bit and they started slicing up the entertainment. And they came up with all these, basically what we as bloggers would think of as tags, these characterizations of the entertainment. And then they tie that to the data. And so it's more than just a buddy movie. It's a mixed-race male-female buddy movie in the seventies with banks, you know, whatever. And so it has all these data points, and you know how many people like bank heists. Seven of those things? Of course I'm going to watch it. Exactly. And we know how to market it to you because you watched the last buddy movie. Yeah, it's this, we're all turned into stats. Yeah.
Well, and what's also interesting to me about this is, they're getting some notable actors to tie into these things, you know? They're not quite a plus. Right, exactly. They're like A minus, B plus. Before you go and do the insurance commercial, this is an opportunity. Right. And so that lends itself a bit of legitimacy as well. Yeah, it's an interesting thing when we're talking about creativity and AI and what algorithms do to creativity. And actually, the New York Times also looks at this. in their article where, let's see here if I can get my windows on my tiny little machine straight here. AI is coming for culture. This is New Yorker. Oh sorry, the New Yorker, sorry, yes, thank you. The New Yorker, gotta change my notes, because I don't want to get that wrong. New Yorker looks, oh hello. Sorry, it's my fault. I forgot to change that before you always go, are you sure your phone is on silent? No. oh No, was that a normal phone or was that a ring on your actual smartphone? That's my mobile phone, yes. Okay. I thought maybe you're actually- But I have an old fart and I like it to sound like an old style phone, so I know it's a phone. Yes, exactly. That's what phones should sound like, because that's what they sounded like. Yes, not a Super Mario Brothers sound effect. Anyways, New Yorker looks at the impact of AI on how culture is produced and consumed very much ties into the the previous article. um And it poses some questions about what originality is, what meaning is, if there's meaning to find inside of content as it's increasingly uh constructed through the use of assistance of AI. yeah, just kind of looking at ways in which creativity um is being kind of further defined and shaped by the technology. itself, but also kind of how that impacts the cultural landscape. Is it eroding our cultural intelligence, let's say, if that makes any sense. So I put this in here and I didn't, frankly couldn't, it was such a really long, it's really New Yorker. So I didn't read every, every paragraph carefully and I asked AI to please summarize it for me. But I think it's another lament about what happens to our creativity. And all I'll ever do is play back to previous times that when television came along and the mass audience came along. the exact same lamentations were made then and the same ones we just made about Netflix. That the impact on culture of whether it's technology or a huge audience or advertising, whatever it may be, we're gonna hear the same lamentations all the time. Not that they're not correct, they may turn out that way, but it's really up to the creatives to determine to still be creative and it's up to the surprises of the audience discovering something that's new. that then sets off sequels in the future, but at least you get that moment at the beginning where it's new for a time. And is it usually that the older generations have the hardest time being okay with it and the younger generations are just kind of like, well, this is just the air we breathe. Like this is just the way it is. And actually they find the things that are interesting and worth finding because they don't immediately have their barricades up around it. They just have an entirely different view of things. I was talking with a colleague this morning about a syllabus for a course. and in there was using uh pro tools, not pro tools, but uh video tools, Adobe video tools, Premiere. uh create a video thing, as it was of course by Mickey Media, and I said, TikTok too. 
A lot of folks think that that's the grammar of video now, it's not, that's, I'm doing an interview on Friday with a European TV group. and they were driving me nuts. They're flying over, they're doing this thing about America falling apart, and they needed a place to do it. so, Montclair State is kind enough to let me use a room. And they're going, we need a picture of the rooms, we need to understand the light, we need to understand this. And I said, you're asking about, this is why TikTok and YouTube are beating you guys, because you're concentrating on the wrong stuff. They're spending days worrying about that, when people just say, I'm gonna make a TikTok and I'm gonna have millions of people watching me now. Yeah. And of a certain, you know, certain category of people of which there are a ton of them because they're the younger ones, the, that, that the over production signifies a lack of authenticity. Yes. And that's what they want to connect with is the more authentic run and gun shoot from the hip because, know, and when you get too overproduced, I mean, I've certainly learned that in, what I've been doing the last year and a half is When I started, started with a real twit mentality of like, I've got to get the best gear and I've got to be overproduced, blah, blah. And over time, I've just realized like, no, actually I can just use my phone and I don't have to get perfect with the edit. in fact, you say, is more authentic. Yeah. We have, we have our dear friend, Ant Pruitt in the comments. Uh, on the one hand saying, wow, just show up with your gear and shoot, which is very ant and very right. On the other hand, Ant's been doing a lot of acting gigs now. So he's had to sit there all day where they're setting up the shot, right Ant? And they're doing all this stuff. So you're seeing both ends of the world. you know, we're just, he says, sadly, I know I'm over killing my production. Also, hello Ant, love you. Yeah, but it's hard to. it's hard to reprogram ourselves because we are firmly entrenched in the way that like, this is just how media and entertainment is. I also saw something, I didn't put it in here cause I don't know that it was necessarily AI related, but I saw something a couple, maybe a week ago, an article that focused on studios that are built around 45 seconds to one minutes, but creating an entire like motion picture quality narrative. and shot style and everything that like, I mean, by all accounts, way overproduce these top to bottom stories that are intended for social media. And those studios are doing insane, getting insane views and everything. I've witnessed, it's been a question that I've had because my daughter, my younger daughter will be watching YouTube and these things will come up and I'm always watching them. I'm like, what is this cut down from? Like this can't just have been made for this because the quality is too good. The actors are reasonably good. Like this must be from some like lifetime, full length lifetime movie or whatever. And it turns out, no, these are just studios that are, this is how they, how they're creating that specific type of content. And we're older. It's hard for me to comprehend it, but it works. Yeah. And I think that genre started because people were cutting up real shows. and real movies and sneaking them on to TikTok so you could watch them a little tiny chunk at a time, well then the chunk itself became an artifact of culture. And that's a new thing, that's not bad. Totally, totally. 
That's, I think, at the end of the day, what I continue to repeat to myself and try to live my life by as I get older. I'm pushing 50, I'm almost 50 in a few days. And I have to remind myself, just because I don't automatically get it doesn't mean it's bad. Amen, brother. Amen, dad. You know what I mean? Like I want to continue to evolve and to continue to understand what younger generations automatically do because they're breathing that air. And, you know, it's not a matter of like, I don't want to be old and blah, blah. But I don't want to close myself off from the possibility that there are other ways to do these things. And actors work hard in a certain direction. They're going there, and creators and actors... and Ant now says that he's seen several casting calls for what he puts in quotes, "verticals." Yes. Oh, so it creates a new industry, a new form, a new genre, a new opportunity. See, we're young at heart, Jason and me. We're young at heart. That's right. Let's see here. Elon Musk said on X that 80% of Tesla's future value will actually come not from its Tesla vehicles, but from its Optimus humanoid robot. I wonder if it's the humanoid robot that was actually a costume that a human was wearing. Or the actual one. I think he sees Tesla falling down, doesn't know what to do, and it's a very Musk statement. I just put it here because I was amused. Oh, Elon. How's the Boring Company doing? Yeah, well, that's a good question. Probably pretty boring. At least he did have a rocket take off well. He did have that. That's gonna be a credit. Look, he has... we were talking about this on the backpacking trip because a friend of mine worked for Tesla for many years. He was actually on the design team, one of the original designers of the solar tiles. So he's big into the solar industry. And we were talking about Elon Musk, and you know, you can say everything that you want. And I've said plenty about kind of the person of Elon, especially in the last handful of years. He's also created some pretty remarkable things, or at least his team built them. His team has, and he's built them. Yeah. And so, you know, I have to give him a little bit of credit for really shifting the EV industry worldwide. 'Cause he definitely... I think Tesla has been pretty critical to the direction of that. And also he's a questionable human being. So it's all the things. Anyways, humanoid robots coming into your home, at least according to Elon. He has plans to produce around 5,000 robots this year. He envisions that robots will be integral to making Tesla a $25 trillion company. I think you're absolutely right. It's deflection. It's like, look at the Tesla stuff. Look over here. This is the thing that I want you to think of when you think of Elon Musk. All right, it's time for our recurring segment: Jeff reads arxiv.org and finds the good stuff. So I'll make this real quick. Three papers this week. One that I really enjoyed reading, by, oh, I'm going to mispronounce this, Andreu Milsonces Consolvis from the University of Catalonia. So I have no idea if I pronounced that anywhere nearby. "Deep Hype in Artificial General Intelligence." I just love this. It's more of an essay, but I love this definition of AGI deep hype.
And I quote: "deep hype, a long-term over-promissory dynamic that constructs visions of civilizational transformation through a network of uncertainties extending into an undefined future, making its promises nearly impossible to verify in the present, while maintaining attention, investment, and belief." I thought it was just a beautifully done definition of what drives me crazy about AGI, and that's what this is about. So that's a quick mention. I'll keep going at speed here. The next one is "Rethinking How AI Embeds and Adapts to Human Values: Challenges and Opportunities." And this one drove me a little nuts, because it just struck me that this is the essence of the hubris of these AI folks: to really think that you can align and embed human values into the machine, and they believe this, means that you think you really know what human values are and you can put them in an algorithm. As we talked about earlier, human values are complicated and there isn't one set of them. We argue about that all the time; that's what civilization is, is that debate. So it just struck me as highly hubristic, which is a form of deep hype. And then finally, AI for good. Some researchers went and found ways to detect and classify tiny objects around wind turbines, otherwise known as birds, that are endangered. And there are lots of dangers to birds, buildings and high-rises and cats, but do what you can. So they're proposing a structure here where they successfully detect certain species of birds that are more vulnerable, and then they signal the wind turbine to slow or stop. Whether the turbine company is really gonna want to do that... Yeah, when the birds are nearby or coming nearby. So it's just a little bit of AI for good. That's it. [A rough, hypothetical sketch of that detect-and-curtail loop appears below.] I like that. Yeah, that's the kind of stuff AI is really good at. Every week I'm trying to read the headlines, at least, for every paper on arxiv.org, every preprint that has AI. So if you're doing that every week, how many are you presented with? I have to imagine there's a ton. It's three to four hundred. Oh my goodness. But you know, a lot of them I can just tell by the title: it has some word I've never heard of, I don't know what that is. So most of them are, I don't know what that is. But I find a dozen that might be of interest, worth calling up. And of those, a half dozen end up in a rundown, where a few end up in a lightning round. Speaking of lightning rounds. Love it. Yeah, we're going to get to our lightning round in a moment. Real quick, if you are enjoying what Jeff and I do here on AI Inside, please leave us a review. Go to Apple Podcasts, or wherever you get your podcasts. I don't know which other podcatchers actually have review capabilities; I know Apple Podcasts does, and I know there's a massive amount of people there looking for podcasts to listen to. So leave us a review. That's going to do a lot for this show, get the word out, and get people to check it out. We really appreciate it if you do that. Quick break, and on the other side of it, yes, we've got a lightning round coming up. In a way, the lightning round has kind of become the... what was it called on This Week in Google? The Google Change Log. Change log, yeah. It's kind of become a little bit of a change log, but it gives us the opportunity to very briefly say, hey, these things happened.
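A quick aside before the lightning round, since the bird paper describes a concrete control loop: detect protected birds near a turbine, then tell the turbine to slow down. Here is a minimal sketch of that detect-and-curtail idea. Everything in it is a made-up placeholder rather than the BirdRecorder system's actual code: detect_birds() stands in for the tiny-object detector, Turbine stands in for the turbine's control interface, and the species list and 300-meter radius are invented for illustration.

```python
# Hypothetical detect-then-curtail loop around a wind turbine.
# None of these names come from the BirdRecorder paper.

from dataclasses import dataclass
import time


@dataclass
class Detection:
    species: str       # classified species label
    distance_m: float  # estimated distance from the rotor


PROTECTED_SPECIES = {"red kite", "white-tailed eagle"}  # illustrative only
CURTAIL_RADIUS_M = 300.0                                # illustrative only


def detect_birds(frame) -> list[Detection]:
    """Stand-in for the detector; a real system would run the camera
    frame through a trained tiny-object model here."""
    return []  # dummy frame contains no birds


class Turbine:
    """Stand-in for the turbine's control (SCADA) interface."""

    def slow_down(self) -> None:
        print("curtailing rotor speed")

    def resume(self) -> None:
        print("resuming normal operation")


def monitor(camera_frames, turbine: Turbine) -> None:
    """Curtail whenever a protected species is detected inside the radius."""
    for frame in camera_frames:
        at_risk = [
            d for d in detect_birds(frame)
            if d.species in PROTECTED_SPECIES and d.distance_m < CURTAIL_RADIUS_M
        ]
        if at_risk:
            turbine.slow_down()
        else:
            turbine.resume()
        time.sleep(0.1)  # roughly 10 frames per second


if __name__ == "__main__":
    monitor([None, None, None], Turbine())  # three dummy frames
```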
Often it's like, hey, this company released this thing or whatever, which is neat, but we don't need to spend 20 minutes talking about it. So that's kind of the idea of what we have here. Starting with Amazon: Amazon introduced Lens Live for iOS, driven of course by AI. Users can shop just by aiming their camera at a real-world object; it will identify that object and match it with a product on Amazon, of course. And then you'll be given similar products on Amazon, because Amazon has an insane amount of products, so there are probably like 20 vases that look like the one you're taking a picture of, if not more. It also integrates Amazon's AI assistant, called Rufus if you didn't know, for things like summaries about the products; you can ask it questions about products and it'll give you some answers, that sort of stuff. [A hypothetical sketch of that kind of visual product matching appears below.] Android support is coming soon, but not yet. I'm kind of surprised this didn't already exist; this seems like a no-brainer for Amazon to get people to buy. Yeah, but what Amazon was already able to do was look at things like book covers and tell you what that is, because it can read the text. This is a little harder technology. I'm surprised that Google hasn't screamed about using the Lens brand, since that's Google's brand as well for this functionality. But Google had more functionality available than Amazon did. Interesting. Well, if you are an Amazon shopper, you'll probably want to check that out when it comes to your phone. Doctors at Imperial College London have developed an AI-powered stethoscope. And this is really cool to talk about: again, AI being so incredibly useful for certain things, and this really falls in line with that. It can detect heart failure, heart valve disease, and abnormal heart rhythms, and all it takes is 15 seconds to get a sense of subtle heartbeat and blood flow differences in that 15-second sample. It also performs a quick ECG, sends those details to the smartphone it's connected to, and it's been really successful in trials: 12,000 UK patients, detection of heart failure doubled, detection of AFib tripled. So super effective. Yeah, as a cardiac patient myself, I've had AFib since 9/11, and this matters greatly to me. I use the Kardia, which is the thing you see advertised all the time, "don't you know your heart?" They sell it wrong. But it's been really important to me, because when I'm in AFib, I'm very symptomatic. A lot of people are not symptomatic at all; they don't know when they're in it, they just know they're tired. So what is the symptom, if you don't mind me asking? The symptom is... have you ever had palpitations? Yes. So it's like palpitations that don't stop. Oh God, that's frightening. I get them every once in a while. Very rarely have I experienced them, but when I do, it's freaky, like it stops me. Like, whoa, I have to kind of focus on breathing. Maybe I should get that checked out more. I've mentioned it to my doctor and they don't seem very concerned about it. It happens to people. It feels like a skipped beat, but it's actually a premature beat. But the Kardia is important for me to also know when I'm not in AFib, because I'm a hypochondriac; I'll go to the doctor right off the bat, I think I'm dying at any moment. So it's useful to say, no, no, it's normal sinus rhythm, that's that. But I'm of an age where things like heart failure become an issue, and other things. This makes perfect sense.
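Sticking with Lens Live for a second: the "point your camera at a thing, get matching products" flow is typically an embed-and-search pattern, so here is a minimal sketch of that general idea. This is not Amazon's actual pipeline; embed_image() is a stand-in for a real vision encoder, and the toy catalog is invented for illustration.

```python
# Generic visual product matching: embed the camera frame, then rank
# catalog items by cosine similarity. Illustrative only.

import numpy as np


def embed_image(image: np.ndarray) -> np.ndarray:
    """Stand-in encoder: derives a unit-length pseudo-embedding from the
    pixel bytes. A real system would run a trained vision model here."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    vec = rng.normal(size=512)
    return vec / np.linalg.norm(vec)


def top_matches(query: np.ndarray, catalog: dict[str, np.ndarray], k: int = 5):
    """Return the k catalog items most similar to the query embedding.
    Embeddings are unit length, so the dot product is cosine similarity."""
    scores = {name: float(vec @ query) for name, vec in catalog.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]


# Usage: build a toy catalog of "products", then match a camera frame.
catalog = {
    f"vase_{i}": embed_image(np.full((8, 8), i, dtype=np.uint8)) for i in range(20)
}
frame = np.full((8, 8), 7, dtype=np.uint8)  # pretend this is the camera image
print(top_matches(embed_image(frame), catalog))
```

A real deployment would swap the stub encoder for a trained model and the dictionary for a vector index over millions of listings, but the shape of the lookup is the same.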
So on the Kardia, I make an EKG with my fingers on this thing and I send the result to my doctor, and it's a medically approved EKG; he looks at it and he judges it as a result. It's not the same as having stickers all over your body, because that has more leads. But it's useful. And so this, being able to listen to more and understand more, I think is invaluable, and I wonder how it's doing it. At some point, every phone should have the same sensors. Could you imagine the next Pixel phone? Right, a couple of years ago the Pixel integrated a temperature sensor. Right. You can take your temperature or check the temperature of your water. Imagine if they were like, no, now you can check for AFib. I mean, there's a lot of regulatory stuff that goes up against it. Hey doctor, how am I doing? Oh boy, they're like, no, don't self-diagnose, don't use these tools, come to us, we will use these tools for you. Yeah, interesting. OpenAI's new gpt-realtime model and Realtime API are publicly available. This is really geared toward developers who want to build their own voice agents into their apps and services. So you can imagine, like, a phone-based customer support agent driven entirely by this AI, with all the proper permissions so it can access and carry out different things that come up throughout the conversation with the customer. It features new AI models, improved safety guardrails (drink! I think we've said that like five times now), lower latency, image input, and remote MCP server support, I like to get that in there, model context protocol. So this is really for developers; developers will care about this. [A rough, hypothetical sketch of that kind of session appears below.] Do we care about the fact that when we call into customer service, almost certainly, if we haven't already, we're gonna be talking to AI voice agents? Well, the question is whether the AI is given authority to solve your problem. Okay, because the humans too often aren't given that authority, and they've got to check it up the chain and read the scripts to you and all that stuff. If you can get to the solution better, okay. Yeah. Yeah, escalate. Operator, operator. How many times have I said "operator" repeatedly until it puts me through to someone? Damn it, I can't do this. I just can't. I hate it. Maybe it'll be more effective, though. Or worse. Or worse, it could be worse. You're right. WordPress revealed Telex, an experimental AI tool that lets users create modular content blocks for WordPress sites using AI prompts. It's still early, it's experimental. So you can imagine you want to create, like, a product testimonial carousel or something like that: give me five-star ratings, give me customer photos, all this kind of stuff. And Telex would create the content block and then export it as a .zip plugin that you, as a WordPress user, could add to your WordPress site and implement. So it's another vibe-coding agent, if you want to call it that, a vibe-coding model, but specifically tailored for boosting and bringing new capabilities to your WordPress site. It makes perfect sense that they add this on. I just hope we don't end up with lots of blog slop. Blog slop, that's a new phrase I hadn't heard before. I mean, yeah, you're right. It'll be interesting. I guess that's the bigger question about coding agents in general: will we as users know that we're being presented with code slop, or whatever you want to call it, sloppy coding?
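Since the gpt-realtime item above is aimed squarely at developers, here is a rough sketch of what opening a realtime session over WebSocket might look like. Treat it as illustrative only: the endpoint URL, headers, and event names ("response.create", "response.done") are assumptions based on how the beta Realtime API has been described, not verified against the GA gpt-realtime release, so check the official docs before building on it.

```python
# Hypothetical minimal realtime session: open a WebSocket, request one
# spoken response, and print server events until it finishes.
# Endpoint, headers, and event shapes are assumptions, not a verified API.

import json
import os

from websocket import create_connection  # pip install websocket-client

API_KEY = os.environ["OPENAI_API_KEY"]
URL = "wss://api.openai.com/v1/realtime?model=gpt-realtime"  # assumed endpoint

ws = create_connection(
    URL,
    header=[f"Authorization: Bearer {API_KEY}"],
)

# Ask the server for an audio + text response to a simple instruction.
ws.send(json.dumps({
    "type": "response.create",
    "response": {
        "modalities": ["audio", "text"],
        "instructions": "Greet the caller and ask how you can help.",
    },
}))

# Read server events until the response is reported as done.
while True:
    event = json.loads(ws.recv())
    print(event.get("type"))
    if event.get("type") == "response.done":
        break

ws.close()
```

A production voice agent would stream microphone audio into the session and play the returned audio chunks, but even this stub shows why the pitch is aimed at developers rather than end users.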
It either works or it doesn't. It either does the thing it promises to do or it doesn't. And if it doesn't, do we automatically blame bad AI, or is it just bad coding, bad QA, whatever? Anthropic launched a research preview of its Claude AI agent for Chrome. We were talking a little bit earlier about what Chrome could do if they integrated AI directly into the browser. Well, this is Claude saying, install this extension and you get, like, a sidecar window for your interaction; it keeps track of the context in the browser, all integrated with Claude, of course. Users can chat and can delegate browser tasks to Claude. All this stuff sounds very familiar to me, because I'm very familiar with Perplexity's Comet, which is kind of Perplexity's version of exactly this, except Comet is a standalone download. This sounds like it's just an extension for your Chrome browser. And I hope this pushes Google hard. I complained last week that because I'm a Workspace client, I don't get the same integration that other people get in their Chrome. I'm using Gemini in Drive and in my Gmail and that kind of stuff, but I need it in the browser. I want to do what you demonstrated with Comet, and Google's got to get off its butt and make that available for all. And maybe they will now, now that they know Chrome's not going anywhere, you know. I'm sure they had to have had that already in the works, and maybe they put it on the back burner to not waste resources. I was going to experiment: we joked last week about making a podcast out of the links without us, so I tried to do it with the papers list. I went to Gemini and said, go to the spreadsheet and go to all those links. I can't do that, I can't go to all those links, sorry, you're out of luck. So then I went to NotebookLM and put it in there, and it did it to an extent. It did start a podcast on it, but it was really kind of dorky, so I didn't even bother sharing it with you. But yeah, I think the integration of AI is not just search; it needs to be in our hands. Google doesn't decide when to use it, we decide when to use it. Absolutely. Yeah, 100%. And then finally, Microsoft AI has launched its own in-house AI models, MAI-Voice-1 and MAI-1-preview. Obviously, this is about decreasing its reliance on partners like OpenAI. The voice model is similar to what we were talking about a few minutes ago: you could create a customer service bot with voice interaction and expressive speech, and it's integrated into Copilot already, apparently. And MAI-1-preview is a more traditional model: summarization, insights, et cetera. It was all trained using Microsoft's own infrastructure, so less reliance on OpenAI as a result. Yeah, I think they need their freedom. They need their thing. Their freedom. Yep. Yes, their freedom. All right, well, that is it. That was the speed round, and that is the end of this episode of AI Inside, Garage Edition. Lots of stuff, I know, always so much fun. Jeff, thank you so much. Jeffjarvis.com for people to find everything that you're up to. What's the latest on the new book that you're working on? The new book will be out in the spring: Hot Type, the magnificent machine that gave birth to mass media and drove Mark Twain mad. That's out in the spring from Bloomsbury Academic. Nice, and I'm sure it'll be listed on your site as well.
As soon as they get the cover up online and a pre-order, I'll be bugging y'all with it. Heck yes, please do, every time. Well, thank you for that, jeffjarvis.com. Thank you to everyone for going to our website, aiinside.show, where you can find everything you need to know about this podcast: all of our episodes, who we even are, reviews. You can even contact us if you want to get involved with the show. I feel like we never get any emails for this show, but if you send us something that's really good, well, you know, we'll go ahead and read it. Why the heck not? Go there: aiinside.show. And then finally, of course, patreon.com/AIinsideshow. Go there and you can support us on a deeper level. You get access to the Discord community I mentioned earlier, ad-free shows, a whole lot of things if you go to patreon.com/aiinsideshow, including, at a certain level, the executive producer level, getting your name called out, whether you like it or not: DrDew, Jeffrey Marraccini, Radio Asheville 103.7, Dante St James, Bono De Rick, Jason Neiffer, Jason Brady, Anthony Downs, and Mark Starcher. You know, I envision someday where I have to talk for like a minute reading names. That would be amazing. That'd be wonderful. But we do appreciate all of you for your deep, deep support of everything we do on this show. All right, that's it. Checking out from the garage in my home. Now I get to go upstairs and see what the bathroom looks like after the last hour, see if it looks any different. Thank you again, Jeff. It was a lot of fun. Thank you, boss. Thank you, everybody, for watching and listening. We'll see you next time on AI Inside. Bye, everybody.