[00:00:00] Hello, AI Inside fans. This is Jason Howell, one of the hosts of the AI Inside podcast, the podcast that you're used to getting on Wednesdays with me and Jeff Jarvis. And I wanted to let you know something because I've been workshopping over at our Patreon at patreon.com slash AI Inside Show, a new daily version of the AI Inside podcast.
[00:00:25] Yeah, it's not quite the same. It's not a two-person podcast. It's not a conversational thing. What I'm putting together and what I'm launching and announcing for you right now is a daily podcast, usually five to ten minutes long, with a couple of the key AI stories that are making the news in the last 24 hours.
[00:00:46] Monday through Friday, I get it out in the morning. It's just me telling you the details about the news and then expanding on it a little bit with my own feelings, thoughts, context, that sort of stuff. For right now and through this week and through next week, I've just decided, by the way, this daily podcast is going to be free. What you have to do to get to it is go over to patreon.com slash AI Inside Show.
[00:01:16] And if you haven't already, sign up for a free account. And then you should have access to the feed where I'm publishing each episode for all members. That includes free members for the next two weeks. At the end of two weeks, this will become a paid perk, but this should give you a chance to check it out and see what you think. And once that switchover happens, I'm going to be putting it onto the $5 tier.
[00:01:44] So one of the lowest tiers gives you a lot of benefit, gives you a lot of value. If you like AI Inside every Wednesday, the long version of it, now you can get a shorter version too and stay up to date on the latest news. And once you do that inside of Patreon, you'll be able to get a podcast feed that you can plug into your podcatcher and so you don't have to do it through Patreon going forward. Pretty cool stuff. I'm super excited and I'm really enjoying it.
[00:02:13] And so I'm going to put it into the feed right now. I hope you enjoy this preview of AI Inside Daily. And you can check it out through the rest of this week and next week at patreon.com slash AI Inside Show. Thanks for listening. AI Inside Daily for Tuesday, May 12, 2026. Only for patrons. That's right. You. Still free. It's going to be a paid thing next week. I'm Jason Howell. Here's what's in the news today.
[00:02:40] Like I said, just got off of a daily tech news show with Tom Merritt. Just talked about Google's big news, which, by the way, we had early access to this information. So it's kind of exciting when they let us in early on this. And we have a wonderful interview, which I'll tell you about in a second. But Google held the Android show. I think, in fact, as I record this, it might still be going on right now. It's the pre-IO event that's become Android's own kind of stage before the big stage, before the big developer conference happens next week.
[00:03:10] Most of the Android faithful crew are going to be there at the developer conference. We're going to be at Google I/O. If you're going to be there, please come up and say hi. But the headline here is Google Book, which is a new line of laptops built on Android with Gemini baked in from the ground up. You've got Acer, Asus, Dell, HP, Lenovo, lots of partners making them. And they're coming this fall. So you can't get them yet. But it's definitely a step up from Chrome OS computers.
[00:03:40] Google is basically saying, you know what? Chrome OS is great because it makes them a lot of money on the education side of things. And now they've got this extra step up, really meant for people who just want a little bit more out of it. And it's running Android, so it's not emulation, right? Very interesting. The hardware looks really nice. And the standout feature is something called Magic Pointer, which is a cursor with Gemini built into it.
[00:04:09] You wiggle the cursor and it surfaces contextual suggestions based on what's on your screen, whatever the cursor happens to be over, or it can draw two things together and compare them. I'm very curious to see how that works, especially because I'm so used to wiggling my cursor in order for it to get large so I can see it on my mammoth screen. So maybe that's going to change how I use it. I don't know. We'll see. They also showed a feature called Cast My Apps.
[00:04:37] That pulls any app from your Android phone onto your Google Book's screen, because this is, after all, running Android. So the big value prop is that this is a computer tailor-made for the collaborative use cases between your smartphone, if you're running Android, of course, and your laptop. So that's interesting.
[00:05:01] There's also a Create Your Widget tool, where you describe in regular words what you want a widget to be, and Gemini builds you a custom widget on the spot that you can place onto your home screen. That sounds really interesting. And in a cool little callback, all Google Books will have a glow bar, which, if you remember the old Pixel notification light, is a nice little nod there. I'm curious to know if this is the, what was it called? Pixel Glow that we've been hearing about.
[00:05:30] And the Pixel Glow code that's been spotted has made references to phones and laptops. So I kind of think this is what they're talking about. Is this, however, the fruit of the Aluminium OS labor? Aluminium OS is something we've been hearing about for the last couple of years, which is really the development of a version of Android meant more for a laptop environment. And boy, we've been talking about this forever.
[00:05:59] At this point, I asked Google specifically, I said, can you go on the record and tell me that this led to that? And they went silent. Essentially, they refused to comment when I asked. Maybe more happening at Google I/O next week. I'm guessing when we get there, we might learn a little bit more about that.
[00:06:21] But beyond the laptops, beyond the Google Book, Android 17 is getting some attention with a redesigned multitasking interface and improved screen recording. There's a new speech-to-text feature called Rambler that basically strips out all of your filler words, so you can just ramble, essentially. This is how I talk to LLMs a lot: I'll just hit record and I will just ramble for like five minutes.
[00:06:46] And it will synthesize down all of the complexities of what I'm talking about. This, though, is meant for a much smaller scale, and it's meant to feed into any app that you're using. So if you're just rambling about, you know, what your plans are for tomorrow night with your friend in Messages, instead of it being this long voice-to-text thing that ends up in the field, even though there's something endearing about that, I think it will synthesize it down to just the nuggets.
[00:07:16] How it will work, or how well it will work, remains to be seen. Those custom generated widgets, by the way, are also coming to smartphones. Gemini intelligence is getting deeper system-level integration across the platform so it can work across apps. It can pull context from messages, emails, and other apps more fluidly. There are agentic actions happening on the smartphone, Chrome browser control, a ton packed into one event.
[00:07:45] And by the way, IO hasn't even started yet. That's next week. So we will be there. If you're going to be there, make sure to come up and say hi. And definitely do not miss our in-depth conversation with Android President Samir Samat on this morning's Android Faithful podcast. Went live an hour ago with the embargo where we talk about all of this in much greater detail. And you can find that in your podcatcher of choice or androidfaithful.com. It's a fantastic interview. It's probably one of my favorite interviews with Samir that I've had to date.
[00:08:14] So Mira Murati's startup Thinking Machines Lab, which, by the way, she founded after leaving OpenAI, announced what they're calling interaction models. The idea here is AI that doesn't wait for you to finish talking before it starts processing. Their model, TML Interaction Small, does full-duplex communication. That's in both directions at once, meaning it can listen, it can see, and it can respond simultaneously.
[00:08:42] More like a phone call than the turn-taking text exchange that current voice AI does. The technical approach breaks conversations into micro turns of around 200 milliseconds and uses what they call encoder-free early fusion to take in raw audio and video signals directly instead of routing through those heavy external encoders.
[00:09:05] At the end of the day, they're claiming 0.4-second response latency, which is roughly the pace of natural human conversation. There's also a background model running behind the scenes, handling harder reasoning and tool calls, feeding results into the live conversation as they come in. It's still a research preview. No public access yet.
[00:09:29] But if the latency claims hold up, this is a different take on voice AI than what OpenAI and Google have been building. And potentially, you know, conversationally speaking, it just might be a little bit better. But remains to be seen. We don't have access to it yet. And finally, the Financial Times reports that some Amazon employees have started feeding unnecessary tasks to an internal AI tool called MeshClaw just to inflate their token consumption numbers. Yes, that's happening.
[00:09:56] Amazon had set a target for more than 80% of developers to use AI tools weekly and tracked usage on leaderboards by team. Employees told the Financial Times the pressure to hit those numbers was impacting everything, even after Amazon said the stats wouldn't factor into performance evaluations. And the practice, as you may have heard, has been dubbed token maxing.
[00:10:25] And I don't know. I don't even know what to think about this. It's just so silly. Are we really at the point where a company says you've got to use as many tokens as possible? Define the reason. Define the end goal and what gets you there, not the usage. Because if you do the other, you end up with something like this. And I just find this to be ridiculous. It's almost certainly not what Amazon intended.
[00:10:54] And it's wasteful. Like, when we're talking about environmental impacts and everything. You know, I had a sit-down conversation with my wife this weekend where, you know, she's like, all right, show me Claude. Show me Claude code for her business. And we're kind of taking this work that she was doing. And I was kind of feeding it in. And everything was great. And it was working. She's like, oh, I really like this.
[00:11:19] And then suddenly, we ended up going down a rabbit hole where it started, in her words, kind of pulling the user into perfectionist patterns. Where instead of just getting to 70% or 80%, it starts narrowing in on one specific thing and saying, well, we can make this really better. And then you end up spending 20 minutes on that one little thing.
[00:11:41] Versus all of the time that was saved by just doing it, getting to 60%, getting out of the tool, and doing the remaining 40% yourself, let's say. And she made a really valid point. She was like, this is just wasteful. It's keeping me engaged, and I can tell, and I don't like it. And all it's doing is using resources in order to make that happen. It just feels very intentional. I hadn't really looked at it like that before, and so this kind of falls into that bucket for me.
[00:12:10] Anyways, that's a few of the stories that are happening today. And there you have it. AI Inside Daily. See a couple stories. A little bit of talking. Keeps you up to date. Pretty low commitment. Anyways, patreon.com slash AI Inside Show. Thank you for letting me take over the feed. I promise, tomorrow, your regular scheduled programming, which is the normal AI Inside episode, will appear in your feed. And we'll see you then.

