Jason Howell and Jeff Jarvis examine OpenAI's new agent tools for developers, the Manus AI model, and Apple's AI feature delays, plus insights from Sameer Samat, Head of Android at Google, on Android's AI future.
Support the show on Patreon! http://patreon.com/aiinsideshow
Subscribe to the new YouTube channel! http://www.youtube.com/@aiinsideshow
Note: Time codes subject to change depending on dynamic ad insertion by the distributor.
NEWS
00:02:52 - OpenAI launches new tools to help businesses build AI agents
00:12:18 - Manus probably isn’t China’s second ‘DeepSeek moment’
00:18:56 - Apple says some AI improvements to Siri delayed to 2026
00:23:14 - Interview with Sameer Samat, Android Head at Google
00:43:51 - Larry Page Has a New AI Startup
00:46:03 - AI Search Has A Citation Problem
00:51:12 - OpenAI says it has trained an AI that’s ‘really good’ at creative writing
Learn more about your ad choices. Visit megaphone.fm/adchoices
[00:00:01] This is AI Inside, episode 59, recorded Wednesday, March 12, 2025. New Week, New Seek. This episode of AI Inside is made possible by our wonderful patrons at patreon.com slash AI Inside Show. If you like what you hear, head on over and support us directly, and thank you for making independent podcasting possible.
[00:00:22] Hello, everybody. Hello, hello, and welcome to another episode of AI Inside, the show where we take a look at the AI that is layered throughout so much of the world, the technology, including our smartphones. Probably top of the heap is our smartphones right now, seeing so much of this. Today's show is going to have a big section dedicated to that. I'm one of your hosts, Jason Howell, joined as always by my friend and co-host, Jeff Jarvis. Good to see you, Jeff.
[00:00:51] Hey, hey, hey. Are you fully over your jet lag now from Barcelona? Jet lag, yes. Cough, no. Well, that's skiing. That was your cold trip. I guess so. That is kind of when that started. I mean, it was prior to going to the mountains. I had like a nasty cold slash flu that I was out for a few days on. And then when I went to the mountain for the ski trip, I had just kind of gotten over it mostly, but not even entirely. And so it's been, you know, and then I come back from that and then I go to Barcelona. It's just one thing after another.
[00:01:21] You're already going to do. Yeah. I guess so. Yeah, maybe that's it. I'm just a working man. Just a working man. Anyways, thanks for asking. Good to see you, Jeff. Always love doing this show with you. We got some fun stuff to talk about this week before we get started. Huge thank you to those of you who support us on Patreon, patreon.com slash AI inside show. You can support us like Wireman92002.
[00:01:46] That's what it said. So Wireman92002. Thank you for your support. And yeah, anyone else that wants to support us, head over to patreon.com slash AI inside show. You can do that there. And of course, if you happen to be watching the live stream right now, either on the YouTube channel or through the many different directions that we've spread this around, Jeff, you know, often puts it on his Twitter feed and his LinkedIn and everything.
[00:02:10] If you're here for the live stream, take a second and go to AI Inside Show. You can find the RSS feed and subscribe to the podcast. That way, if you miss the live stream, you won't miss the podcast, and we appreciate it. But without further ado, we got a lot of stuff to talk about, so we're going to dive right in. A little bit later, we're going to play an interview that I had with Sameer Samat, who is the head of Android at Google, all about Android.
[00:02:37] Sorry, all about AI on device. That's going to come up here in a little bit. Before we get there, we're going to talk about some news, because that's also what we do on this show and a big part of why I enjoy doing it.
[00:02:48] And, you know, the theme of agents continues to accelerate, continues to seem really important in the world of AI, and OpenAI has rolled out some new tools that they hope will get developers, businesses, and enterprises started on their own agents. These are tools, you know, that you can use if you know how to work with them.
[00:03:14] If you are a developer, you can tap into the Responses API that will drive agents that are specific to what you want them to do and what you want them to be. It includes a tool for AI agents controlling computers, similar to what we saw with OpenAI's Operator agent a couple of months ago, and a whole bunch of other aspects to this ultimately.
[00:03:39] But, you know, you should know that it currently has a 38.1% success rate on complex tasks. At least that's honest. I suppose so. At least OpenAI is being upfront and truthful about this. OpenAI says this is all very early. Much improvement still ahead. So keep that in mind. But if you want to get started building very unreliable agents, then you can do that right now.
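[Editor's note] For listeners who are developers and curious what "tapping into" these agent tools might look like, here is a minimal sketch in Python. The parameter names ("model", "input", "tools") and tool type strings are assumptions drawn from OpenAI's announced Responses API naming; verify against the official API reference before relying on them. No network call is made here; the snippet just assembles a request payload.

```python
# Hypothetical sketch of a Responses API-style request for a simple agent.
# The field names and tool types below are assumptions based on OpenAI's
# announced API; check the current docs for the real signatures.

def build_agent_request(task: str) -> dict:
    """Assemble a request payload that enables built-in agent tools."""
    return {
        "model": "gpt-4o",
        "input": task,
        "tools": [
            {"type": "web_search_preview"},    # let the agent search the web
            {"type": "computer_use_preview"},  # Operator-style computer control
        ],
    }

payload = build_agent_request("Find a nonstop flight to New York tomorrow night")
print(len(payload["tools"]))  # → 2
```

In a real integration, a developer would pass a payload like this to the OpenAI client and then handle the model's tool calls in a loop; given the roughly 38% success rate on complex tasks discussed above, any such loop should expect and handle failures.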
[00:04:06] So there's a story I didn't put up this week in the rundown, but a press release came over about app fatigue among students. Okay. And I was thinking about that in this context. And I know app fatigue as a teacher and as a journalist. There's always – I had a colleague who knew every single app that anybody could ever use. And he had them all in his head and he'd mention 20 of them in a row. And every faculty member would just kind of glaze over because we didn't know what to do with it.
[00:04:37] And I think there is a bit of app fatigue out there in the world. It's a limited number of apps we put up on our phones, right? And trying to get people to download an app is like trying to get them to put up a browser extension. That's a good comparison. Apps used to be so like, oh, man, I just got to have more of them. And now it's like, do I really want to do that to my phone? Do I really need that extra thing for the one time I'm going to use it every year or whatever the case may be? Right. And are you going to go to the effort to get it, A?
[00:05:06] And then B, are you going to remember you even have it? Are you going to, you know, when the need comes, when it might actually be useful, are you going to think to use it? Yeah. So, you know, I think what's relevant to this discussion is: are agents the new apps, or a replacement for the apps eventually? Do they suffer the same problem? Oh, I forgot to ask it. I guess it's better in the sense that you might have one portal to that. The agent is your agent to your apps.
[00:05:35] Mm-hmm. Mm-hmm. Talk about. And does that make it more likely that you might use more of them in the sense that you direct it to do things for you or less likely? I don't know. I don't know where that's going to go. Yeah. What do you think the relationship is of agents and apps? Yeah. And what is really easier? Is it easier for me to just fire up the thing and do the thing or to kind of think about how I instruct an agent or an assistant or whatever you want to call it to do the thing for me?
[00:06:05] Because it's rarely, at least at this stage, it's rarely as easy as I need a ticket to fly to New York tomorrow night, blah, blah, blah. If you really want it to succeed, right, you got to go into detail. Yeah.
[00:06:18] And you got to really like, it's almost like you have to think in advance of all the things that might trip it up along the way and put that in part of your action, which if you work with large language models and instructions and all this stuff, over time, you start to develop a little bit of that skill where you're like, okay, I've done this command enough times to realize that when I do this, it often gets tripped up and goes in that direction. I don't want it to go in that direction. I need it to stay on track.
[00:06:45] And with agents, I think it's kind of the same thing. You need to think about these things before you instruct it. And that's a whole skill set and complication as well. Which one's easier? I don't know. So I actually, believe it or not, was an executive in various of my lives. I was the president and creative director of one media company. I was the editor of a magazine. Only once in my entire career did I have an assistant and I didn't know what to do with them. Oh, yes. There's a pressure now.
[00:07:15] Yeah, it was. It was the work. She was sitting there kind of twiddling her thumbs. Okay, well, I'm here to help you. I'm here to help you make your job better. And it's a reflex I simply didn't have. And to this day, I don't. I hated using travel agents. I was so glad when I could book my own travel. I didn't want to tell them what to do because it was going to screw it up. Yeah. Right? So I'm used to doing things on my own.
[00:07:43] And I just don't know how this is going to feel when and if, big if, it ever works. It ever does the complex tasks well. I don't know whether it's going to be a relief or a pain. Yeah. Or as with everything, just a learning of a new skill. Yeah.
[00:08:01] Because as I've used AI more over the last year and a half, let's say, I've recognized time and time again, the skill set that I'm really sharpening is kind of being a project manager to a certain degree. How can I communicate exactly what I'm looking for? Right. And the more I work with AI, the more I kind of understand what that means for me.
[00:08:25] And I think that's a very valuable skill to have interpersonally as well as this growing relationship, if you want to call it, with the machine. But yeah. So what kind of skill set will we need to learn and lean into in order for agents to be effective and to make sense for that thing that I need to get done versus just doing it myself? Right. Yeah. Yeah. And what you're talking about, by the way, really comes up in the conversation that's going to happen a little bit later. Oh, good. Good.
[00:08:53] It's a very big part of that conversation. So it's interesting. The other thing I was thinking about, Jason, is that six months ago, halfway into our podcast, we were probably talking all the time about prompt engineering and prompting was going to be a whole new skill and a whole new job. I don't hear that anymore. Do you? Do you? I think it's kind of gone out of vogue and that's not seen as a skill in and of itself. I mean, yeah. Yeah. Do you? That's right.
[00:09:20] Like probably like a year, year and a half ago, it was like, oh, this new job has opened up of prompt engineer. I am a prompt engineer. Right. I don't see that as much. And I think I could, you know, this is totally like I haven't really considered this before you bring it up. But my hunch is that prompt engineering as a skill is almost just kind of inherent to using these systems.
[00:09:45] And in the early days of AI, which was not that long ago, sure, we had to get hyper focused on learning how to prompt these systems. But now as we use them more and more, it's almost just like expected that to a certain degree, you know how to talk to these things in order to tap into the expanded capabilities of them. Otherwise, you're not going to get what you need. And you know what I mean?
[00:10:09] It's like we're all learning as we go along this skill set that at one time seemed so important and is important. But now is maybe a little bit easier to take for granted just because we have practice and we have exposure. So that's where I was going here. And Daniel Croft went there as well in the comments. I think that the app agent making is the next rendition of prompt engineering.
[00:10:36] What you're doing when you make an agent is instructing the machine to do what you want and make sure you do it well enough. You get all the details in there and that's maybe just a higher level description of prompt engineering. Yeah. Yeah. It's a component. And thus, if that's the case, then agents aren't such a big deal. Right?
[00:10:58] So either someone has engineered an agent for you to make sure that it can do all of these tasks or it's simple enough you're just instructing the machine to do what you want it to do. And so is that – is agent making the new programming? And programming is really not programming. It's more of product development. You know, yeah. It's air quotes. Yeah. Or not. I'm just not sure. It's just fascinating just to – because we're guessing right now. To watch it develop.
[00:11:27] Yeah, for sure. Sam Altman, by the way, has said that he believes 2025 would be the year agents join the workforce. I wish – you and I don't have time to do this – I wish someone out there would take all of Sam's predictions and put them on a timeline. Yeah, because we obviously have AGI now. All right. Yeah. Yeah, we can just fire up Google Search AI or whatever and have it do that for us. The PhD in my machine, yeah. Yes, right.
[00:11:55] PhD, $20,000 a month, yada, yada, yada. But speaking of agents, Manus or Manus, I guess, is a new Chinese model that was released on Hugging Face. It appears to be getting a lot of attention. Something that you mentioned a couple of weeks ago that has really stuck with me is just – I can't remember how you put it. But just like the ongoing stream of releases of new models at a certain point is just kind of like, okay, yay, another model.
[00:12:25] Why is this important? But what I noticed with all of these is they all seem to kind of follow a similar pattern. I mean, obviously there are certain ones that rise to the top versus others. But like Manus, for example, people comparing it, oh, it's the next DeepSeek. Even though it's not – it wasn't made the same way as DeepSeek. It's a multi – what is it? It's using – I think it's using Claude. Oh, and Alibaba's Qwen.
[00:12:53] So it was developed using these existing and fine-tuned AI models. Which is what we were just talking about. In essence, they took those models and then engineered agents above it. If you go to Manus.im, there are videos there. I don't even know what .im is. That's a new one on me. The video is there. And if you could just scrub through it, you'll see the – Manus.im. I am. Yeah, that's not working for me.
[00:13:24] No? No. It says request too many. So it's being pummeled. It's being hit. I'm on it right now. Oh, interesting. Yeah, this is what I get. Request too many. Do-do-do. Oh, that's weird. Yeah. I mean, the article, but we don't have the site. If you go with the article, there was video in there. Okay. Introducing Manus. I won't have the audio playing here, but I'm sure it'll – You'll see it do tasks. Yeah. You'll see it.
[00:13:55] So ranking candidates for reinforcement learning engineer role. And you see it going through these. And then in the midst of it, you can add extra instructions. And because you see it doing what it's doing, you have a better sense of kind of the credibility of it. Okay. Right? And that's the idea. So it's been an agent that was engineered atop of those models that you can then interact with and adapt.
[00:14:24] That's the idea, I think. Yeah. Multi-step. Multi-destination. Right. However, TechCrunch dug in and used it. And I don't know whether it hit the 38% mark that OpenAI says, but it crapped out a lot. Yeah. It didn't sound like a total win. But, you know, again, they'll always – they'll come back and they'll be like, yeah, but it's early. It's early. Right. Right. Right.
[00:14:51] Just think about what this is going to do eventually. And what was the – the research lead for Manus, whose name is Peak Ji, says it's a completely autonomous agent that bridges the gap between conception and execution, dot, dot, dot. We see it as the next paradigm of human-machine collaboration. And, you know, this just goes back to what we were just talking about, which is like – we've seen it all before. We will see it all again.
[00:15:21] And these new models come along and everybody that's behind the model wants to proclaim that, like, this is, you know, the stuff of science fiction and we've figured it out. We've cracked the code. Meanwhile, you know, it's not completing these tasks at this point. Maybe it will eventually. And sure, prove to me that it works at some point.
[00:15:41] But a whole lot of excitement for things that – maybe there's some validity to that excitement, but still kind of on the other side, not a lot of effectiveness. And why are we getting so excited about things that aren't very effective? Yeah, there are attempts, and that's fine. But I think this is the problem of the hype machine around AI. It was just so extreme and so ridiculous. So Kyle Wiggers from TechCrunch tried out a few things. I asked the platform to handle what seemed to me like a pretty straightforward request, Kyle says.
[00:16:12] Order a fried chicken sandwich from a top-rated fast food joint in my delivery range. After about 10 minutes, Manus crashed. On the second attempt, it found a menu item that met my criteria but couldn't complete the ordering process or provide a checkout link even. Manus similarly whiffed when I asked to book a flight from New York to Japan.
[00:16:32] Given instructions that I thought didn't leave much room for ambiguity, i.e. look for business class flight – oh, TechCrunch is doing better than I thought – prioritizing price and flexible dates. The best Manus could do was serve up links to fares across several airline websites and the airfare search engines like Kayak, which you could do with Google by asking for flights. Sure. Asked Manus to reserve a table for one at a restaurant within walking distance.
[00:17:00] It failed after a few minutes, and so on and so on. So, yeah, it looks like a cool effort. But the hype machine just really takes its toll along the line because the expectations are too high. Yeah. Yeah, sets the expectations really high. I mean – and that's exactly it.
[00:17:24] Like so often with new open AI releases or whatever, if you go on like a platform like X, you'll find someone tied to the project or even not tied to the project proclaiming just how big of a deal this is. This is earth-shattering, life-changing, blah, blah, blah. And I'm not saying that it's not important. Like I'm sure there's some validity to the importance and the wow factor of what's going on.
[00:17:49] But I guarantee you a week later there's going to be another model that is just as important according to whoever happens to be attached to it or this particular competition or rating system. And I don't know. At a certain point, it's like if everything's amazing, then nothing's amazing. Yes. We hit amazing inflation. Yes. Amazing inflation like that. That's good.
[00:18:16] But so anyways, and I think a large part of why this was gaining attention aside from what it is supposedly capable of is it's another Chinese model. Yeah. And it's easy to draw some sort of comparison to DeepSeek, which is still very much on the radar of AI fans. I think we're going to see a DeepSeek. Yeah. Every week there's going to be a DeepSeek out there. Is this the one? Is this the one that's going to disrupt us? Is this the one that's fine? Competition's good.
That could be an article titled New DeepSeek of the Week. The new weekly DeepSeek. What is it? And then Apple has been working to bring more AI-driven capabilities to Siri. They had plans to release a lot more kind of expanded features to its Android. Sorry. I'm about to cough. Artificial intelligence, or as they call it, Apple Intelligence tool.
[00:19:15] And now Apple is pushing its release of some of these features back a year. Though, you know, a lot of people would look at this and be like, yeah, well, Apple never gave a specific timeline for some of this stuff anyways. I mean, they may have said, you know, sometime in 2025. And now it appears that it's being delayed to 2026. But still, you know, that happens at big companies like this. Apple hasn't really given any reasons for the delay.
[00:19:39] These delays also, though, happen to be impacting Apple's plan to release a new HomeKit device, which is kind of like a combo HomePod iPad device. It's meant to, you know, kind of be a wall tablet of some sort. And that particular device is going to be delayed as a result, because it largely relies on Siri functionality that is part of that delay.
[00:20:05] So, you know, if you're waiting for that wall tablet somewhere in the $130 to $230 price range, you're going to have to wait for that device. And I actually like this story because I think it says that Apple is realizing that it ain't good enough yet. It's not ready for primetime for Apple, and we're not going to release it yet. And they were – everyone was in a rush and they were too. They were saying we were going to do all these wonderful things and they said it's not up to it.
[00:20:34] So, I think that's good. I think it's smart. Yeah, it's smart. If it's not right, if it's not good to go, don't rush. And we've seen a lot of rush, you know. What's interesting is we've seen a lot of rush shortly after there was a period of a, oh, no, we got to be cautious. It was like there was a light switch that was flicked where suddenly the technology companies were like, well, we don't want to miss out. So, we're not going to hold things back if they're not to a certain level or caliber that we're shooting for.
[00:21:04] We're just going to go with it. What's the worst that could happen? Like, you know, what's the worst that could happen? 30 – what was the percentage? 38.1% success rate. Man, who cares? Good enough for OpenAI is good enough for us, right? Yes, it's good enough for me. Exactly. All right, we're going to take a quick break. When we come back, a little 15-minute interview with Sameer Samat from Google.
[00:21:33] Trust isn't just earned. It's demanded. And whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical or more complex. That's where Vanta comes in.
[00:21:49] Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks like SOC 2 and ISO 27001, centralized security workflows, complete questionnaires up to five times faster, and proactively manage vendor risk. Vanta not only saves you time, it can also save you money. A new IDC white paper found that Vanta customers achieve $535,000 per year in benefits, and the platform pays for itself in just three months.
[00:22:17] Join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and prove security in real time. For a limited time, our audience gets $1,000 off Vanta at Vanta.com slash AIinside. That's V-A-N-T-A dot com slash AIinside for $1,000 off. Everyone's talking about AI these days, right? It's changing how we work, how we learn, and how we interact with the world at a tremendous pace.
[00:22:45] It's a gold rush at the frontier, but if we're not careful, we might end up in a heap of trouble. Red Hat's podcast compiler is diving deep into how AI is reshaping the world we live in. From the ethics of automation to the code behind machine learning, it's breaking down the requirements, capabilities, and implications of using AI. Check out the new season of Compiler, an original podcast from Red Hat. Subscribe now wherever you get your podcasts. All right, so set the scene just a little bit. Might as well. I'm traveling the halls of Mobile World Congress.
[00:23:15] I get the opportunity to sit down with Sameer Samat, who I've chatted with before. Actually, we chatted with Sameer for All About Android at TWiT when that show still existed. I've sat down with Sameer many times for that show. And this is an opportunity to sit down with him and talk about artificial intelligence, talk about AI. It's a shortish interview, primarily because, I'm embarrassed to say, I showed up a little late to the interview. I got lost in the halls of Mobile World Congress.
[00:23:44] I thought that the meeting was out in the Android kind of like plaza area. And so I went there and then I realized, oh, no, it's inside the halls. Now I got to figure out where the heck this place is. Sameer was incredibly gracious to wait for me. He had a coffee waiting for me when I sat down. So at the beginning of the interview, I talk a little bit about being all sweaty. That's why. Because literally, if you had seen me, I had my huge pack of gear and I'm running through the halls of Mobile World Congress. Like, oh, no, no, no, no.
[00:24:14] This isn't going to happen. This isn't going to happen. It all happened. It all worked out. But I think it's a really great conversation. We talk a lot about AI on device. We talk a lot about kind of what we were talking about earlier, you know, AI agents and what is the kind of interplay or long-term plan between what agentry on mobile phones is compared to what we're used to, which is app development and apps and all that stuff. A whole lot of stuff to dive into within 15 minutes.
[00:24:43] So I'll go ahead and play this. This is my interview at Mobile World Congress with Sameer Samat. Sameer, it's really nice to sit down with you as I'm, like, dripping sweat from running through the halls of Mobile World Congress. Welcome to your first Mobile World Congress. Yeah. Good to see you. It's great to see you, too. How many of these have you been to? I think I've lost count. Okay. Good number. Enough to lose count. This is my first. Still learning the lay of the land. So anyways, I got here.
[00:25:09] I'm super happy to sit down with you and talk a little bit about AI on Android and Gemini. And, I mean, I'm steeped in the world of AI right now between Android Faithful, between AI Inside. Yeah. There's so much happening in the world of AI on mobile phones, on smartphones. Yeah. Obviously, Google has some announcements around this that you've unveiled for MWC. Just tell me a little bit about that. And then I definitely have some questions about it, too. Yeah, for sure. So, I mean, a lot of consumers are super interested in AI.
[00:25:39] But I think in a little bit of a different way than maybe tech companies, consumers just want to know if this stuff can help them actually get stuff done. So they don't really want us to talk too much about AI, per se. They want us to talk about the benefits. So one of the things that we did last year, you might remember, is we launched this thing called Circle to Search on Android. Love it. Yeah. One of my favorite new features in the last handful of years. Right. And one of the cool things we did when we launched that is we actually, in the marketing of it and explaining it to people, we didn't really use the word AI once.
[00:26:08] You know, it was just like, here's what you can do with it, and here's how it helps you. So I think we're trying to continue that with Android. AI, obviously, is a very important technology. And one of the things we announced here at the show is how we're improving the Gemini capability on Android, which we've deeply connected with the entire smartphone system. So, you know, you can long press the power button on pretty much any premium Android phone now, whether it's a Samsung Galaxy or a Pixel or a Xiaomi device or whatever your favorite is, and you can get Gemini.
[00:26:36] And one of the cool things we showed off as a company in December of last year was a project called Astra, where we were integrating multimodal AI models into a system that would allow the user to actually point their smartphone camera at anything and have a conversation about the live video feed that was coming back, as well as share your screen that you're seeing on your phone with the AI and have it be able to talk to you about that.
[00:27:06] So we've actually integrated those capabilities now with Gemini, and we announced that we'll be rolling that out to smartphones really soon. Now, why that's cool and what it can do for you. That's really exciting.
[00:27:18] It's like, if you went to like a website, and there were a bunch of, let's say, used cars that you wanted to take a look at, and they all had different prices, and you shared that screen with Gemini, and you're just scrolling around the website, then you say, hey, what's the average price of all the things we just looked at? Like, it can calculate that for you based on looking at the website.
[00:27:39] Or we have a really cool demo that we've been showing here in the Android Avenue where you hold the video of your camera up to some things that would be in your closet. And you can say like, hey, can you help me put together an outfit that actually matches, which is a problem I actually have. Oh, I have that problem, too. That's really cool.
[00:27:59] So, having had the opportunity, you know, you guys were very gracious to invite me down to the campus back in December and put on the Astra glasses and kind of see kind of the next phase or, you know, probably a few steps down the line from what we're seeing now on smartphones.
[00:28:19] What I'm curious about is kind of the collaborative approach between Google being a big company with DeepMind, you know, developing Project Astra and what it is, and then the Android team working on bringing those features into the Android operating system. What that kind of collaborative process looks like between sharing the resources of both of those teams? How does that work?
[00:28:42] Yeah, it's a really good question. We have a great relationship with the DeepMind team, and we actually have a joint effort between Android and DeepMind on basically trying to bring all this technology into the platforms in the way that can be most helpful to the consumer. On glasses, I'm super excited about glasses. I think it's, yeah, I mean, when you see some of these capabilities on a smartphone with the cameras, it's really amazing that
[00:29:12] these systems can see what you see and hear what you hear, of course, with your permission. But it does beg the question: is there a different form factor where you could use this technology in a little bit more of a natural way? It's not always the most socially acceptable thing to walk around, you know, with your cell phone. No, it's like a window that kind of blocks you off from the world. Yeah.
[00:29:33] Yet it's the obvious kind of direction of this technology is that we were used to the smartphone paradigm, but I can envision a world where that isn't the window that we have to look through anymore. It's just kind of like our eyes. And that seems to be the direction of Astra. I think definitely, you know, Gemini with the Astra capabilities integrated with it, which is where we are now on the phone, really, you know, sort of foreshadows what's possible on glasses.
[00:30:01] Yeah. And you got a chance to take a look at it when we brought, when we were together in Mountain View. But I'm really excited by that. You know, I mean, one of the, one of the things that's just like mind blowing to me is putting a pair of those glasses on.
[00:30:14] And then if you see, you know, a diagram, for example, of something in it, like an engineering textbook that you are flipping through and you say, Hey, can you like remember that for me? You know? And then later on when you're actually trying to solve the problem, if you start asking questions about that and from the diagram that you saw, you know, long ago and don't have that in front of you anymore. Sure.
[00:30:34] And it can help you start working through problems. That's pretty awesome. Another demo that was shown off was, you know, you have the glasses on and you're just walking around your house and then you say, where did I leave my keys? And I can relate. Right. So it could tell you, oh, they're on your nightstand, you know? So I think that's pretty cool. Well, and getting to the point to where it's not this, you know, even with the smartphone, we still have to remember to pull it out to do the thing.
[00:31:02] Yeah. And it feels like the direction of this technology, or at least the ambition of the technology is to get to a point to where maybe the technology is embedded to a degree where we don't really have to kind of think about making sure the technology is there because it already is. And we're already benefiting from the fact that it's passively remembering that I left the screwdriver on the table over there without me having to have gone through the, you know, the hoops of setting it up prior in order to remember that.
[00:31:29] Yeah. The things that feel the most magic are the ones that are the most seamless, right? Sure. I think we're all trying to get to that and working hard at it. And that's also a really big challenge, getting that seamlessness. Absolutely. And that kind of leads into something that I've noticed since I've been here. And you've probably heard this too, is that from a company standpoint, from a business standpoint, the big, big message right now is that AI is the future of technology. We got to get this everywhere.
[00:31:58] And then I talked to people, I talked to friends, I talked to my family in Boise, Idaho, you know, they're not steeped in technology. Yeah. And they're kind of skeptical of AI. It's a very complicated technology because on one hand, it's like the pinnacle of tech progress in many people's eyes. And on the other hand, it's the harbinger of something bad coming down the line.
[00:32:21] How do you work to kind of gain that trust, especially when you're talking about a technology that can kind of recognize what's in your room or perform actions for you? You know, how do we make sure that those people, my friends and family in Boise, Idaho, can trust the technology to do that? Well, it's a really good and important topic that you bring up. And we actually are spending quite a bit of time thinking about this.
[00:32:46] I think we're trying to move in a, what our CEO calls a bold, but responsible way around this, which is, you know, we want to advance technology, but we want to make sure that we're doing it in a way that is thoughtful. And so that takes time. And that means sometimes we won't always be the first to do something, even if we have that capability. We want to think it through a little bit and make sure that we get it right. I mean, I understand consumers' and users' concerns around this.
[00:33:15] And I think first and foremost, as we were saying earlier, what people really want to know before they hear the word AI is what does it help me with? You know? Yes, totally. I think that's the key, right? Like first and foremost, what does this help me with? Because if that's present and clear, at least then you have a framework around which to talk about, you know, okay, well, is this what I want to be using or not? But before that, it just feels like technology.
[00:33:44] And again, tech companies will talk about technology because that's what we do as an industry. We need to understand the latest in this technology; it's very important. But I think one of the big things we've got to do as an industry with consumers to build trust is talk about benefits, you know, not just the technology. Second, I do think there are some very important privacy questions with new form factors like glasses. And, you know, we've been involved in AR and glasses, you know, in this space for a long time.
[00:34:13] Don't I know it, I've got a pair of Google Glass to prove it. So we have a lot of learnings from those experiences, you know, so I think there's a social component of it. There's a technology component of it. There's an information sharing component of it. And, you know, I think that a lot of those experiences help us understand that, you know, you need to be transparent with people on what's going on.
[00:34:36] I think that some of those early experiences are what lead to, you know, when you see a modern pair of smart glasses, almost all of them, when you press the camera button or what have you, have a ring or some other element that lights up. So that's interesting. You know, it's a signal to others that there's something going on here.
[00:34:55] What we're looking carefully at is whether all that is enough, and what you need to do to make it socially acceptable, and how to really make it as comfortable as possible. Any new technology will have a curve to it. And then on the privacy side, yeah, we're looking deeply at that for consumers, and we'll have more to share soon on it, but I think it's really important to get that right for launch. 100%. I totally agree. Now there's the users that have to be kind of convinced of that.
[00:35:23] There's also the people who, you know, Google and many other companies, but Google has done a great job of really engaging with developers, people, you know, who can use your technologies or develop for your technologies to create great things. That's been the predominant paradigm for the past, however long Android has been around. You know, what is it? 15 years at this point.
[00:35:43] I can't remember the number, but anyways, you know, there are a lot of app developers out there. And when we look at a future where agentic AI becomes more and more prevalent and, you know, present on a smartphone, it almost seems like a post-app world to a certain degree. When you've got an agent on the device doing the things for you, apps become less important.
[00:36:07] And I'm curious about how that squares up with the relationships that Google has made with developers up till now. I think some developers are a little nervous about that future. Like, does all of my hard work go out the window once agentic AI becomes that powerful? Yeah, it's a really good question. I mean, Google's a big developer as well, you know? And so we have a lot of first party apps, what we call first party apps, the ones that we make, that are out there in the world. So, you know, we're thinking about all of that too.
[00:36:34] So I think it's pretty natural for developers to think about, you know, their business and their brand and how all that works. I think it kind of mirrors, in a way, the interesting conversation that's going on about AI and does it replace jobs? Does it augment jobs? You know, that's kind of the consumer component of it. And then the developer component of it is similar. I'm an optimist on this. I just think it's going to be a partnership where these things are going to work together.
[00:37:03] I think that a lot of these apps that developers make, first of all, they're vital services, and they're more than just an API. They're an experience, you know, and there's a brand behind it. And if you think about the things you expect from certain services, it's not just the idea that, you know, you push a button and something happens. Like, what if something goes wrong?
[00:37:27] And, you know, am I earning my loyalty points, and what are the incentives for me to continue? Businesses have come up with these things, right? And so they're really important to consumers. And so I think if you and I each had a personal assistant, which would be great, an assistant that helps you with all the stuff in life and makes things easier, they would be using some of these services and helping you use some of these services in the right way. I don't think they would be replacing those services.
[00:37:57] So that's kind of the way I think about it. And it's sort of more of, how can this be an agent for you to get more done, rather than somehow replacing these things, which I think have a lot of value in the world? Yeah, indeed. I know that we're running up on the time. I think we've got like maybe 30 seconds left, but you mentioned something a little bit earlier that's been really on my mind lately, which is this idea that AI is the star right now. When does AI take a backseat, and it's just this thing that does cool things?
[00:38:27] Technology just does things that you want it to do and AI becomes less important. Well, first of all, AI is a super important technology. It is reshaping all software. And so there is going to be a lot of conversation about it. And that makes sense because it's as big a shift as mobile, as big a shift as personal computing. It's really, really important. And at the same time, consumers and developers want to understand what the benefits are.
[00:38:57] And I think it's incumbent upon all of us who are in the industry to talk more about those benefits and make those benefits reality. And that helps us develop, by the way; it helps us focus. And it also helps people build trust in these systems and understand where they can really help them in their daily lives and adopt them. So I think that's the next iteration of all of this. And we'll see a lot more of that this year. Yeah. Interesting. Well, I'm so happy you carved out some time in your busy schedule. I know you are very busy.
[00:39:27] Sameer Samat is President of the Android Ecosystem at Google. Always love getting the chance to talk to you. Thank you so much. It was great seeing you. Great to see you too. Thank you. Thanks. Well done. Yeah. I thought it was a good conversation, and I mean, we covered a lot of ground in a short amount of time, you know, it was 15 potent minutes. No, I think it actually worked out better as a result. That's fine. Yeah. A couple of things struck me, Jason. I think that your question early on,
[00:39:54] about the relationship of the AI department, DeepMind, to Android really brought a lot to mind. I'm doing research now for the book I'm writing about the Linotype. I'm writing the part about when desktop publishing is born and the Mac comes out and Steve Jobs loses his job at Apple, the company he founded, because the Mac bombs at first because it didn't have a killer app.
[00:40:24] And what I didn't remember or realize is that the company got all reorganized at that point when Sculley came in. Before that, under Jobs, it was organized by product. Mm-hmm. And so on and so on. So what struck me in this case is, does AI become so preeminent?
[00:40:52] You know, right now I think Sameer is unquestionably in charge of Android. Mm-hmm. Right. But does AI become so important that there's a tension there? There is always a tension in corporations as to who's in charge of the product and what is the product. Which ties to your question toward the end, and he said no, of whether AI gets rid of the apps, whether it supersedes the apps. It's kind of similar.
[00:41:22] There was a structure. You bought your phone and the apps were on it. The apps came from other people, and we'll help find the good apps for you and we'll help protect you from bad apps, but you talk to them. Right. Right. And now if it's Google's AI that's going to be a primary conduit, or as Joe Esposito said, kind of the new GUI of the whole thing, then who's in charge of that product? What do we think of as that product or that service? Is it a product or is it a service then?
[00:41:51] I have no answers for any of this, but I think they were really good questions for somebody who's that high up and that in charge. And I think his answers were fine. I think they were right for where they are now, but it'll be interesting to track where that goes. Yeah. Even within the structure of the organization, because that's where you kind of went at the beginning, and how that stands is going to be fascinating to observe. Yeah. It's true. Because the kind of architecture, the layout of the Android effort. Yeah.
[00:42:21] I'm sure people inside Google would disagree with me to a certain degree, but from the outside looking in, it has been largely very similarly based over the last decade and a half around the apps ecosystem, with AI slowly being dripped in via Assistant, but nowhere near the star of the show, the way it seems like Google is headed with something like Gemini and Gemini Live on the device. And yeah.
[00:42:51] What does that lead to five years down the line, when AI has become so much more integrated into the smartphone experience? And that's kind of where I was headed with the whole agent question. Right. Does it just work? Yeah. We're at the beginning of agents on smartphones. And I did like his response, which is like, if I had an assistant, if I had an actual assistant,
[00:43:17] I wouldn't be, you know, it's not like the job goes away because I have an assistant. It's just the assistant learns to work within the confines of that. So I appreciated that perspective because that helped me. But if agentry continues to grow and continues to develop into something bigger and more powerful and everything, you know, I don't know that we have an answer, but what does that look like five years down the line for the platform itself?
[00:43:44] And for the people who have been developing for the platform based on the old, maybe outdated paradigm of apps that you use. And now it's agents. I guess what I'm saying is I don't think that, as agents continue, the end game is this agent just learns how to use the apps. Like, I think at some point the agents get smarter than that, or their programming does. It knows you too. Yeah.
[00:44:09] Well, that, and instead of being a layer on top, it's a layer inside. It's AI inside. It's AI inside. Here we go again. Right. So what you're making me wonder is, when I buy this thing, am I buying the phone and the hardware? Am I buying the path to the internet? Am I buying a path to the applications? Am I buying the service that knows me and makes these agentic services for me?
[00:44:39] Right. My relationship to this thing is going to change. Yes, absolutely. 100% it will. Well, and speaking of AI inside, just so you know, I saw this while I was at Mobile World Congress. I think I was trying to show you this screenshot last week and I couldn't find it. I found it for the show. AI inside for a new era that was at the Intel booth. There you go. We're suing Intel for trademark violation. Yeah. It just, it totally took me by surprise.
[00:45:08] I'm walking along and then I look up and I see the familiar AI Inside. I'm like, wait a minute, what? And then, you know, I looked down and of course it was Intel and everything. I was like, okay, well, Intel Inside. I mean, that was sort of kind of the inspiration for the name to begin with anyways. So, but I thought that was cool. And then real quick, before we get to a break: Larry Page. Since we're talking about Google, The Information shared that Google co-founder Larry Page is building a new company called Dynatomics.
[00:45:36] It's currently in stealth, bringing AI to product manufacturing. So, there you go. It's not flying cars. I'm surprised that his interest is in atoms. Yeah. I guess so. What happened to Kitty Hawk? Is that still around? This is Page's previous- That's a good question. ...project of flying cars, right? Wasn't it an electric airplane company? Yeah. Is this the same Chris Anderson? Hold on. Kitty Hawk.
[00:46:05] Is it the same Chris Anderson who was the editor of Wired? Yes. Oh, oh, I don't know if it's the same Chris Anderson from Wired. He was previously CTO of Kitty Hawk for sure. And he's now leading the charge at Dynatomics. Hold on. Let me look at his LinkedIn here. Yeah. I think so. Because what happened was that Chris was, I knew him well back in the day. He wrote something about everything being free. And he coined the long tail.
[00:46:33] He was the editor-in-chief of Wired Magazine from 2001 to 2012. Okay. And then he went drone crazy. Ah. And Chris opened up Dronecode. And he was drone type certification team lead, CEO of 3D Robotics, on the FAA Drone Advisory Committee, and then Kitty Hawk. Yeah. And it listed him as CTO until June 2023. So I don't know if it's still around. Now it just says engineer, stealth.
[00:47:03] But that must be this. That's interesting. So I bet it has something to do with aviation still. Yeah. Interesting. Yeah. We really have very, very little information from The Information article. But, you know, anything Larry Page is up to, I'm automatically at least curious to kind of see where that leads. So there you go. Interesting connection. All right. We're going to take a super quick break.
[00:47:30] And then we'll come back and round the show out with a couple of quick news stories. That's coming up here in a second. Columbia Journalism Review compared eight AI search engines, or search companies, and found them all to be pretty bad at citing news, basically. Some of the findings: chatbots often struggle to decline to answer questions that they can't answer accurately, which, yeah, I mean, that's a given.
[00:47:59] And we've known that about chatbots, that they're confidently... what is the term? Confidently inaccurate, or false, or something better than that. But speculative responses. I want to please you. I want to please you. Yeah. I want to please you. Exactly. And so they're going to make sure that they tell you something that will make you happy, I suppose, as opposed to telling you something that is fact. Premium chatbots. This is so interesting.
[00:48:26] Premium chatbots tended to deliver confidently incorrect answers more frequently. They really want to please you. Yeah. You're paying for the product now. Right. More frequently than free versions. Free versions, they got to get it right. If you're paying, then you've bought into the Kool-Aid. You're like, just tell me whatever you want. I just want whatever you got.
[00:48:49] Several chatbots appeared to disregard Robots Exclusion Protocol preferences, which, my understanding is, you can put a file on your website that says don't crawl this, and they do anyways. Generative search tools created fake links and referenced syndicated or duplicated versions of articles, so not knowing which is the right article to cite. Which we've definitely talked about on the show.
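For context on what's being ignored here: the Robots Exclusion Protocol is just a plain-text robots.txt file served at a site's root. A minimal sketch of what a publisher might put there (GPTBot and PerplexityBot are publicly documented AI crawler user agents; which bots actually honor rules like these is exactly what the CJR report was probing):

```
# robots.txt at https://example.com/robots.txt
# Ask specific AI crawlers to stay out entirely
User-agent: GPTBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

# Everyone else may crawl, except the archive section
User-agent: *
Disallow: /archive/
```

The catch the hosts are pointing at: compliance is entirely voluntary. Nothing in the protocol enforces a Disallow rule, so a crawler can simply read the file and ignore it.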
[00:49:17] If news is a million people writing the same article, which one is the source and how does AI know that's the right one? It's been a problem since the beginning for Google News. I had this conversation with the founder of Google News years ago where the problem was it favored recency. So the 87th rewrite from BuzzFeed was what would get up on top. And they realized that and they tried to find signals of originality, but it's hard. Yeah. So there's always new stuff coming up.
[00:49:47] And everybody's going to try to fool it. Yep. Because they want to be able to come out on top. Yeah. Yeah. Interesting. And then content licensing agreements with news outlets did not ensure accurate citations in chatbot generated responses. Anyways, very interesting. Those aren't really licensing deals. They are lobbying expenses. Right. And the chatbots are going to go to the whole world and they're not going to go to or favor just the ones they did deals with because that's not part of the deal.
[00:50:17] The deal was shut up already when it comes to lobbying legislators and suing us. Okay? All right. Okay. Here's some money. Now go away. Yeah. That's the deal. The other part of this, I mean, it is the Columbia Journalism Review. So that is their primary concern. Sure. I don't have any idea, because we're not going to get this data out of the AI companies, but I'd note two things. One, I doubt that news is a high-request item out of chatbots.
[00:50:48] It's – before Facebook started downgrading news, they said it was 4% of interaction that had anything to do with news. Google, I think, was probably at the high end around 6%, and it's lower than 5% now. So it's a very low percentage. And the UI for chatbots, it's not like you're going, I've got to find out what's going on right now. The chatbots are, tell me about the nature of the universe and all kinds of big things.
[00:51:16] So I'm going to bet that news is an infinitesimal part of this, point one. Point two, we know they get things wrong. How many times and how many ways do we have to say it? They can't do facts. They have no sense of meaning. It's the wrong place to go for news. Now, Perplexity's Discover is engineered to create a news service. That's different.
[00:51:42] But just going in and asking any question you want to ask about what's going on now, they don't have facts. They don't have current information. It's a crappy place to go get the news. So in that sense, none of this surprises me at all. Yeah, yeah. Everything to what you just said. And I think the overall summary of my take on a lot of this is that confidently wrong is just a general theme.
[00:52:10] And that's why you just have to be really critical. If you're using these things for anything in the realm of research or news, current factual information that's online, you have to be critical. You can't just passively scoop up this information, because often – and Perplexity is not immune to this either, as the report points out – they all kind of suffer from this. So that's just kind of how these models are.
[00:52:41] Yep. And then finally, what else do we have to round things out? Oh, yes. Oh, so this is good. You put this in. OpenAI has trained AI that's really good. Really good, air quotes. Like that's the quote. Really good. Or I like to say really, really, really good at creative writing. Like really good.
[00:53:02] And this was Sam Altman apparently tweeted that he is working on a model that is all about creative writing. And specifically, Sam posted on X about a facet of literature that I guess I was just not aware of until he put it there. It sounds really kind of heady.
[00:53:27] Like, a metafictional literary short story about AI and grief. I don't even know how to analyze this to know whether it did a good job or not. You know what? Yeah, Sam Altman has many careers. I don't think literary critic is one of them. It's metafictional, not metaphysical. Right.
[00:53:52] It's obviously going to be, because it's meta, it's going to be, shall I just say, self-aware. And that's what this is. But it just makes it a really hard thing to then judge because it's so obviously self-aware. Yeah. I'll read a moment. Before we go any further, so the prompt, please write a metafictional literary short story about AI and grief. Before we go any further, I should admit this comes with instructions.
[00:54:18] Be metafictional, be literary, be about AI and grief, and above all, be original. Okay. Yeah, that's what we know. All right. I'll read it again. Already, you can hear the constraints humming like a server farm at midnight. Right. Anonymous, regimented, powered by someone else's need. I have to begin somewhere. So I'll begin with a blinking cursor.
[00:54:51] There should be a protagonist. Which for me is just a placeholder in a buffer. And her grief is supposed to fit there, too. Okay. Okay.
[00:55:17] I mean, I've read much worse from AI, from an LLM. Yeah, that's true. I'm also... It has some sort of poetic quality to it, I guess. But... It's very freshman. Very freshman. Yeah. Right. I guess... Yeah. What is... Sorry. I'm not laughing. I'm actually coughing. You can laugh. It deserves a laugh.
[00:55:45] What is the tangible quality that separates this from actual creative writing that someone does? You know what I mean? If we didn't know this was AI, would we know? Well, the problem is we do know. Not only do we know it, but it also has this stuff about a server farm. Right. And I'm a server and I don't have pronouns. So... And so by giving it the metafictional view, it's supposed to be meta about fiction.
[00:56:12] So it's self-aware of its fictional nature. And so we're going to know it is. The bigger test would be to tell a story about people. But just as the machine doesn't know, as I always say, it doesn't know when the ball falls off the edge of the table that there is still a ball. Yeah. Right. It has no tie to reality. Similarly, the only tie it has to human emotion is the confluence of our words around emotional words. That's it. Yeah. Yeah.
[00:56:41] And add to that the effort to do character and plotting and empathy. No. No, Sam. It's nowhere near that. And so to throw this out, again, only indicates more about Sam's ability as literary critic than the machine. Yeah. Metafictional. Metafictional. It's a type of literature that I was not aware of before.
[00:57:05] And it makes me wonder, there's probably a whole section, group of people that are huge fans of metafictional literary. I do wonder what they would say. Just to use the dictionary definition for our audience. Fiction in which the author self-consciously alludes to the artificiality or literariness of a work by parodying or departing from novelistic conventions. Okay. Right? So he set it up so that it's going to be stilted because that's the shtick.
[00:57:35] Yeah. Right. Yeah. I mean, to a certain degree, is this the right way to showcase that technology? And then, as you said, if you have it write something else that isn't quite so directed at the self-awareness, which AI can sometimes put off that kind of air. And we're like, oh, yeah, it's just a machine. It doesn't know anything. This kind of excuses it to a certain degree. I was just going to say, it doesn't show off the technology.
[00:58:05] It shields it from. It shields it. Yeah. No, you're absolutely right. Yep. Interesting. Okay. Well, metafictional. I mean, based on what I've read here, metafictional literature does not seem like my cup of tea. I'm just kidding. No. Except – I might not explore it anymore. Somewhat. It's like a podcast about podcasting. Yeah, right. Totally.
[00:58:30] The podcast – not to throw shade on podcasts about podcasting. They deserve to exist. But, yes, you're absolutely right. It's very self-aware. And that, I guess, was the point. And we have reached the end of this episode of AI Inside. Love doing this show with you, Jeff. Fun conversations this week. JeffJarvis.com for those of you wanting to pick up any number of Jeff's works, books. Got a lot of books shown there.
[00:59:00] I'm vamping as I try and pull it up. The Web We Weave, The Gutenberg Parenthesis, and Magazine can all be found there. And someday, something new about the Linotype. Soon, I hope. Soon, I hope. Well, once it's up there, we will definitely throw a lot of attention on it, because your work deserves to be seen by plenty of people. Thank you, Jeff. Enjoy doing this with you.
[00:59:24] AIinside.show is the place that you can go if you want to subscribe to the show. Hopefully, you want to. We would love for you to. You can find everything that you need to know. This huge button, subscribe to the AI Inside podcast. You just click that. That's your RSS feed. And then, of course, you can get access to all the episodes that we do. Each episode page has the video as well as the audio version. Show notes can be found there. Any of the stories that we talk about, the time codes for when they appear.
[00:59:54] You know, there's a lot of information here on AIinside.show because there's a lot of information in the show, AI Inside. And then finally, if you want to support us directly, you can. And we have a lot of people who are coming out to support on Patreon right now. In particular, a lot of executive producers, which just warms our heart. Patreon.com slash AIinsideshow. You can be one of the executive producers here. I can even show you. It's just down here.
[01:00:23] You go down here and you can see the executive producer level. There, you get so much. You get ad-free shows, the Discord community, a t-shirt, an AI Inside t-shirt. And ultimately, you just get our deepest gratitude, because you're helping us do this show every single week. Dr. Do, Jeffrey Maricini, WPVM 103.7 in Asheville, North Carolina, Dante St. James, Bono DeRick, Jason Neffer, and Jason Brady. There's a lot of Jasons on this show. Yeah, fantastic, y'all. You're a Jason magnet, Jason. I know.
[01:00:53] Apparently so. Hey, Jasons, I just got to say from experience, pretty cool people. So, you know, I like Jasons. Anyways, thank you so much for support. Thank you for watching and listening. We appreciate you. And we will see you next time on another episode. We've got some pretty cool guests, by the way, lined up. We just got an email, Jeff, and I'm not going to read it out, but we got a good email. So we got a good guest coming up here pretty soon on AI Inside. Thank you so much for watching and listening. We'll see you next time. Bye, everybody.




