Jason Howell and Jeff Jarvis are back for the latest AI Inside. We cover OpenAI’s nonprofit reversal and Musk’s ongoing lawsuit, Altman’s Orb stores for iris-scanning identity, Google’s confusing new AI Mode in search, Gemini 2.5 Pro’s IO Edition and coding challenge, Amazon’s Vulcan robots with a sense of touch, Apple’s AI search comments shaking up stocks, Reddit’s new human verification rules, why admitting AI use sparks distrust at work, an AI-generated impact statement in court, and the Associated Press’ Alberta-Quebec AI blunder.
Support the show on Patreon! http://patreon.com/aiinsideshow
Subscribe to the YouTube channel! http://www.youtube.com/@aiinsideshow
Enjoying the AI Inside podcast? Please rate us ⭐⭐⭐⭐⭐ in your podcatcher of choice!
Note: Time codes subject to change depending on dynamic ad insertion by the distributor.
CHAPTERS:
OpenAI reverses course, says its nonprofit will remain in control of its business operations
0:12:45 - Welcome to Sam Altman’s Orb Store
0:25:38 - Google: New ways to interact with information in AI Mode
0:41:42 - Gemini 2.5 Pro Preview: even better coding performance
0:44:57 - Amazon makes ‘fundamental leap forward in robotics’ with device having sense of touch
0:49:28 - Apple to add AI search partners to Safari as Google usage falls
0:52:03 - Reddit will tighten verification to keep out human-like AI bots
0:54:20 - Being honest about using AI at work makes people trust you less, research finds
0:59:53 - AI of dead Arizona road rage victim addresses killer in court
Learn more about your ad choices. Visit megaphone.fm/adchoices
[00:00:00] ServiceNow supports your business transformation with the AI platform. Everyone is talking about AI, but AI is only as powerful as the platform it is built on. Let AI work for everyone. Eliminate friction and frustration for your employees and tap the full potential of your developers. With intelligent tools for your service operations that delight customers. All of it on a single platform. That's why the world works with ServiceNow. More at servicenow.de
[00:00:31] This is AI Inside, Episode 68, recorded Wednesday, May 7th, 2025. The Orb Knows Your Humanity. This episode of AI Inside is made possible by our wonderful patrons at patreon.com slash AI Inside Show. If you like what you hear, head on over and support us directly. And thank you for making independent podcasting possible.
[00:00:58] Hello, welcome to yet another episode of AI Inside, the show where we take a look at the technology that is layered with artificial intelligence everywhere. It's just sprinkled throughout. And joining me, sprinkled throughout this episode, is Jeff Jarvis. Good to see you, Jeff. Hey there, how are you? Doing all right. Artificial intelligence is the gravy on all meatloaf. That's right. What would the meatloaf be without the gravy? Seriously. Where would we even be?
[00:01:26] A big thank you to our patrons, patreon.com slash AI Inside Show. And of course, our patron of the week, Charlie De La Vida. De La Vida, De La Vida. I don't know if you say it with the Spanish "de" or "duh," but there you go. De La Vida. And I'm sure you can let us know, contact at AI Inside, if I slaughtered it and you want to correct me on that. Also, just a friendly reminder: reviews, reviews.
[00:01:54] It's the thing that I keep harping on, because I would love for you to update them. We're getting a few, but I'd love to get a bunch. So if you haven't done it and you're thinking, oh yeah, I really need to do that: right now, while you're listening, go over to Apple Podcasts, log in, and update your review. Just say, this is what I think of the show right now. And then I'll look at it and it'll put a smile on my face. So thank you for doing that. But yes, indeed, indeed.
[00:02:20] And just real quick, we had last Saturday an extra interview episode hit the feed. We're going to be doing it that way with interviews as we go forward. We have another interview related to Intel that's going to hit not this weekend, but next weekend. And yeah, figured that keeps the news train rolling, keeps the interviews separate, and we can kind of feature them. And the Emily Bender, Alex Hanna interview was awesome. I learned a lot and I hope you all enjoyed it. Yep. Yes, indeed. All right.
[00:02:49] So this one, not being an interview episode, is a news episode. So we're just going to dive right in and talk about some news. And yes, it is OpenAI up top. But I thought this was kind of important, because we've been talking for so many months about OpenAI wanting to shift from a nonprofit into a for-profit company, and the Elon Musk drama related to that. And as we know, with this story and OpenAI in general, things can change on a dime out of nowhere.
[00:03:19] That's exactly what seemed to happen: a reversal of its plans to spin off as a fully for-profit company. The nonprofit stays in control while the business raises capital more like a conventional company, and the nonprofit board is going to continue to oversee its mission to benefit humanity. Right. So, A: the nonprofit board stays in charge. They were trying to get rid of that. B: the for-profit company that reports to the nonprofit arm will be a public benefit
[00:03:47] corporation like Anthropic, which means they can make all the money they want, but they're also not obligated to maximize profit, because they have this public mission. Okay. And I don't know what this means for Microsoft and its capital structure. I don't know what it means for the big investment they got recently on the condition of turning into a for-profit company. And I also don't know what it means for the war with Elon Musk, except Musk says the war is not over.
[00:04:15] He's going to continue his suit and not back off. We know that this is not slowing down Elon Musk's kind of rage, his personal vendetta, I guess you could call it. He's a living rage tweet. Yes. He really wants to keep going there. Yeah, lawyer Marc Toberoff is arguing that the outcome does not address Elon Musk's
[00:04:42] concerns that OpenAI is still developing closed-source artificial intelligence for private benefit. And Musk doesn't like that at all. So that's not going to change things. I don't know what else there is really to say about this, other than that the drama just keeps going on. It's amazing. Yeah. The world turns. Yeah. I mean, it started with Musk as one of the co-founders, and a lot of his money.
[00:05:10] And so he's not wrong to have a voice here in what it becomes, but he just is delighting in keeping the drama going as much as possible. And if there's one thing this teaches folks, it is that it's really, really difficult to switch between for-profit and not-for-profit. It's not something you can just go do in a flash. Newspapers are thinking about trying to do it, and some are succeeding, but it takes a lot of work, and that's going the other way. That's going from for-profit, when there's no more profit, to become a not-for-profit and
[00:05:40] try to become a charitable venture. Philadelphia did it. A paper in Washington just did it. And I think there's a couple of other examples here and there: Salt Lake City. But in this business, it's the opposite, because being not-for-profit limited the investment potential here. Because the thing about profit is that it is a way to pay back your investors. It is a way to give them the dividends of their investment. And if you're not-for-profit, nobody owns it, really. So this was a switch on that.
[00:06:10] And either way, whichever route you want to go, it's very difficult when it comes to taxes. Also, obviously, as we saw in this case, it raises all kinds of issues with investors and equity. SoftBank also conditioned its $30 billion investment on OpenAI being for-profit. Now, is that condition going to be met by the main company being a public benefit corporation? Or was it intended to cover the whole thing?
[00:06:37] I don't know how the contract was written, but, pardon me, that's a factor as well. So Altman just finds himself in the middle of drama after drama after drama. What occurs to me, listening to you talk about the challenges of transitioning from one to the other, is that if you're going from a for-profit to a nonprofit, there's that saying, what is it, something
[00:07:05] is reality. Not perspective. Perception is reality. Perception is reality. The transition from a for-profit to a nonprofit kind of has an air of altruism, or maybe that's the wrong word, but it's kind of like: you know what? We were making money; now we just want to do this for the good of humanity or whatever. Going in that direction, there's kind of a good feeling about it from a perception standpoint. Going the opposite direction, I'd say, gives the opposite impression.
[00:07:35] Yes, totally. It's like, oh, wait a minute: once you were for humanity, and now you just recognized an opportunity to make tons and tons of money, so you're going in that direction. Does that, I don't know, place a bit more of a stink on a company like OpenAI, now that they're kind of undoing it? I don't know the answer to that. Yes and no. I mean, I think capitalism is not evil. That'll be my most controversial statement of the year.
[00:08:03] You know, it's what we live under, and profit is what drives that, and profit is what drives investment. And so it's not a bad thing. You know, in my world, in journalism, because we're all a bunch of commies, my students especially come in thinking: well, I'm just going to be not-for-profit, everything's okay, what I do is wonderful, people are going to give me lots of money, and all is well. No, you still have to run it like a business. The only difference is the tax structure, really, in the end.
[00:08:31] And an ownership structure where there's a lack of ownership, in that sense. I mean, there is kind of an ownership, but it's not the same. And you still have to run it with all of the exigencies of business. Mm-hmm. And so, in terms of the way you run it, there isn't necessarily that much difference. Right. Because you're trying to make it sustainable. You're trying to bring benefit back. You're trying to be able to invest in the future. It's not like you're truly working for nothing when you're a nonprofit.
[00:09:01] I think a lot of people hear nonprofit and they think, oh, well, there's no income. There's still a business to be run. There's still people to pay, because they're doing the job. They're doing the work. It's just that the mission is different. Right. Yeah. And so they're trying to mix the two by having a public benefit corporation, but that's still a for-profit company. So Etsy is also a public benefit corporation. And I don't know about Craigslist; I think it might be one too.
[00:09:30] But basically what it says is you're not obligated to pursue profit über alles. You have other structures. So that's okay. That's fine. But I don't know where this all ends up. And in terms of those who've invested in the company, I'm not sure what their views are. I haven't really seen much about it. I was surprised how little reaction I saw in the news, because I thought this was a big deal. And as soon as I saw it, I went running to put it in the rundown and said, Ooh, we're going to talk about this this week, because it's big news. Of course we are.
[00:09:58] And we always start with OpenAI, but now we have actual news about OpenAI. So I'm fascinated by it, but we'll see what the reaction is, and whether it has any impact on Altman's ability to fundraise in the future. Because he's going to have to keep fundraising, because this thing is a money hog. Yeah. And he wants to build Star-whatever-it-is. Starbase? No, what are they calling it? Not Starlink, not Starfleet. Star-something. Stargate. Is it Stargate? I think it is. Yeah.
[00:10:28] They're hosting stars out there. By the way, I just saw that another thing happened: he's now talking about how that's going to be international. Well, I thought the whole point was to please Trump by making it all domestic. But anyway, just another detour on the road to OpenAI's future. So if I were a competitor to OpenAI right now, I'd be pretty happy, because I think it looks like turmoil. Well, and especially if you're the Musk competitor, you're feeling
[00:10:56] like, yeah, I stuck it to the man, and I'm going to continue sticking it to the man. Let's see where we can take this. Yeah. But you know, the funny thing is, Jason, I don't know that Musk actually benefits that much in his business. If you're Meta, if you're... that's exactly it. That's what it all is, right? If you're Meta, if you're Anthropic, if you're Google, you're thinking: okay, they're in turmoil. That opens up space for us. For sure. If you're Musk, you live in turmoil.
[00:11:24] So does this change things for people who work for OpenAI, who were like, oh, we have a chance of transitioning to a for-profit company, great, stick around? And now suddenly things are changing. They're like, you know, plans for the future are different than what they were planning for. With a public benefit corporation, as I understand it, there still is equity. You can still own the equity, you can still be profitable, and that equity can still rise.
[00:11:51] However, I don't know what the ownership structure is from the nonprofit to the for-profit, because the nonprofit owned a lot of this equity that was going to be freed up in all of this. And again, there is Microsoft and its whole investment in this. What changes there? Is there still a renegotiation of the cap table? I don't know. The governance here is that the nonprofit board is in charge. Now, that board has been remade in the image of the current CEO. Yeah.
[00:12:20] And they got rid of the people who got rid of him. So in that sense, there's going to be some more stability in the leadership, but it's a mess. And in a way, I don't blame Altman as much as I blame Musk, because, A, he set it up this way with his fellow investors, and then, B, he came back to fight it at every turn. Mm-hmm. So the turmoil is his victory. You're right, Jason, to that extent. But I think it's a Pyrrhic victory in terms of his own business.
[00:12:50] Yeah. Indeed. Well, it is fascinating. And obviously, with the lawsuit not going anywhere, and more details coming about how they stay the course as a nonprofit and what that's actually going to look like, there's a whole lot of steam left in this story, and I'm sure we'll be talking about it again and again. But there's this other part of Altman's business that... well, businesses, we should say. Businesses. Which is Altman's Orbs.
[00:13:19] Maybe that should be the title. But Sam Altman's Orbs are waiting for you. If you go to a new store that just opened up in San Francisco's Union Square district, you can go into the Orb store. And what is an Orb? It is part of World, formerly Worldcoin. This is essentially a flagship store that they're opening in downtown San Francisco. One of eight... no, sorry. I know that they're opening more than one.
[00:13:48] I can't remember how many other ones they plan on opening... oh, six. Six U.S. locations are opening up, to allow you to go in and peer into these futuristic Orbs in order to get an iris scan. So, to scan your eyes. To verify your humanity in a world that is, as we know, swimming in advanced artificial intelligence.
[00:14:14] And then "join the real human network" is the tagline for this. And, of course, people are saying: dystopia. So I've got two questions. First, are you going to go do it? I don't know. I'm curious. Like, I know San Francisco sounds like it's super close, but it's definitely a trek for me. You know, it's going to take me an hour, hour and a half each way. All the way to I/O. Oh, yeah, it's true. I could do it on the way to I/O. Or on the way back. Yeah.
[00:14:44] Get my iris scanned. I don't know. Is there a reason that I shouldn't do this? I mean, I know that privacy experts would say: storing biometric data, do you really want to do that? You know, that would be my hesitancy. So here's the thing about this. I think Altman's right that we need something that verifies human existence in public discourse and public interchange. But is he the one who should do it?
[00:15:12] Is a crypto-related company the one that should do it? Would it make more sense... well, I was going to say it would make more sense for the postal department, but God knows where that's going these days. Yeah, that's true. Whom would we trust to be an agent of verified identity? This does go back. I mean, I used to help organize, believe it or not, conferences on the future of the postal department, PostalVision 2020. There were about four or five of those conferences.
[00:15:41] And one of the opportunities we talked about a lot for the future of the USPS was that it could be an identity verification authority. Right. Right. And I think more and more, the more that AI makes stuff and jams the internet with it, the more we need a mechanism to say: no, it's me. It's verifiably me. I'm talking now.
[00:16:09] And I don't know how to do that. I mean, ICANN has taken up a bit of that accidentally, not through their own planning, but if you go to Mastodon or if you go to Bluesky and you verify yourself, you do it by verifying through the domain that you own. Right. And people trust that that's actually you making the stuff there. So I'm buzzmachine.com, and I have jeffjarvis.com.
[00:16:37] And if you trust that I am in fact the human being who is doing that (and I haven't gotten around to doing this everywhere, but if I showed up using that domain on both; I think I actually did do it on Mastodon, I haven't done it yet on Bluesky), then you have to make a judgment: yes, I know there's a domain, and I know Jeff Jarvis is the one using it. So if that shows up here, that's him. That's kind of the closest thing we have right now to that kind of verification.
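[For the curious, here's a rough sketch of how that domain-based verification works in Bluesky's case: you prove you control a domain by publishing a DNS TXT record containing your account's decentralized identifier (DID). The record name is real; the domain and DID value below are placeholders, not Jeff's actual record.]

```
; Bluesky handle verification: a TXT record at _atproto.<your-domain>
; whose value is the DID shown in the app's "Change Handle" dialog.
_atproto.jeffjarvis.com.  IN  TXT  "did=did:plc:examplevalue1234"
```

[Mastodon does something similar in spirit: a rel="me" link from a page on your domain back to your profile, which the profile then displays as verified.]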
[00:17:05] And I think we need it ever more. So I might do it for that reason, but yeah, I just don't trust it. It's not just Altman; it's the connection to crypto. It gives me the creepos. Yeah, right. That ends up being the kind of tricky thing for me too, where I'm like, oh, really? Okay. But you know, you raise the question: is Altman the right person to do it? And absolutely.
[00:17:32] That's an absolutely fair question to ask. Is anyone? I mean, I'm not saying that he's the right person to do it, but he's one of the people doing it versus not doing it. And so, is anyone the right person to do it? Well, is any company? I mean, there was a time, you know, five years ago, when I would have thought Google or Amazon might be trusted to do it, but now they've got cooties. And, I don't know, dig deep enough...
[00:17:58] Probably any of these big tech companies have cooties if you dig deep enough. You're probably going to find some reason not to trust them with something as critical as biometric data storage and all that kind of stuff. So then it just depends on whether you feel it's necessary to have it, and whether you're willing to take the risk. Or is it something where we think a new and independent, non-governmental, non-corporate entity should be started as an identity verification authority? Okay.
[00:18:27] With some security of your identity, but also some measure of privacy and control. You know, and that's... Can we build it with that in mind, with some sort of community involvement? Right. So that it's built with us in mind, instead of built from the frame of Sam Altman and whatever it is that he stands to gain out of something like this, being the person in control and in power of this particular technology.
[00:18:57] So that makes me think that what Altman should do is donate the technology. I don't know how he's planning to make money with the Orb, but if he's not planning to, if it itself is not the center of a profit structure... and by the way, if it were the center of a profit structure, profiting off my identity, you know, I don't know. So it strikes me it needs to be an ICANN-like organization. And I'm not sure what else is of that size.
[00:19:25] And in essence, the ITU, the International Telecommunication Union, has some role in that. The FCC has some role in that when it comes to broadcast outlets. I wouldn't trust my phone company either. I don't like them. I wouldn't trust my cable company. Oh God, can you imagine Comcast? Right. No, thank you.
[00:19:52] You know, my TV screen right now, of course, is on the Vatican, and maybe they should do it. I don't know. But that brings some baggage too, for some folks. I think it all does. Yeah, it all does. But the point of all this, once again: we do need some verification of identity.
[00:20:09] Now, the risk to verified identity, when it becomes required, if you live in a country like China, is that there's no opportunity for anonymity. And thus freedom of expression is severely limited. Mm-hmm. So even having verified identity is a threat. Right. In those circumstances.
[00:20:35] If you find yourself in a totalitarian regime, and hey, it could happen anywhere, then, to be able to undertake certain activities, having to have verified identity becomes a threat. And this is the problem with the UK right now, and Florida right now, and access to porn. Mm-hmm.
[00:20:54] When you go and verify your age for those purposes, you're also registering as a porn fan, no fun in itself, and you become vulnerable to all kinds of other identity theft. If cultural norms around pornography change and shift, and, you know, suddenly the administration decides, you know what, that is something that we want to litigate around and criminalize or whatever.
[00:21:21] Now suddenly they have an immediate pool of people to go after. And that wasn't the case when you signed up for the verification, but now it is. Right. And, oops, can't put the genie back in the bottle there. Yeah. It carries a lot of risk. Also, is Orb on blockchain? I think it is. Yes. A decentralized layer-three blockchain design, right? So the other issue there: that sounds secure, but it also means there's no erasing. Hmm.
[00:21:51] Mm-hmm. I don't know. There's so much I don't understand about blockchain, and I've tried. Like, it's an interesting protocol. I don't want to say it's a technology, but it's an interesting standard. But it raises, number one, environmental concerns, and number two, cultural concerns, because the jerks around all the cryptocurrency that's associated with blockchain (it's not all of blockchain, but it's associated with it) become an issue.
[00:22:19] So, this is not for our show to discuss, but this week I was looking, and there was a big crypto party in... was it Dubai? I think. Yeah. Dubai. And then Trump's having his crypto dinners. Ah, the folks hanging around crypto, and thus blockchain, are creeps. Not all of them. Yeah. Don't worry. Don't write me. Don't write Jason. Right.
[00:22:44] You're not all creeps, but I'm saying the reputation of those technologies takes a hit by association. Yeah. There's definitely a stink around their reputation, you know? And then there's the Melania coin. I think I saw the news that, like, two minutes before that launched, there were a bunch of people that got in before it was announced. And then, you know, they ended up making massive amounts, millions of dollars. It's just a strange thing to hitch your wagon to.
[00:23:13] And yet, I mean, you know, I think blockchain fans, or rather blockchain stans, would say there are plenty of really useful... Absolutely. ...applications for a protocol like blockchain that have nothing to do with crypto. Absolutely. In two of my books, I wrote about an idea I have to update copyright with something I call creditright.
[00:23:41] Where you can note people's contributions to a string of creativity, and thus allow, but not require, compensation or deals or whatever. Right. That is just tailor-made. I wrote that before I knew about blockchain, but it's tailor-made for something like the blockchain. There are tons of uses for it. A lot of people are trying to put journalism on the blockchain so that it can't be censored. Or erased, then. Yeah, I do remember that. So yeah, I'm not against blockchain.
[00:24:07] And even crypto in theory is okay, but culturally... Right. Yeah. It just ends up being... all right. So after all of this, are you going to stop on the way back from I/O? I don't know. I'm curious, but I'm still hesitant to go in there and hand over my iris scan. At the same time, I think: how many dumb ways have I given up my identity online as is, you know?
[00:24:37] So it's kind of like, is it just too late? And I'm putting up a stink when it really doesn't matter in the long run. I don't know. It feels like a different thing, though. It feels like something that I should scrutinize a little deeper. Well, I'd be very curious to at the very least walk in and hear the pitch. Yeah. Yeah. And kind of see the technology and get the energy of the room. And that's why the Wired story was a fun kind of thing; you know, they had lots of really great pictures of what the environment was like.
[00:25:06] And you get a total Silicon Valley tech-bro vibe about it, which is kind of unsurprising, to be honest. And you get, what, $40 in crypto? Oh, yay. Just think of what that's going to grow to. Visa, Match, Tinder, all partners of this. You know, online dating, that's a big one: verify you're a human. So I agree.
[00:25:34] There are obvious reasons for something like this to exist. I just don't know that I'm warm to it yet. Yeah. I'll be curious to see how this ferments in your brain. Ferments! Does it get sweet or does it go sour? We'll find out. We're going to take a quick break. And then when we come back, I'm going to show off AI Mode, because I've had access to it for a little while and I'm a little puzzled by it. And so I'll see if you're puzzled by it too, Jeff.
[00:26:03] We're going to talk about that here in a moment. Let's talk about something we don't talk about enough: what happens to all the data we share with AI platforms like ChatGPT or Claude? Every question we ask, every idea we brainstorm, it's all being collected and tied back to us as individuals. But then what? Does it get sold to advertisers, corporations, maybe even governments? We've also grown accustomed to social media companies selling our data over the last decade.
[00:26:31] And I'd like to think that maybe we've learned a thing or two so we don't make the same mistakes again. That's why I've been using Venice.ai, who's sponsoring today's episode. Venice.ai is private and permissionless using leading open source models for text, code, and image generation. And it's all running directly in your browser. So there's no downloads, no installs. In fact, your chats and history live entirely inside your browser. They don't even get stored on Venice's servers.
[00:27:00] Their pro plan is where things get really interesting, though. You can upload PDFs to get insights and summaries. You get a user controllable safe mode for deactivating restrictions on image generation. You can customize how the AI interacts by modifying its system prompt directly. And finally, you get unlimited text queries along with high image limits that I couldn't even hit if I tried. We talk often on the podcast about the benefits of open source AI, and that's exactly what Venice.ai is using.
[00:27:29] If you care about privacy like I do, or you just want an uncensored and truly open AI experience, Venice.ai is worth checking out. Go to my sponsor link, Venice.ai slash AI inside. Make sure to use the code AI inside to enjoy private uncensored AI. Use my code and you'll get 20% off a pro plan. That's Venice.ai slash AI inside with code AI inside for 20% off the pro plan.
[00:27:57] And we thank Venice.ai for sponsoring the AI inside podcast. This episode of the AI inside podcast is sponsored by BetterHelp. I've noticed a big shift in recent years towards taking mental health seriously. And I welcome that change because I recognize firsthand the benefits of taking care of my own mental health. Therapy can be a transformative experience, and it definitely has been for me. But no question, it can be pricey.
[00:28:25] Traditional in-person therapy can run anywhere from $100 to $250 per session, and that adds up. And it really should not stand in the way of getting the help that's needed when it counts. BetterHelp is online therapy that can save you on average up to 50% per session. With BetterHelp, you pay a flat fee for each weekly session, and that adds up to big cost savings over time. And not only that, BetterHelp is much easier to access than traditional therapy because it's
[00:28:54] an online experience that meets you where you are at with quality care from more than 30,000 therapists at a price that makes sense. You just click a button to join. Your therapist is there from wherever you happen to be. You can get support with anything from anxiety to relationships to everyday stress. And if you just aren't feeling it with your current therapist, you can easily switch to another at any time. It's mental health within reach, and it's totally worth it.
[00:29:22] I know firsthand, I used BetterHelp a few years ago myself. It was incredibly convenient, and more importantly, impactful to my life. I felt heard and supported, and that's what I really needed. Your well-being is worth it. Visit BetterHelp.com slash AI Inside today to get 10% off your first month. That's BetterHelp, H-E-L-P dot com slash AI Inside. And we thank BetterHelp for their support of the AI Inside podcast.
[00:29:54] All right, so Google has AI Mode, which is essentially kind of an extension of the AI Overviews that you see at the top of your blue-links page. But it's a separate page, and they've been testing this, and you can get to it through Labs. And if you were on a waitlist, you might have had access to it. Now that waitlist has opened. So anyone can... Not for me! Sorry, I have to complain, because I use... Whatchamacallit, yeah. Yeah.
[00:30:24] I mean, do you have, like, a personal account that you can use in that case? Yeah, but then it doesn't relate to anything else that I have. Yeah. And then it doesn't tie into stuff, yeah. Like, I've got a personal account and I've got a business account, and depending on what I'm doing, you know... I just had to get that complaint in. Keep going. I'm sorry. No, I hear you. Google! Have you had this complaint for, what, 10 years now? Yeah. Well, and it is kind of crazy: the one thing I pay Google for, I get restricted in what I can do with it. It's weird.
[00:30:53] That is a really weird reality. Yes. I totally support that. So, essentially, this is now open for most people, except for Jeff. And you can... You know, it is... And let me just pull it up so you can kind of see what I'm talking about here. As it stands right now, AI mode is a separate destination. Like, I could go to the All tab, and apologies to audio listeners who might lose some of the context.
[00:31:19] Although, All is not looking very normal. This is interesting. I thought going to All would take me to my normal Blue Links search bar. Oh, maybe this is just what it puts because I don't have a search. So, if I did a search for Jeff Jarvis, it would just show me the normal kind of bloated Google search results that we get now. If we go to AI mode, it will put Jeff Jarvis in there. It will, you know, think about the query, look for results. Does it give me a whole... Oh, okay. Good.
[00:31:50] It stopped at the single sentence of description. So, it gives me kind of probably a lot of the similar kind of results that we saw on the other page. But it does a little bit of the LLM quality kind of summarization slash organization. Some of the stuff that you might see in AI overviews definitely extended and expanded upon, though. So, you're not going to get that small little block up at the top. You get a little bit more of an organizational quality.
[00:32:16] Now, they've added features like visual cards for local places, products that show details like ratings, reviews, real-time prices. Some of those things that we've gotten used to over time seeing in Google search. And then, I guess, past searches. You can get to those. You can... You still... If you want to test this out, you still have to go into labs. And I think it's labs.google.com. And you have to kind of opt into it. And then, you will get this on your search page.
[00:32:46] You'll get this little mode that says AI mode. So, you know, some people are saying, this is the wave of the future. Google search is going to be AI mode in the future. I don't know how I feel about that. I don't know that that's necessarily the direction. Maybe it is. I'm not saying it's not. But I'm just not certain on that. But for right now, it is separate. And so, you literally have to go in there and play around with it. And the confusion that I have in this... Well, let me test a few things. So, if we've got local results, let me bring in a prompt and say, ask AI mode.
[00:33:16] I'm going to say, birthday brunch spots in Petaluma. Although, maybe I could just say near me and it would know. But I'm just going to get specific in Petaluma that are known for mimosas. And so, it's going to look it up and search. It's going to make a search plan. Sift through the results. Obviously, there is a restaurant in town called Cafe Mimosa. So, that's going to be an obvious choice. Cafe Bellini, Sax's Joint offer. You know, and then it gives your ratings.
[00:33:45] You've got some of your ratings on these. 4.4 stars on Google. Bunch of reviews. I can't click on that. I can on the Sax's Joint. Anyway, so it's collecting some of these search results. In some ways, it's pretending or purporting, whatever the word is, to do some of the stuff that Perplexity is doing. Use this as a search engine, but a search engine on steroids. And, okay. So, that's great.
[00:34:13] If I test like a product search result. So, let's pop this in here. Find me folding camping chairs that fit into a backpack for under $100. I found this prompt online. I did not come up with this on my own. Finding foldable camping chairs. Compact under $100. And it gives me a little bit of a list on the side. It gives me, you know, a list of results, I guess, that I could click into.
[00:34:41] And I think where my confusion comes in is, okay, so this is an interesting way to tackle search, I suppose. If I'm using this as my search engine, I've come back with the information that's organized differently. The problem is, what I want to do then is I want to go in here and start issuing, like, generative commands. Like, okay, now turn this into a doc that blah, blah, blah, blah, blah. Because that's what I do with all the other things, right? Like, I'm not constricted into just a search product.
[00:35:12] And this kind of does some of the things that Gemini does, but some things that Gemini doesn't do. And I think as a user, as I've been using it, I run into situations where I'm like, oh, it doesn't do that. Why doesn't it do that? I want it to do that. Oh, it's a different product. So now there's the cognitive load as a user of I need to remember that this only does these things. But this other AI that's very similar to this but not quite the same as this does more.
[00:35:40] And that can just be a little confusing to know which one I use for what purpose, you know? I don't know. So I went into my other – I reluctantly, under protest, went into my free Google account. Okay. Thank you. For the purposes of the show, the show thanks. For the good of the show, yes. For the good of the show. So, you know, people have been arguing that AI was going to ruin search. And I've been arguing that, no, that's not the case. Google's ruining search.
[00:36:10] It's just so weird that they're screwing it up themselves. So I asked three different kinds of questions. I asked, explain the history of mass media. Okay. And it gives me a pretty straightforward and bland thing. The next question I ask is, pizza near me? Because if this is supposed to be search, it should operate with the utility and value of good old search. Okay. And so what it does is, as you showed, it does paragraphs listing the places with no links.
[00:36:40] The links are to the side. To the side, it says here are eight sites. Okay. So I can go over there. They're disconnected. They're disconnected. I can show that. You've got to kind of like do some work to like connect the dots between the two. Yeah. Yeah. And by the way, it's not in the same order. So I can't say, well, the third one in the paragraphs should be the third one in links. Nope. Right. It's out of order. It ruins it.
[00:37:01] So then the next thing I did was ask for, because I just got, Jason advised me, I got the cheapest possible Samsung Galaxy Tab S10. Which is not cheap. Not cheap. No. But it's cheaper than the most expensive. It's less than half the price of the most expensive S10. Yeah. Yeah. So I can annotate PDFs, because I was doing all kinds of research about the reMarkable and the Boox. I hope it works for you. It did. I know my family was rolling their eyes at me because I'm cursing the first three hours using it.
[00:37:31] I couldn't forget. How do you mark and scroll at the same time? And I didn't know you got to do two fingers to do the scroll so you don't leave your mark when you're doing the thing. And my wife says, well, isn't there a manual? I said, no. All you can do is go to YouTube and people are going to spend 15 minutes telling you one simple thing. Yeah. Yeah. So anyway, so I asked it, what's the best tablet to use to annotate PDFs in sync with my Google Drive? All right.
[00:37:58] So in this case, it gives me three paragraphs, and then it gives me links that are not necessarily directly related to those. So if I wanted to just look up the Samsung Galaxy Tab S9, which I hope wasn't a lot cheaper, there isn't a link to that. It's cool. There isn't a link to that. Right? There's links to other things that are part of their research shtick. Here's a story about these things.
[00:38:24] Then the other problem is, to your point, Jason, about things you can expect to do elsewhere, I wanted to ask a follow-up question and say, which of those comes with a stylus? No, no, no, no, no. That becomes an entirely new search. Oh, right. Okay. So which comes with a stylus? Oh, no, but yours it did. What are you talking about? Well, focusing on tablets that come with a stylus included, here's a break. So I don't know. I mean, it knew that I was talking about tablets that come with a stylus.
[00:38:52] What I don't know is if it understood my question, which comes with a stylus, to mean which of those that you listed comes with a stylus. Right, right. Well, at the bottom of my query, it says, your next question will start a new search. Well, why? Why? So what if there's a limit to the amount of follow-ups before, or, yeah, or I don't know. That's a good question. I don't know. Before, yeah. Okay, because I'm not getting that warning. Maybe it's just me.
[00:39:19] Which of these is the best? That way I'm like referring to the last thing. Yeah, okay, it's still not for you, it wouldn't for me. Well, let's see here. I mean, maybe. And it's doing that fakie reasoning thing. Well, I'm doing this, I'm doing that. Give me credit for all this work I'm doing on your behalf. The best tablet. That can be a treat, yeah. Yes, exactly. Which of these is the best? Okay, so it gave me, okay. I mean, I don't know if the, I'd have to go back and like, you know, refer to know.
[00:39:47] Right, but if you want to, what are the related links on the side? Are they, does it? Top 50 products of the year. What is the best Samsung tablet for work? Best back to uni technology with Samsung 2023. So it's, it's, best tablet. It's abstracting. It's disambiguating. It's intermediating all of that content to create its own thing, which is going to drive media companies absolutely berserko. But it also, so that, that kind of hurts them. All right. We'll have that argument in the future.
[00:40:17] But it also hurts me as a user. If you want to just link right now to the Boox Note Air, there's no link to say, okay, what is there at the bottom of that thing? Is that a link? What does that go to? So this is a link. I don't know if it's the link. Let's see what it goes to. This refreshes the area on the side. And so it gives me Scott Hanselman's e-notes or e-paper article from 2021. It gives me a Guardian article, eight best. So it does not link me to the product.
[00:40:46] It just kind of takes me to more. Nor does it link you to a story that's just about that product. No. It links you to a couple of stories that it used to disambiguate, that probably mentioned it in some way, shape or form, that it pulled that information from. So my process of research. But it doesn't say, like, you're looking for a product, so maybe I should have a link to the product here. Exactly. Yeah. Exactly. So it doesn't know my user needs.
[00:41:14] And so then now what you've got to do is you've got to go over to the All tab and remember to do a search for the Onyx Boox, whatever it's called. See, I can't even remember what it's called. I've got to go back to AI mode, go into my previous history and, you know, it's a whole web, I guess. So yeah. So I'm confused too. It's a little confusing. It is also, you know, Google would probably be fast to say, yeah, it's labs, it's beta, it's trial.
[00:41:41] Do you think that this is a product that could or will replace search the way we're used to? No, I think that's the point of what we just did. I think, no, it's going to make people say, give me a search back. Yeah. Yeah. This, because this feels, this feels broken, whether it's an illusion or not. When I open a Google search tab, I'm pretty confident I'm going to find the thing I'm looking for within the first page or two. Here, I have to do a lot more cognitive work. Right. To understand if I'm finding the thing that I actually wanted at the end.
[00:42:11] Yeah. As we discussed last week, you know, who's training whom? Is the machine training us or are we training the machine when it came to queries, and the end of the job of, what was the word I'm looking for? A prompt engineer. Prompt. Oh yeah. Right. So we talked about that last week. But in this case, we have an entire generation. We have every one of all generations, but we have 20 plus years of people being trained
[00:42:40] of what search is. True. And Google is search, right? They still own that as part of our heads and how we think about the world. And they're so scared about others ruining it, they're going to ruin it themselves. It's mind-boggling to me. They feel they need to be with the AI, blah, blah, blah, the way everyone else is, or else they'll be left behind. I mean, it was Yann LeCun's interview where he said Google is kind of running scared right now,
[00:43:10] in a sense, because they see the oncoming AI wave as being a threat to some of these pieces of its business that have operated a certain way. And so they're rushing and making mistakes along the way because they want, they want to be sure that they aren't left behind in the process. And somebody else does it, you know, comes out with the AI search that puts them out of business or, or puts their product on a lower rung on the right. Right.
[00:43:36] And so, um, interestingly, you know, because what I keep on saying about Google and AI is that they are ahead. They are doing things. You know, I saw a story today, I think it was in The Guardian, which is where it would appear, about someone saying, I'm giving up AI. People who are giving it up. Well, you can't. Did you do a search? Did you, uh, see an ad? Did you use Maps? Yeah. You're using AI. Stop. It's ridiculous. Yeah.
[00:44:02] Giving up AI, maybe for a part of what you do. Maybe you're giving up AI as a research tool. Well, maybe you're choosing not to use ChatGPT, but you're using AI. Right. You know, come on. Yeah. Right. Stop. Just stop. Um, it's like people used to say to me, well, I don't own a TV. I said, but how do you know who Vanna White is? Um, so, what we've discussed about Google and AI is that they have been ahead for years, but they did it like Intel Inside: AI inside.
[00:44:31] We didn't really see it. Uh, so now they're trying to get credit for it. So I just texted you a link to, um, oops, no, I texted it to my wife. She's like, what is Jeff sending you here? She said, is this for me? Uh, no, sorry, honey. Uh, hopefully it pops up on my screen. So, um, uh, Google puts out a blog post, the latest AI news we announced in April.
[00:45:00] So they're trying to say, look at all we're doing. Look at it all. Give us a treat. Amazing. Right? Yeah, totally. We hosted Google Cloud Next '25. We made the best Google AI free for college students, but only during finals. We made Deep Research available on Gemini. Right. So they're trying to list all this stuff and say, look what we're doing. We're doing all this AI stuff. We know, we know you are. Yeah. I think it makes sense then to jump a little bit ahead.
[00:45:28] I don't know why I didn't group these things, but cause this is all in advance of what we have coming up here in a couple of weeks. Google is, uh, going to have its Google IO developer conference. And as has been the joke in the last couple of years, and I think will continue to be the joke is that it's less Google IO anymore. It's more Google AI. And I think it's going to be almost entirely that, you know, and that's what we're going to see.
[00:45:54] But what they announced yesterday is a new version: Gemini 2.5 Pro, I/O edition. So now they're naming model editions after events. I thought that was interesting, but essentially it's an updated version with stronger coding capabilities, says Google. They say it's designed to make front-end coding, UI development, all those things easier, more efficient.
[00:46:22] The new model can pull information from a video, as one example, and then create apps based on what it learns from the video. So that's interesting. And Google has a challenge for developers to create things with the new model, to work with the new model and then submit what they've created. And there will be some that are chosen to be highlighted during the conference in a few weeks.
[00:46:47] So, you know, getting more kind of juice out of the public interaction and engagement with their model ahead of their conference. So, um, yeah. And as you mentioned, I think last week, they pulled the Android stuff out and they're doing that a week before, right? Is that going to be an in-person event or is that virtual? My understanding is that it's virtual. Um, well, I don't even know that it's an event, because it's called the
[00:47:17] Android show. So is that like, uh, is it a live stream or is it a recorded video? Like, I don't know that we know. I don't, I certainly don't know the answer to that, but, um, but it does seem like Android is being de-emphasized to make room for all of the AI progress that Google has made. Yeah. Yeah. You know, and plans to announce and yeah, I mean, you know, there's, there's a lot of developers.
[00:47:46] There's a lot of, we've talked about it on Android Faithful. Huyen Tue Dao, one of the co-hosts of Android Faithful, is an Android developer. She's not even going to be at I/O. I think she's going to be at a different developer conference that week. And in light of that news, she's kind of like, I think I win. Like, I'm not going to be at the conference that's de-emphasizing Android development. I'm going to be at a conference that's all about Android development, because Google I/O really seems like it's less that now. It's more an AI conference.
[00:48:16] But the thing that strikes me about that, it'll be interesting because you'll be covering that, I assume, the Android conference. Oh yeah. 100%. Yes. Yeah. And I'll be there. I will be at Google I.O. Right. Most of the Android faithful can be there. It's interesting to compare because you would think that they're trying to emphasize that these devices are all AI and that the essence of them is AI. And you'll see in an upcoming interview, we discussed that with Intel as well. Everybody's really trying to push the idea that AI is integrated into the hardware.
[00:48:42] So to separate out their hardware OS from AI seems odd to me. You'd think they'd want to... One could argue, well, no, it would get buried in I.O. So now we're going to give it its own attention. I'm sure that's what they're going to say. That's fine. I get that. But it strikes me as a little bit of a disconnect. Literally. Like unplugged. Like an air gap disconnect. Yeah. Problem exists between the person and the keyboard type disconnect.
[00:49:12] Yep. All right. Moving away from Google, we've got Amazon unveiling Vulcan. Can you do that? Can you do the Vulcan thing? It hurts. It hurts. Oh, does it? Okay. It really hurts. Because I know it's the kind of thing that some people can and can't do. You can just do it fluidly. Do I have to kind of pry them apart with a tool, a vice? Right. Right.
[00:49:37] Well, Amazon has been practicing and they are bringing that to their warehouses in the form of robots with a sense of touch. This is what Vulcan is all about. Tactile sensors that enable it to handle 75% of the items in Amazon's warehouse. It can pick. It can stow. It does all of this with human-like dexterity.
[00:50:00] And as many people will be quick to point out, further reduces the need for human workers to perform those tasks. Yeah, and this is always a yin-yang thing where the jobs are awful, but we're going to have fewer awful jobs. You know, it's how do you judge that? But I think this was inevitable. And it's not just for retail.
[00:50:20] This is certainly, I think, for – I can imagine more flexible factories where you could pick up screws of different size and know what position they're in and then get them in, you know, that kind of stuff. To pick up this product to be able to know what's fragile and pick it up in a lighter way. Oh, these are eggs I'm shipping right now, right? Yes. Yeah. Be very fragile. So I think this is a necessary step in robotics.
[00:50:51] And again, as we talk about often, it's an interaction with the world, with the real world, right? It has to learn that you don't be mean to the egg. It cracks. Yeah, if you're too tight, if you're too strong with the egg, you're going to squish it. Yeah, this, again, to mention Yann LeCun once more, he talks all about this, the world model and how necessary that is for artificial intelligence.
[00:51:19] If AI is ever going to get to the point to where it has human-like awareness of all things, you know, the ever-elusive AGI or whatever you want to call it, then it needs sensors like this. It needs some sort of sensory capability that mimics what humans do have and not just the ability to read words and spit things out and that sort of stuff. It also needs to be able to interact with the world.
[00:51:47] And this is at least one step closer to that for the world of robotics. And I doubt that Amazon is the first to do this sort of thing. Honestly, I don't know. Robotics isn't my thing. But I have to imagine other companies have been working on this. Yeah, you think of many, many applications. You know, I had two of my spare parts taken out, my prostate and my appendix, with laparoscopic and robotic surgery.
[00:52:13] And, you know, when I walked into the operating room for the prostate, you look at this big robot and you want to just say, be nice to me, and salute. But you realize that the doctor is over there on a screen, right, and cannot have any sense of touch. There's no feedback to the doctor.
[00:52:32] So you can imagine the utility of these kinds of systems that are doing something robotic, being able to give not just haptic feedback in the sense of here's a buzzer, so it knows you did something, but a sense of soft or hard, a sense of that touch, I think would be useful. Or bomb robots or any of this kind of stuff. Right.
[00:53:01] So, yeah, I think it's an important step forward. And I'm kind of surprised that we haven't heard more about this. I think you're right, Jason. A lot of people have surely been working on this. 100%. And maybe there's more advances than we know because we don't keep up on all the scientific papers about this. But it is interesting. Yeah. For sure it is.
[00:53:21] And, you know, a company like Amazon stands to benefit a lot from this sort of thing because of the reliance upon, you know, the need to store and sift and pull all of the items and everything that they're doing to feed their business. It seems like the kind of company that would be investing a lot of their attention and their resources into this sort of thing and to benefit from it.
[00:54:14] After Apple's Eddy Cue says AI will replace search engines. What? Ah, okay. Wow. That's all it took, huh? God, the market is stupid. Someone just has to say that it's going to happen and boom. Yeah. That has the, you know, that has the impact that it does. So let's see. Google stock right now is down.
[00:54:36] Considering a major shift in its Safari browser, integrating AI-powered search engines, prompted by the potential end of the $20 billion per year deal that makes Google the default search provider on Apple devices. Which again goes back to all the antitrust cases: there's tons of competition for search right now. Mm-hmm. Indeed. Indeed. And it's been one obvious choice for a very long time. Yes.
[00:55:04] And now the foundation around that is really transforming pretty rapidly due to all of the, you know, intense regulatory pressure coming from all directions. And so the funny thing is here, the one that's going to lose... It's in the air. The one that's going to lose revenue is Apple. You'd think their stock would be going nuts, Google paying them $20 billion a year. If Apple gets rid of Google search, which I think would be a mistake, but if they did, there goes $20 billion. Yeah. You know, hello?
[00:55:35] Yeah, interesting. Well, thanks for the tip there, Dr. Dew. Thank you, Dr. Dew. I love getting some breaking news in the show. That's awesome. Happy we got that in there.
[00:55:46] Real quick before our next break, in response to the story that we covered last week regarding Reddit's discovery of AI-powered human impersonating bots in the Change My View subreddit, Reddit has announced that it's tightening its user verification to battle this type of AI usage. No word on whether they're going to rely upon Altman's whatever it's called. Orb. What's it called? Orb. World Orb.
[00:56:13] There's no reference to Orb in this article, but I'm like, well, here's an opportunity. Reddit is going to require users to prove their humanity, sometimes also their age. But the interesting thing about this is that Reddit forever has been like a critical kind of component of that community is its insistence on the ability for users to remain anonymous. Yes. And so- Well, can you be anonymous yet still human? Yeah. Yeah. Yeah.
[00:56:43] Which goes back to- Well, okay, that's interesting. That goes back to the Orb to this extent. You could be verified as human without your identity, right? Okay. In other words, that I am- I'm a human, but I'm not this human. I'm not identifying myself as a human, but I, a human, wrote this. Right. I am a human, but I'm not this specific human. Yes. Right.
[00:57:14] Yeah. Interesting. Reddit does insist it's not going to ask for real names of users. They're committed to protecting user anonymity. Nonetheless, big shift. But it's really, I know you mentioned this quickly, but it's a problem that I was just thinking that, fine, I can verify that it's me, a human, but then I can still use AI to make some crap. Right. Well, that's what I'm thinking about Orb as well.
[00:57:38] I don't understand how it works exactly, but how do you know in all these systems, hopefully they're doing the work to prevent someone from identifying as human and then still operating as a non-human somehow outside of that. And that just reveals my lack of understanding of how these things actually work. Obviously, someone's probably throwing tomatoes at the screen, but I don't know. Couldn't you verify that you are and then about face? Yeah.
[00:58:07] And then suddenly you're not. Become a conduit for artificial crap. Yeah. That's exactly what I'm trying to say. All right. Which is to say that we're doomed. We're just doomed to be surrounded by AI crap. The sky is falling. Yeah. Yeah. Chicken little. Yeah. All right. We're going to take a quick break. On that happy note. Yes. Take a quick break. Come back, round off the show with a few quick stories after this minute.
[00:58:36] New research shows that being honest about using artificial intelligence- Speaking of which, right? Can result in distrust by the people that you work with. Which you think would be the opposite, right? One of the lessons that I learned in blogging early on- Well, we go way back. As a reporter, if you made a mistake, and we do make mistakes, you never wanted to admit it because you thought that it devalued you and it was a shameful thing to do and you wanted to go hide and not do it. It's a red mark. It's a mark on you. Right. Yeah.
[00:59:05] I learned with blogging that admitting your mistake and correcting it increased your trust. Yes. Immensely. People said, okay, he's an honest guy. I can trust him that when he does make a mistake, he will say so. Right? So you would think that that same logic might apply to the use of AI. Hey, I'm letting you know that I used AI in this case so you can just know. But instead, it already has such kind of built-in cultural cooties that instead this study finds
[00:59:33] that people don't trust you because you used the bad technology. Maybe the difference is because you chose to use a technology that is fallible. And as a result, I automatically don't trust you as much as I would have, because that was your choice, versus I made a mistake. I did this thing. I did this research. I thought I was right. And oh my goodness, I just realized I'm not, and I'm fessing up to it. I was incorrect.
[00:59:59] Instead, I used this tool that's inaccurate some percentage of the time, but I used it anyway and passed it off as accurate, and it's actually inaccurate. I took that risk. It's my responsibility, and that's why people don't trust me. I don't know. That's just a guess. I still think it's such early days. It's hard to say where the cultural norms will land here. Reference to The Gutenberg Parenthesis, now in paperback.
[01:00:23] One of my favorite lessons from that is that when print began, it was not trusted because the provenance was not clear. Anybody can make a pamphlet. Who made this stuff? I don't know who made this stuff. It can come from anywhere. And so what was trusted instead was social verification. Back to knowing you're human, right? I know the innkeeper and blah, blah, blah.
[01:00:46] When the typewriter came out, it was not trusted. Sears Roebuck would still send handwritten notes, because if they sent typewritten letters to customers, the customers thought it was our modern equivalent of junk mail, that they had printed something that was impersonal. They didn't trust the way it looked, right? Eventually, obviously, we trusted typewriting. We trusted print.
[01:01:13] So I think we will come up with new norms around AI and the creation of speech. But it's obviously going to be a lot harder because it mimics us so well. But it's too early, I think, to come up with forever rules here. For sure. Yeah, 100%. Yeah, I mean, reading through this, I feel this study to a certain degree because the
[01:01:39] last year and a half, I have really leaned into as part of this show, I wanted to learn these technologies. The more I learn them from an LLM text-based perspective, the more I understand how they are useful to me and the more I want to use them because I'm like, oh, wait a minute. It enables me to do these things quicker, faster, in ways that were more difficult to do or took longer to do prior.
[01:02:07] And yet, and I'm surrounded by people that also do that, and yet I still feel a little weird sometimes talking about using them because there is a bit of a social stigma attached to it of like, oh, well, so you're not really doing the work then. Or how do I know that you're saying your words and not like the words that the LLM put out or whatever? And so there is a stigma attached to this stuff. And I think that's part of that is the distrust.
[01:02:32] You know, you're making me realize that this week I was in a faculty thing and somebody said, oh, I use ChatGPT deep research to find out, to ask them this or that. And I kind of looked, I looked at it askance and I thought, well, A, you're a professor. And I also didn't like the output from it. I thought it was too limited. I thought the question was not right.
[01:02:57] But because it gave like 10 pages of all this stuff, deep research, people were responding to that. And I was thinking, no, stop. But I didn't want to insult the person who used it. So yeah, I guess I looked askance. Yeah. Now that I think about it, there's probably something subliminal about it. Like, actually, I would wonder, I'd be curious for people who, and obviously this wouldn't apply to everyone, but for people who do use AI on a regular basis, how do they see it in
[01:03:27] that situation when someone else does? Because I think even if they're open to it and they use it, do they place like you're talking kind of exactly what you're talking about? Do they still, whether they mean to or not, have that like internal, a little glimpse of judgment, a little flash of judgment when they find that out? Or my trust or my perception of quality out of this thing suddenly went down a couple of rungs.
[01:03:55] Not entirely to the bottom of the ladder, but just a couple of rungs. Like, it was up here that I found out there was AI involved, and now it just like took two steps down. And yeah, I'd be curious to know. So I'm sure there are studies going on right now about that too. Yep, exactly. The family of a man who was killed in 2021 in an Arizona road rage incident used AI to
[01:04:18] recreate the man's likeness and voice for an in-court impact testimonial. And oddly enough, when I read the headline on this, I was like, oh, well, that couldn't have gone well. But oddly enough, the family who's involved, who created this, who crafted this with the use of AI, said, what was it, what did they say?
[01:04:44] They basically said that the message reflected his nature: forgiving, steeped in his religious beliefs as well. And I think it said something like, in another life, we probably could have been friends. I believe in forgiveness. And the courtroom was very moved by this representation of authenticity
[01:05:11] that captured his spirit through the form of AI. It was just very, very confusing to me that this story seemed to have a happy aura to it. I'm still not convinced, but there you go. There was a story a few weeks ago of somebody, I think they were representing themselves in court, who had AI speak for them, and the judge was pissed off. Because you shouldn't have AI testify for you. That's just obviously wrong.
[01:05:41] So my immediate reaction to this was the same way. This is a court, what are you doing? But it's not testimony in that sense. It was the family trying to represent their view, trying in turn to represent the view of their dead loved one. And they used this tool to do that in what they thought was a more effective way. Now, what if instead the view was, hang this sucker, I should be alive right now. How dare you? And then is that-
[01:06:10] You did this to me or whatever. Is that manipulative in the opposite way? Yeah. Then I think we'd probably be looking at this a little bit differently and say, well, it's being used to try to drive the court to vengeance from what is clearly a fabricated representation of a human being and a view. The forgiving view was fabricated, but it was forgiving.
[01:06:38] So the AI became a tool to that good and happy, or happy is hard to say in this case, he's dead. Right. But that more gracious- Gracious. I just want to say filled with grace part of the story. So that made us look at it this way. But I think if we saw a case the other way, I don't think this would be seen in a good light. Totally agree. Totally. Yeah. I wonder if we're going to see more of these, too. I do. I wonder, too, whether the court reviewed it before it was shown. Right.
[01:07:07] Did the defense have an opportunity to see it? Did they have an opportunity to have objected if it had been vengeful? And how, what is the right word? Not authoritative. How authentic can something that was created like this actually be when you're talking about something that's being shown in a court proceeding? Well, it's not. It's the family decided to do it, and it was in the family's control, as it should be in the family's control. Yeah, that's true. It is an opportunity for impact.
[01:07:36] It's an impact statement. Yeah. So it's not changing the course of the case. The family could have gotten up there and said their piece in their words from their mouths, and they could have probably said whatever the heck they wanted to in that situation. Instead, they chose to go this route to have the victim say what they believed the victim would have said. Right. Based on content, based on videos and audios of him being him.
[01:08:02] And so, yeah, it's an interesting story when I saw that one. Thank you for putting that in. So I looked up the original story here, the link from it. So they also showed images of the victim from real life, real images of him with a sense of humor and so on and so forth, which again, I think is part of an impact statement. People hold up photos. Those are technologies too. The state had asked for a nine and a half year sentence, and the judge ended up giving
[01:08:29] him 10 and a half years for manslaughter after being so moved by the powerful video. So he got a worse sentence than the state wanted. So I don't know. The judge offered the following response about the use of AI. AI has potential to create great efficiencies in the justice system and may assist those unschooled in the law to better present their positions. That's my argument that AI can extend literacy in a way. For that reason, we are excited about AI's potential, but AI can also hinder or even
[01:08:58] upend justice if inappropriately used. A measured approach is best. Amen. Along those lines, the court has formed an AI committee to examine AI use and make recommendations for how best to use it. At bottom, those who use AI, including courts, are responsible for its accuracy. All true, Your Honor. Stipulated. Interesting. Finally. Finally. Tell me a little bit about this I smell AI thing.
[01:09:25] So I'm going through my Google News on my phone or whatever they call it these days because they change their names on everything. And yes, it's still Google News. And I'm just scrolling along and I come across this story. Alberta's premier promises referendum on separation from Canada. Now, I used to cover, long ago in my early career in journalism, I went to Quebec and Montreal and covered stories about the separatist movement there. So I'm really fascinated. So I'm looking at the three bullets. This is part of Google News Showcase.
[01:09:53] And it's what the AP provides to Google News. Yeah. And the third bullet is Alberta, a largely French-speaking province of Quebec. Oh, boy. I think you got it. So I just came up and I said, I smell AI. And the thing is, this thing carries the reporter's byline. So everybody said, who's Jim Morris? Where's Jim Morris been? Right? I don't think it was Jim Morris. I doubt even he was responsible for this summary.
[01:10:23] It's somebody in the Associated Press who did it. And then I looked at the story itself. And I think I can see where this came from. Because there's a paragraph just on its own that says, the largely French-speaking province of Quebec held referendums in... So the AI knows this story is about Alberta. Alberta. And so it substituted the word Alberta, comma, the largely French-speaking province of Quebec. Because the story was about Alberta.
[01:10:53] And that's all it did. And so it was RAG. It used the story available. It tried to summarize the story. But it got thrown by a missing antecedent. And so it screwed up. And the moral of the story, like the moral of all AI stories, is that it shouldn't have passed without somebody putting a human eye to it and verifying it. That's just so funny. That would have been an easy catch on that one. Right, right.
[01:11:23] So the Canadians are having a ball with us as a result. My favorite response is, la belle province Alberta. There's like 12 French-speaking people in Alberta. It's the absolute opposite of Quebec. There couldn't be more extremes in Canada. And so it's hilarious. But it's a cautionary tale. I mean, yeah, province of Quebec. Yeah, that's great too.
[01:11:51] Because yeah, it's not at all. Like I was going to say, there might be some French-speaking folks in Alberta. A few. A few. Quebec is part of Canada. And you know, they travel around. Somebody said the response to me, like I'm one of 12 of them. Nice. Nice. Yes. I lived in Montreal for six months. It was a fantastic six months. I loved it. Quebec. You did? Beautiful. Oh, what a place. Montreal is beautiful. Yeah, it's a wonderful city.
[01:12:22] Really cool. Love to go back someday. How's your French? Oh, it was horrible. Like I didn't speak French before the opportunity came along. I was dating a girl at the time who was a citizen and lived there. And she's like, why don't you come get a student work visa and live here for six months? And so I did. And so I learned very basic French to do it. Je voudrais un donut. But, you know, I worked at a movie theater. I remember deuxième salle à gauche is like second cinema to your right or something.
[01:12:52] I can't even remember. You know, I learned just what I needed. But it was a great city. I really enjoyed it. And a great podcast was had. Jeff Jarvis, thank you so much for being with me on yet another episode of the AI Inside Podcast today. Always a pleasure. Always. Just have a ton of fun. The Web We Weave at jeffjarvis.com. Also The Gutenberg Parenthesis and Magazine, and more coming soon.
[01:13:19] So on a website when it says copyright 2024, do you have to remember to go in there every like year? Oops, I guess I do. Change that to 2025. Sorry. I just noticed that. I'm like, is there like an official thing that you have to do or is that just? Oh, yeah. No, it's just a line of text. Okay. You just put it on there. Okay. Well, there you go. Sorry to call that out. I didn't mean to do that. So I'm going in tomorrow. I'm going to record the audiobook version of magazine. Nice. Oh, great.
[01:13:44] So I'll take my drugs beforehand so I can speak slowly and clearly because it's very difficult for me to do that. Yeah, that would be difficult for me too. No, you're a broadcaster. You can do it, but I can't. I'm mumble mouth. Yeah. Yeah, well, more power to you. I'm sure that's probably a couple of days worth of hard work doing that, but everybody should look for that coming soon.
[01:14:11] As for this show, AIinside.show is where you can go to subscribe to everything that we do. There's even reviews on the page. Would you look at that? And we have one new one. I'm just going to keep saying it until people do it. There you go. Go ahead and leave your review. It'll show up on the page eventually. Finally, if you really want to support us on a deeper level, why, you can become a patron. Patreon.com slash AIinsideshow.
[01:14:38] You get ad-free shows, Discord community, and we have the wonderful, the illustrious executive producer level, which, yes, is the most expensive level. But you get a t-shirt and you get possibly one of my favorite parts of every episode.
[01:14:54] You get your name called out at the end, including Dr. Dew, Jeffrey Maricini, WPVM 103.7 in Asheville, North Carolina, Dante St. James, Bono Derek, Jason Neifer, Jason Brady, and our newest executive producer, Anthony Downs. Thank you. I love that we're adding more and more people. It's just so awesome. That means that there's at least eight or nine AI Inside t-shirts out in the world. There you go. You can become a member as well.
[01:15:23] Patreon.com slash AIinsideshow. Thank you so much for watching, everybody. Thank you for being here. We will see you next time on another episode of AI Inside. Take care, everybody. Bye. Bye.