Jason Howell and Jeff Jarvis discuss Microsoft Recall, OpenAI GPT-4o voice sounding very similar to Scarlett Johansson, Reddit's data deal with OpenAI, and the struggles of Humane AI's product.
Support this podcast on Patreon: http://www.patreon.com/aiinsideshow
NEWS
- New Windows AI feature records everything you’ve done on your PC
- Microsoft introduces Phi-Silica
- Microsoft unveils ‘Copilot + PC’ initiative
- Get ready for the AI PC
- Microsoft teams with Khan Academy
- OpenAI to Pull Johansson Soundalike Sky’s Voice From ChatGPT
- Scarlett Johansson says she was 'shocked, angered' when she heard OpenAI's voice
- OpenAI Reportedly Dissolves Its Existential AI Risk Team
- Science paper with many authors on AI risk
- Jeff's reaction
- Compare/contrast with paper on science & AI
- Ads creativity and performance at scale with Google AI
- Google Taps AI to Show Shoppers How Clothes Fit Different Bodies
- OpenAI partners with Reddit
- Humane is looking for a buyer
- Meta is reportedly working on camera-equipped AI earphones
- Remember: the Iyo One
- Facebook Parent’s Plan to Win AI Race
Hosted on Acast. See acast.com/privacy for more information.
This is AI Inside Episode 18, recorded Wednesday, May 22nd, 2024. Humane Giving Quibi Vibes. This episode of AI Inside is made possible by our wonderful patrons at patreon.com/aiinsideshow. If you like what you hear, head on over and support us directly, and thank you for making independent podcasting possible. Hey, everybody. Welcome to another episode of AI Inside, the show where we take a look at the AI hiding inside everything. I'm one of the hosts of this show, Jason Howell, joined as always by my co-host, Jeff Jarvis. Good to see you, Jeff. Hey, boss. How are you? Great. Good to see you digitally, but it sounds like maybe next week we might actually have an in-person thing going on.
Maybe in the backyard of my laptop, but we'll figure it out.
We'll figure something out. I've been thinking really hard about how I set up this room. I'm sure it looks like it's enormous, but it's really not. But I think I can do it. For video viewers, we might be locked to a two-shot, which is fine.
It doesn't matter. We'll be in the same room. We'll have a couple of mics. We'll have everything we need, I think, so I think I'll be able to pull it off.
I just have to be okay. Jeff Jarvis is going to be able to rate my room in real time. How do I feel about that? Only if you start playing the guitar. I wish that I could play under pressure.
I'm going to be at a World Economic Forum AI event in San Francisco next Tuesday. That's cool. I thought, Wednesday, I'll just come up to Petaluma. I'm going to take the ferry to Larkspur, and then the, what do you want to call it, train up to Petaluma, and then somehow make it over to wherever you are.
No, I can go pick you up. I can pick you up. Yeah, whatever time you're coming in, we'll talk off air, but just let me know, and I can pick you up from the train.
We'll be together, and then I'll go down to TWiG, TWiT.
Yes, yes, you'll be making the rounds. Excellent. Cool. Well, I'm stoked. It'll be a lot of fun, and it'll be a true test for my studio setup to see how it goes having an entirely in-person show. I'm looking forward to it, though.
It's going to be good to see you. Before we get started, big thanks to those of you who support us directly on Patreon. Of course, you can do that by going to patreon.com/aiinsideshow, and I'll just leave it at that.
Go there. There's plenty of perks and plenty of ways that we do our best to make it worth your while, and you get your name read out at the top of the show like our supporters of past and now this week's supporter, Steve Isaacson. Thank you so much, Steve, for your support of what we do here with AI Inside.
Steve, is that you? Steve, if it's the same Steve Isaacson, it was my son's computer teacher. If so, hello. If not, hi, new Steve Isaacson.
Hi, other Steve Isaacson. Either way, great to have you here, Steve. Thank you so much. All right. This has been the week and a half of AI-related events, even events that didn't have to be AI-related at all.
You know what I mean? Microsoft has Build Conference. Google has Google I.O.
These things have existed in the past, and they had absolutely nothing or very little to do with artificial intelligence. This year, though, boy, have times changed. It's pretty much the entire thing. That's the entire thing. Microsoft, during its Build Conference this week, showed off Copilot Plus PCs with a feature called Recall that essentially, as Ars Technica puts it, records everything you've done on your PC. It's like taking snapshots of how you're using your PC over time.
Then that allows you to go back in time and search anything that you viewed on your computer, anything you've interacted with. It's all done locally, on-device; it's not sending this information, that we know of, to the cloud to do any of this computation. I do wonder, and actually we'll talk about this in a second with some late-breaking news, how much that actually tamps down on the privacy concerns that people might have.
It's discoverable, I would imagine. I mean, with a subpoena. Right. That's important. It's really important. One, I don't know because I don't follow Microsoft stuff closely, but what's their equivalent of incognito? Two, is it as less-than-fully-secure as Google's incognito turned out to be? Three, what ability do you have to erase it and get rid of it?
Those are really great questions.
People start freaking out immediately. I tend to be fairly calm about privacy stuff because my life is an open blog. I'm out there. I'm there. It's easy to say that, I know, as a white man. This one freaked me out a little bit.
Well, yeah, because it's running in real time, monitoring how you're interacting with your PC. I don't know, it is a little strange. It's a paradigm we haven't really crossed yet. At the same time, there are third-party services and software that have purported to do this before. Now this is Microsoft, and I don't know if they're bringing it in the form of an app or actually building it into the Windows experience. And if they're not building it into Windows now, will they do that eventually?
The real question to me, and Jason, again, you've looked at this more than I have, what's the benefit you get for this? We're going to know everything about you so we can personalize everything to you. What, Clippy can give you better answers? I mean, what is it you get? I think
the idea here is, at least my understanding is, as we use our computers on a regular basis, we do many things that seem inconsequential in the moment; we're done with them and we move on. Then, say, days or weeks earlier you were researching these clothes that you wanted, blah, blah, blah, but you didn't set them aside, you didn't bookmark them. However long later, you realize, oh, wait a minute, what were those things? So you can actually go back in time, see what you were doing on your machine at that moment, and pull it into the now again. My understanding of it is it's a way to protect you from yourself, essentially making your computer usage searchable in whatever ways might be helpful to you.
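(For the curious: the capture-index-search loop described here could be sketched, very roughly, like this. This is a hypothetical illustration, not Microsoft's actual implementation; real systems would OCR screenshots and encrypt the store, and the `capture`/`search` helpers are invented names for the sketch.)

```python
# Hypothetical sketch of a Recall-style feature: periodically capture
# "snapshots" of on-screen text, index them in a local store, and make
# them searchable later. Requires SQLite built with the FTS5 extension,
# which most Python builds include.
import sqlite3
import time

# An in-memory SQLite database stands in for an encrypted on-device store.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE snapshots USING fts5(captured_at, app, text)")

def capture(app, text):
    """Record one snapshot of what was on screen in a given app."""
    db.execute("INSERT INTO snapshots VALUES (?, ?, ?)",
               (time.strftime("%Y-%m-%d %H:%M:%S"), app, text))

def search(query):
    """Full-text search over everything captured so far, newest first."""
    return db.execute(
        "SELECT app, text FROM snapshots WHERE snapshots MATCH ? "
        "ORDER BY captured_at DESC",
        (query,)).fetchall()

capture("Browser", "blue linen jacket, size medium, summer sale")
capture("Editor", "draft notes for episode 18")

# Weeks later: "what were those clothes I was looking at?"
print(search("linen"))
```

The point of the sketch is that everything stays in one local database: nothing leaves the device, but everything on it is discoverable, which is exactly the tension discussed above.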
Yeah, I mean, okay, but I think that's probably just as difficult to search and figure out. I would guess what they're going to do with it, because they're now advertising this as your AI PC, the PC with AI, is try to use it to summarize things for you, or predict for you, or customize for you, and so on. Okay, but I really want to see a demo of that. What's the local computation that gives me value, and is it worth having everything recorded and feeling a little bit freaked out all the time?
Daniel says Limitless does this. There are others building software to do this sort of thing. Another example they showed off that's somewhat tied into this was a gaming demo where someone was playing Minecraft, and the AI on the system, running in real time, is something you can interact with and talk to, something that can give you pointers and be your gameplay assistant or buddy or whatever you want to call it. Yeah. Cheat.
Maybe that's exactly what it is. But, you know, also real-time meeting translation, transcription across apps. That's fine. Turn that on when you want it. Yeah, totally.
That's nice. And there is actually some late-breaking news on this: the UK's Information Commissioner's Office has started an inquiry into Microsoft to understand the safeguards that are in place to protect user privacy before the feature is released. So there's that, and I think we're going to see a lot more of that sort of thing around this feature. Anytime you're talking about a history of everything you've done on your machine, that's going to make people a little nervous.
Well, let me ask you a question here, Jason, Dr. Android. You've been reporting on Android phones especially for, what, two years now? They've had Tensor chips that are said to have AI resident on them and so on and so forth. So it is kind of interesting to me that we didn't hear that with PCs until now. Is this just a marketing gimmick, or are there things your Android phone can do because of AI, because of local processing, that your PC hasn't been able to do? Is there something you could imagine wanting to bring over from your phone onto your laptop?
Does that make any sense? I'm trying to track your question. So what is it you can do on Android?
On Android, because you have Tensor chips and such there locally, you can do local AI tasks, right? You can do things like translation locally. You can do transcription locally. Sure. And it'll target ads for you, if they ever get their sandbox act fully together. So that's the beginnings of what you can do on Android. Can you imagine things you wish could be processed locally on your PC? And this is weird for me, because I'm a Chromebook guy; everything occurs up in the cloud anyway. Yeah.
You know, that's a really good question. So many of these features, I feel, are being integrated in so many different ways that it's kind of hard for me to keep track of which things actually require, or benefit from, having that dedicated processor on the device versus going to the cloud. I think the overall benefit is that in a perfect world it would all happen locally, because that potentially brings down the latency of these processing tasks.
It protects some privacy, or at least gives us a sense of better privacy, because we're keeping it all on device and not sending all that information out. So I'm not surprised that Microsoft is doing this. I wonder if Apple is going to go down this road. We've got WWDC next month, and we largely expect Apple to more clearly define what their artificial intelligence play is going to be. I don't know that I necessarily see Apple doing this, and I don't necessarily see Android doing what Microsoft is doing here either.
But, man, you never know. One of the bigger pieces of news related to Android from last week's Google I/O is just how deeply Gemini is being built into the OS, which tells me it could be possible to do some of this stuff. But what keeps coming up for me on this, and this came up on Android Faithful last night, is that a lot of these features, this "check out what AI can do when it's integrated into your OS" stuff, are again solutions looking for a problem. At the end of the day, is this something people actually want? Or is it, oh, we've got this technology that can do all these things, so let's throw the AI spaghetti against the wall in a million different ways? No one asked for it, no one actually needs it, but they're providing it in the hopes that people will develop a need for it. I don't know.
I think it's sometimes as simple as, oh yeah. Yes. Because nobody asked for ChatGPT in search. My opinion is it shouldn't be there. I think there's mixed reaction to Google's AI search. And now we've got to have AI in everything. And again, I keep saying this: Google has put AI into everything for the last five years.
I can't remember which I/O we went to where they declared that shift: we are now an AI company, right? And Microsoft was behind, so Microsoft's going crazy trying to add truffles into everything coming out of the kitchen, and Google then thinks it has to catch up, when in fact it's Microsoft catching up to Google. Yeah.
They're going back and forth. Yeah, they're kind of trailing each other along the way, right? And Apple, who knows? I'm going to be really curious to see what that message is in light of all this AI-heavy craziness we've felt the past week and a half. But you're right, they're following each other. A large part of what we heard from Microsoft is very similar to what we heard from Google: agents, Team Copilot, an AI assistant that's integrated into Microsoft Teams.
It's going to help you manage meetings, take notes, track action items. That sounds a lot like, what was it, Teammates, Google's Teammates that they mentioned last week. The introduction of AI agent capabilities for automation of tasks and workflows, stuff like that, similar to Google's Gems; they're all kind of offering the same thing. And I think the real big question is: okay, once you've done that, are the users actually going to be there, buying into this vision and this idea? I suppose that remains to be seen.
Yeah. I think that question must be answered before you find out whether this is a marketing benefit or not. You want to buy a laptop with AI inside? Well, what do I get for it?
Right. That goes right back to the title of this show, and this image I have in my mind, which I know I could never use because it would be a trademark violation: the Intel Inside logo, but with AI Inside in it. I could just see it: now there's AI Inside.
Anyway, Microsoft also announced a new multimodal Phi-3 Vision model, which can analyze images as well as text, part of the Phi-3 small language model family available on Azure. Oh, sorry, GPT-4o. This is another big piece of news: the GPT-4o model is now available on Azure's OpenAI Service, so you can get that multimodal capability using Microsoft as well. And then finally, and I thought this was interesting, I think you would find it interesting too: a partnership with Khan Academy. We had Sal Khan on, I think it was episode 10, to talk about how AI could really improve classroom environments. And essentially this partnership means Khan Academy is going to provide its free AI teaching assistant, Khanmigo, for all U.S. educators.
So essentially their Khanmigo product, free to educators, so that they can use it in the classroom. And so you're educated with AI inside. Yeah. Very interesting. Sal Khan's been really busy the last week and a half. You can tell it's all been, you know, with the book and these partnerships and everything.
And he appeared with his son at the OpenAI event. So yeah, he's diving into the deep end.
Yeah, keeping himself busy. And then you had included an article which I thought ties into this really well: Axios' Ina Fried wrote about the future of the AI PC and what that actually means. Essentially, it's this moment where there's a big integration of AI into computers, very much like what Microsoft is doing with some of these announcements, and the fact that it requires those dedicated AI processors we were talking about, like the Tensor chip on Pixel devices. It needs those neural processors to do a lot of this stuff on device. And you can imagine we're going to see so much more of this in the next few years, with PC companies leaning in heavily. Again, my question is: what is Apple going to do on that front? Are they going to join the herd, or are they going to do their own thing?
So Ina quotes an analyst predicting that 19% of PCs shipped this year will have AI capabilities (see the earlier conversation about what that really means), rising to 60% by 2027. Interesting angle here, and I'm not a chip freak, but this is a door opening for Qualcomm, Ina says. With their Nuvia acquisition, on-device AI is a big piece of how Qualcomm sees its chips changing the experience of using a phone or laptop. Intel, however, isn't ceding the turf, touting its AI bona fides, saying it'll ship a hundred million PCs with AI accelerators by the end of the year. It's like watching Apple two years ago: 5G, 5G, 5G, 5G, 5G. Oh, everyone.
Yeah. You know, that's another thing we were talking about last night is like, you know, still these phones come out with 5G in the name. And as at a certain point, it's like, okay, that probably doesn't need to happen anymore.
Because they all have 5G. When does that happen with AI? And I don't think we're anywhere near it, by the way, whether we want to be or not. But when do we hit the point where they go, you know what, AI is just so ubiquitous and assumed that we don't have to shout from the rooftops, it has AI?
I think what you said before is absolutely true: it's still in the solution-looking-for-a-problem phase. Though, because AI causes all kinds of consternation, it's also in a problem-looking-for-a-solution phase at the same time. I don't know if I want that around me. What's it really going to do? Can I trust its output? What's it listening to? What sources did it have coming in? Is this another way to try to get a subscription out of me? I think that's interesting to watch. Yeah, very interesting. We'll keep on having the show for a while until we get answers.
And once we get the answers, then the show's done. There we go: we've found the answers, we don't need to do it anymore. So, OpenAI. Let's talk a little bit about OpenAI, because last week we had the GPT-4o unveiling, and then all of the controversy around the voice, the voice being flirty and everything. It just continued to give me weird vibes.
Well, it turns out it's been the source of a lot of controversy for the company. In particular, the voice we were talking about sounds very similar to Scarlett Johansson. And if you ever watched the movie Her, you heard her voice in the exact same kind of role: the disembodied AI voice on the phone that the actor, I'm blanking on his name, falls in love with. Joaquin Phoenix. There we go. So this is interesting: it turns out Sam Altman of OpenAI actually reached out to Scarlett Johansson to ask her to be one of the voices of the ChatGPT voice system, and she had turned it down. She said, you know, she thought about it.
She turned it down. Then she heard the voice of Sky. A bunch of people she knew had pointed it out to her and said, hey, this sounds a lot like you. She said she was, quote, shocked, angered, and in disbelief.
And none of this is made any easier by the fact that during the event, Sam Altman tweeted out a single word, "her," while this was all being shown off, which really goes far to link all of this together.
I didn't mean her. I meant her.
You know what I meant to say? I meant to say "here." I just wanted people to know that I'm on Twitter right now. Darn autocorrect. Dang it. You can't say that I didn't. Altman attempted to contact her before the event.
Two days before, he went to her agent and asked for a reconsideration. A reconsideration? It was already done. The voice may have been...
It was scarletized, you know. It had already been done. Yeah. So let's think about that for a second: a reconsideration. If it's two days before the event, what was going to happen? Were they going to have her come in and re-voice it? You know what I mean?
Because the voice isn't hers, what they wanted was permission to do what they'd already done. The computer could do it. How many of these companies have now said they can recreate your voice from next to nothing? There are plenty of voice samples of her; there's the whole movie Her. So they didn't need her to come in. They just needed her permission.
But OpenAI released a blog post, or some sort of article, that detailed how they hired and recorded all of the voices, because there's a total of five voices.
This is just one of them, called Sky. And they said specifically in that post that they hired an actress, and that they did not hire her because her voice sounded like Scarlett Johansson. But, I mean, the likeness is pretty darn uncanny.
It's pretty darn close. So I don't, I don't actually believe that it is Scarlett Johansson's voice. I do believe them when they say they hired someone to, you know, and that actress is the voice of it. I think it'll be interesting if that turns out to be not true.
You're trusting. No, no. I think so.
You think they actually sampled her voice, Scarlett Johansson's voice?
I think they used a whole bunch of samples, maybe, but I would bet anything they sampled her voice. So some whistleblower may come out. Because they went to her; she said that in September they made an offer to her to do this, and she turned them down. And she's active on this issue; she's a union activist.
This is not smart all around. And then to come back two days before, when they've done whatever work they've done, why did they feel they needed to come to her? Because they knew it sounded like her.
How much did they make it sound like her? Yeah. Well, Mr. Trump, why did you sign these checks? What did you think you were buying? You know, it's intent. I think this could well end up in court.
Now, if this isn't actually shaped by Scarlett Johansson, and it's entirely another voice actress who lent her voice, I mean, is there a case there? Because it's somebody else; her voice just happens to sound like that. That's interesting.
Well, a Scarlett Johansson impersonator, I suppose. So, right: is that the real Scarlett, or is that Sally, and Sally is playing Scarlett? Who knows? So it all goes to use of likeness. Yeah.
And your personal brand. I saw one story, I thought I'd put it in the rundown, but I can't find it right now, that said there's a presumption this may help the passage of new law on likeness and reputation and such. But again, you're right, Jason. If I find somebody who sounds like Scarlett Johansson, and I use her because she sounds like Scarlett Johansson, but she's not Scarlett Johansson, is that a violation of Scarlett Johansson? Or does that actress have the same right Scarlett Johansson does to hire herself out?
Right. Because it's not like the app says, this is the voice of Scarlett Johansson. They're not explicitly saying it is, even though people will draw the correlation if they've seen the movie, if they're familiar with any of that and with the likeness of the sound, the differences between the two voices. Yeah, it's really interesting. It goes back to what you and I talked about however many weeks or months ago about Elvis impersonators and how those need to continue to exist. It's a similar situation here, just with a voice. I don't know. I'll be curious to see how this pans out.
It'll be interesting both in courts and in legislation, because I think anything that goes against the tech company is not going to say, oh, evil tech company is stealing her soul. But the larger issue is, do you ever know what's real? And less and less do we rely on technology to know what's real. More and more, we have to rely on human beings to know what's real. When Scarlett Johansson says, ah, that sounds like me. Stop it. Right.
Is it enough if it only sounds like her and wasn't her? Yeah, I guess that's the big question. And the irony is not lost here: AI tech companies and these founders are so often in the crosshairs for using data they find online, scraping data, whatever you want to call it, to inform their models. People are wagging their fingers at them for taking other people's information and going rogue with it, using it however they want. And here we are in a situation that's another example of that, just a different kind of example. Yeah, it'll be interesting.
The overlap is very interesting. In March, Tennessee passed the ELVIS Act. Right. Which updated the state's protection of personal rights law to include protections for songwriters, performers, and other industry professionals whose voices are being misused by AI. And there was a case in the eighties involving Bette Midler, who sued Ford over commercials for its Mercury Sable, which I wouldn't think would be a Bette Midler kind of brand. They used an impersonator when she refused to sing for the campaign, and the court sided with Midler. In so much of the law, the intent is important: did you want it to sound like Scarlett Johansson? There's also an Information story about the law.
A law professor said that OpenAI's strongest argument is that they were going for a broad style of voice, but missteps weakened that defense. Namely, "her." Right.
Yeah, exactly. Like, dude, don't do that. Don't tweet out that word the second the demonstration is happening, especially two days after you tried and failed to get Scarlett Johansson's sign-off on this.
That's not smart. Yeah. This is really going to be something interesting. And if this does go into a court case and they do open the books, you know, there was the immediate pulling of the voice, even though the voice has been around since mid-to-late last year; these voices have existed for some users. This is just the first time, I think, that the larger public heard it, and also heard it with that new flirtatious tone that was going on there. That was just a little bit too reminiscent at that point, where it's like, all right, ding, ding, ding.
Yeah. I think it wasn't just that it was sounding like a star. It was also that it was trying to be too ingratiating, too human, too saccharine, too smarmy. That felt intrusive too. Don't try to suck up to me, machine. Right. You're just a machine. Keep that in mind.
Tell me I'm handsome, machine. It went down that road real quick. Mirror, mirror on the wall, who's the fairest of them all? Right.
And I will always tell you what you want to know, what you want to hear.
Oh yeah, for sure. That's kind of weird when you start getting into these voices that, you know, sound like other people and everything. Then suddenly Scarlett Johansson is telling you exactly what you want to hear.
And yeah, I could imagine she wouldn't be too happy about that. Just real quick, related: more people on the OpenAI risk research team are leaving. We knew about Ilya Sutskever and Jan Leike; now the superalignment team, which was in charge of examining the existential danger of superhuman AI, is disbanded. Leike said on Friday that the team had been sailing against the wind.
And so, and I'm sorry for those of you who are regular listeners, I'm going to go off on a real quick TESCREAL rant here. I'm frustrated by the reporting of this. Media tend to accept the word "safety" at face value when it comes to these discussions of AI, but everybody in OpenAI is in the cult of AGI and existential risk. The "safety" people, air quotes, are the more fanatical about that. So one could argue, in some views, that the safety people are the least safe, because they're the ones going on and on about x-risk and all that stuff. And so when they leave, one could interpret it two ways: the safety people have left and there are no guardrails now, or the fanatics about x-risk, the doomers, left. It's a hall of mirrors.
It's really hard to figure out where OpenAI really stands now on these issues, but media tend not to do their homework. Reporters don't do their homework. I bring up TESCREAL because it's so hard to explain, and they just say, well, there's no safety team now. What are they going to do?
Yeah. And I'm realizing I have some stories out of order, so I think our next one should probably be the paper in Science, which you had put in there, shining a spotlight on AI doom once again, with a focus on managing extreme AI risks amid rapid progress, and really leaning into the extreme risks: things like irreversible loss of human control, weapons deployment, extinction of humanity, all that kind of stuff. And you actually wrote about this. Tell me a little bit about it.
Yeah. So a couple of days ago there was a meeting in Seoul of AI people that, I heard, was trying to talk about managing risk and understanding risk, and that's all fine.
I'm all for that discussion. I don't think this paper came so much out of there, but a paper came out at the same time with 25 authors, including Geoffrey Hinton. AI has many fathers, and he's one of them. Yes. Yoshua Bengio, from Quebec, who is fairly well known. And Yuval Noah Harari.
I'm not exactly sure how he ended up in there, but anyway. And Daniel Kahneman, may he rest in peace, is included as an author. So they go on about all these dangers, and it's really a doomster kind of paper. You already listed the problems: that it could lead to a large-scale loss of life and of the biosphere, and the marginalization or extinction of humanity. And the problem I always have with this stuff is that it distracts from the very real current issues.
And I always cite the Stochastic Parrots paper, which is very good, about the environmental risk, about anthropomorphization, see the Scarlett Johansson discussion, about the harm to the people who have to clean up data, and so on. And so I get concerned when the focus is so much on doom. And then the other concern I have, my complaint is that the doomsaying concentrates on the technology over the human use of it. And it's human misuse.
That's the issue. They keep on acting like this is all about the technology. No, it's about us. It engages in this third-person effect, that everybody else is going to be hornswoggled by this machine, but we're okay, because we're smarter than that, because we made the machine. And it imagines that the technology is the solution to the problems the technology raises. So what I then say in my post is that the real bad news here is that it's a general machine, just like movable type, and there's no way, no way, to build foolproof guardrails around it.
And so this paper kind of acts as if there's a magic solution to the magic technology. And the problem there, it's not that I'm saying it's all safe and wonderful. What I'm saying is, you're distracting from the real work that has to occur now. And this is why, this is another one of my hobby horses, it's time to stop having AI people lead the discussion about AI. It's time for amateurs like Jason and me to step in, and for other disciplines and other concerns to say, these are the concerns we have. And so, you know, I didn't much like that paper as a result.
I think a lot of people will say, oh, good, they're talking about safety, this is really important. But I didn't think it was so good.
Meanwhile, I put another one up. I think it was the University of Pennsylvania, Annenberg, the Sunnylands institute, that brought together 25 people: a friend of mine named Wolfgang Blau, who's a journalist; Vint Cerf, who we all know is one of the many fathers of the net; Susan Ness, somebody I know well, a former FCC commissioner; and some smart people. And they came out with a paper that was just much saner. It's not very specific. It gave five principles of human accountability and responsibility, and they're all fairly obvious: transparent disclosure and attribution, verification of AI-generated content, documentation of AI-generated data, a focus on ethics and equity, and continuous monitoring and oversight. Okay.
You know, not a big deal. I think we're getting overloaded with papers about AI, but at least in these two I saw some contrast. One is future tense and one is trying to be more present tense, and I'm in favor of the present-tense analysis, because we don't know what the future looks like. Yeah.
Yeah. And I think the second paper you're talking about, it says right here, a new strategic council to guide AI in science, it's really about looking for, as they say, the opportunities that AI will actually bring to the table for the sciences. And yes, looking at those unanticipated things that might be there. But if doom is the entire focus, then you lose the forest for the trees, I think. Yeah.
It's very distracting. Yeah. Interesting stuff. Love it. I'm happy you put that in. We're going to take a super quick break, and then we'll come back and talk a little bit about some other stuff that isn't quite so doomy, but interesting nonetheless. Back in a second.
All right. So touching back on Google from last week, we've got some follow-ups from Google I/O and what we're starting to see around Google's AI search product. Before we get into these news stories: are you starting to hear from people who don't normally follow this stuff, people in your world who are now seeing their Google search change and going, well, what the heck is that?
I haven't heard much of that yet, because I'm not seeing it that often. How much are you seeing it and hearing it? Yeah.
I mean, not much, but I have heard a couple of comments where it's like, I don't know, what is that and where did that come from? And what it gave me wasn't right, you know? And I just have to remind myself, oh yeah, I've been in the beta for a long time; I've gotten very used to it. But a lot of people who don't have their fingers on the pulse of this stuff as it's happening are now suddenly getting introduced to it, and forced to, because there is no opt-out and it's just rolling out to everyone in the U.S. And their first interaction with this big AI thing they've heard a lot about is: oh, but that's not right. Why is it saying that's the way it is when that's not the way it is?
And I don't know. First impressions are a big deal. What does that do for the long-term longevity of it? I don't know. I think that's interesting.
Yeah. So people are getting this product, and of course Google wants to make its money, and it does that with ads a lot of the time. So get ready for ads in AI Overviews. They are coming, according to this post by Google: they're going to be testing search and shopping ads in AI Overviews.
That's going to start soon, so you might start to see some of those appearing, depending on where you are. Oh, here's what I was looking for: you do a search for summer tops, and you might get these interactive elements tied in. Well, this one in particular is about using AI to put clothing that maybe you're selling on different body types, which I actually think is a really great use of AI. But also, either above or below that AI Overview at the top of your search results, some sort of sponsored results that tie into what you're looking for. So if you're asking, how do I remove this bloodstain from these clothes, it might give you that overview, but injected into it is a carousel of different sponsored products around that. And how is that different from what we get normally?
AI comes into advertising all kinds of ways. Obviously, AI is already there when it comes to programmatic advertising and the auction that occurs for your eyeballs every time you open a web page; those auctions occur in an instant. Now, getting ads next to those AI search boxes is another media opportunity, in essence. And then creating richer advertising they can charge a premium for is also another opportunity.
And these are areas where, once again, old-style media are left in the dust. You can have a banner. Can we sell you a banner? Oh, you want to be in our newsletter?
That's pretty much all they've got. Whereas, you know, will people really use this to say, how does this look in my room? We've seen these gimmicks for a while. I'm not sure, but advertisers love new things. They always want the newest, latest thing. So they will rush to some of this, for at least a while. Yeah.
And what does that do for the longstanding players? Yeah. I haven't listened to the whole thing yet, I've listened to about half of it, but Nilay Patel interviewed Sundar Pichai, and it's definitely worth watching the interview. You know, Nilay does not take it easy on Sundar Pichai.
I've heard about half of it too. Yeah. I'm eager to watch the rest.
Yeah. Around the potential impact on the way the web operates, and kind of similar to what you're talking about: the smaller sites, and how they are, in some ways, getting edged out because of technology like this being put in, and how the smaller websites have less of a chance to survive, at least according to Nilay. Sundar said the opposite, said, maybe it's not our product that's doing that to them. The analogy he used was that when a restaurant suddenly gets less busy, it's not because there's no food there.
Maybe it's because a restaurant moved in down the street that people are now going to instead, that sort of thing. So yeah, it'll be interesting to see how that influences all of this.
It will, but it also comes at a time, and this is not directly relevant to AI Inside, when there are efforts to pass legislation right now in California to get Google and Meta and company to pay for destroying news, which they didn't do; hedge funds did, but that's another discussion. Google is talking about pulling out of the things it does voluntarily for the news industry.
And if you put that together with these new advertising abilities that little old media won't be able to match, it's going to be a very interesting, tense time between Silicon Valley and the news world. Yeah, no question. That's interesting.
Let's see here. Reddit data, very valuable data. Last year Reddit pulled its data from being used to train AI, because Reddit said, hey, if this is going to happen, we want to get paid for it.
We'll let you know when the time is right. We know they've already struck a deal with Google, a $60 million deal, for their data. And now last week they announced a strategic partnership with OpenAI: OpenAI gains access to Reddit's data API, its real-time structured data, which is an incredibly valuable resource. I mean, I use Reddit for so many things nowadays. It's really become an everyday kind of source for me, sometimes around search, trying to figure out how to do something, or just to figure out the temperature of a certain thing with the community. Reddit is a super valuable data resource. It'll be interesting.
Yeah. I wonder what Redditors will think of this, same as with Automattic and WordPress. Especially in the case of Reddit, where there are all these volunteer moderators who have made Reddit a success, in the quality of conversation versus elsewhere.
And that quality is what's being sold to OpenAI, for sure. What's the benefit to the conversants, to the moderators? Reddit needs to make money, and I get that.
That's fine. They don't make a lot of money, and so this supports Reddit and it supports the community. I'm okay with all of that. But what benefit do we get?
And also, how does this get used live? I'm of the mind that if you use this just to train models, you're not going to see it played back to you. I think that's generally okay. I think it's fair use and transformative. But in this case, it hints at saying, we're going to use this real-time data.
And if I'm in the middle of having a conversation over here on Reddit, thinking it's on Reddit, and suddenly it's getting served up to people who are asking questions of ChatGPT, I don't know what I think about that. That gets closer to copyright issues and closer to ownership issues.
So what are you saying? That the comments someone makes are wholesale lifted and presented in a different way, or that they're used to inform the model as a whole and teach the model? I believe that's fair use and transformative. Yeah, if you didn't get access illegally, right? If Reddit said, no, you can't do this, you can't come here, and you're going to honor our do-not-scrape,
and you go and scrape it anyway, that's a problem. But let's just say that you get it. Okay. My issue here, as with the news article, as with anything else: if you are then going to take verbatim something that's said in the New York Times or on Reddit and reproduce it over here, then there needs to be something. But Reddit in this case is saying, okay, well, we've licensed it. Well, what about the Redditors?
Yeah, that's the question. And I guess it remains to be seen how that'll be used, whether it would be a straight lift or more of a generalized interpretation of that information.
If you're the New York Times and your article gets quoted verbatim in OpenAI, but the New York Times did a deal with OpenAI, well then, tough noogies, you're an employee. But if you are a Redditor or a Reddit community manager...
Yeah, okay, I see where you're at. Right. And what are the details of the deal specified here? Is it that that information can be lifted wholesale and presented, like you're saying, specifically and directly, or is it used in a more general sense? What does your gut say? Yeah, my gut tells me it would be more general, but maybe I'm optimistic.
They just keep on talking about this value of the real-time conversations.
Yeah. But I mean, the real-time conversations could still give an indication of the direction of sentiment, you know what I mean? Without being specific. Like, hey, what are folks saying about this? Yeah, totally. Folks seem to be generally positive about this thing, or whatever the case may be.
But in that case, what I kind of want is it as a value-add to Reddit. I'm not a dedicated Redditor, but if I'm a Redditor and I want to say, oh, I've been gone for two days, let me find out what's going on, can you summarize the conversation in my favorite community here? That would be a value-add to Reddit, and I think that can be really useful. I get that. Right. However, once again, if somewhere over on Bing, the conversation you and I have about something informs what's happening over there, and I don't even know it. Let's say I'm doing satire and I'm joking about something, and the machine is too dumb to understand irony and takes it seriously. And suddenly over on Bing they say, well, people are having a fit about this, when in fact it's jokes, right?
What does that cause? Well, that was somebody kind of listening in to our conversation to exploit it. And again, if you're an employee of the New York Times, that's life. But if you're on Reddit, I just think no surprises would be a good policy. Yeah.
Well, as you said that, I was like, okay, yeah, Reddit is home to many people who are very dryly making jokes about a million different things just to make fun of something or whatever. So I think you're absolutely right. Yeah. How do you get that nuance?
Because those communities aren't just pure information. A lot of times it's a lot of memes; it's a whole soup. Yeah, that's interesting. Hadn't considered that. I'll be curious to see how that plays out. Okay, the Humane AI Pin story: things are not going so hot right now. I have to start laughing.
Oh my goodness. If the as-bad-as-it-gets reviews weren't enough, the company is seeking buyers, according to a report from Bloomberg, asking between $750 million and $1 billion, which feels like a lot considering everything that has happened in the last couple of months. It was valued at $850 million by investors in 2023, but that was before the product, the Humane AI Pin, their key product, was skewered by reviewers last month. They are still working to iterate on the device; just last week they brought GPT-4o to it. There's that. But I'm getting Quibi vibes. Yeah, Quibi vibes.
That's a good one. Yes, I like that. Yeah. Two questions. One: what is of value at Humane, maybe to an acquirer? Maybe there's some talent there doing things, okay. But I don't think the hardware is of great value, and I don't know that the software they have to date is of great value. So, are there people of great value?
Maybe. In that case, though, what's it really worth? The total investment has been $230 million, according to Bloomberg, including, by the way, investors such as Sam Altman.
Which is interesting, because Sam Altman is apparently working on some hardware with Jony Ive. So, right.
So, you know, will the investors get paid back? Even that I doubt. It'll be quarters on the dollar if they're very lucky, and probably via an acquirer. I just don't know, unless there's some dumb money out there, and there is: well, this is cool, it's really going to be wonderful, we can make this into something. But it's still not worth a billion dollars. No effing way.
Yeah, seems a little bit out there. You can start playing taps for Humane. If I had the sound effect, I'd start playing it. Maybe it's a little early for Humane taps, but I think it's on the horizon somewhere. Maybe just a get-well card. Okay, that works. Yeah.
Go ahead and pick one of those up. And finally, Meta is working on AI earphones, and not just earbuds, but ones that would have a camera on board. The Information says they've been designing a couple of different ways to do this, in-ear as well as over-the-ear formats, to find the right approach.
But of course they're running into issues that we can probably all guess: things like obstruction, since there's a camera on the device and you've got hair in the way, how do you work around that? And overheating issues.
Because it goes into the body, into your ear canal, and how does that heat get maintained and managed? But that's interesting. And I guess when I think about this, I'm kind of like, okay, I can see the glasses format, you know, glasses becoming small enough and normal-enough looking that that becomes the approach. But not everybody wants to wear glasses.
So maybe there are alternatives. Maybe. Or you can be a Cyclops and just put an eye in your head, your own personal Eye of Sauron.
I like that. That's, that's our startup idea, Jeff.
Let's, let's, uh, it could be worth a billion dollars before you know it.
Exactly. Even if we get skewered, maybe we can get a billion-dollar buyout. This also reminded me of the Iyo One, which we talked about like a month ago. And I think it was the Iyo One Pro or something like that.
That was like $1,500. Yeah. I mean, who knows. Although this didn't have the camera involved; it's just a big hockey puck in your ear, I guess. So it's interesting.
I mean, again, as we discussed on the show a few weeks ago, it's interesting to see people try and try with hardware, but so far, no winners. I can't wait to hear what you think. Do you have any buyer's remorse before getting your Rabbit, or...
No, you know, it's really funny. I've actually considered doing a video about this. And by the way, on the Rabbit R1, there was a video on YouTube that is definitely worth checking out. I wish I could remember off the top of my head the name of the channel that did it, but it makes some really serious claims about the founders behind it and everything, about how much of a scam the company is, even with the device being what it is.
So there's something brewing there; I'm getting kind of crypto vibes from that. But I've been considering doing a video about this because I have no buyer's remorse. Yes, it's $200, and I don't even have the device yet, and I still have no buyer's remorse, because at the end of the day, I didn't really buy it for the device as much as I bought it for the Perplexity subscription. And I'm really happy with what I get out of, and learn from, Perplexity.
And it's a lot cheaper than $1,600. Yeah, absolutely.
Yeah. Big difference. I'm happy I bought that and didn't get the Humane AI Pin; with the Humane AI Pin, I would have felt really burned. And, you know, is the Rabbit R1 going to deliver on the promises that made it cool, that large action model? I'm not entirely certain that it will. But even if I get nothing out of the hardware more than a set piece on my bookshelf behind me, what I realized is it really got me accustomed to what it's like to lean into these AI tools on a deeper level. And it's really been very helpful to me as an independent creator. It has helped me be far more efficient and really nail down some of my workflows, in a way that, if I didn't have it, I'd be doing the same work, it would just take me a lot longer. And so I really appreciate that. So yeah.
All right. Good. I can't wait till you get it and report back. Totally.
I am very curious, and I will give it a fair shake once I do get it. Who knows, by the time I get it, maybe they will have figured out some of this stuff. But I'm not holding my breath on that.
I don't know. I agree. We'll see.
It hasn't been good so far. And then, related to this: Meta. You had put this article in there about their approach, according to the Wall Street Journal, which wrote about their open approach, giving the AI away for free, how that's so counter to what other big tech companies are doing with their AI, charging for the services, and how Meta's totally open-source approach could really be the winning one for them in the long run.
Yeah. What I've been saying is, they're the spoiler. Right. Because it's like phones, where we don't see a lot of real differentiation among all the phones. Oddly, we don't see a lot of real differentiation among all the AI models.
They all come out of neural nets; they're all doing very similar things. And Meta doesn't have quite the same opportunities that Google and Microsoft and Apple have to monetize AI themselves, so they can spoil it for their rivals. The same way Apple spoiled it for everybody when they couldn't succeed at advertising, so they said, okay, we're going to be the privacy place, just to screw you. And the same way Android did, following the iPhone: well, we can't beat the iPhone, so our biggest thing is we're going to be free. I think it's a legitimate strategy.
Yep. I think it is too. You know, really all these services are charging $20, although you have OpenAI releasing GPT-4o for free.
Google has its Gemma models. They have these free options, but neither is releasing the code the way Meta is. Neither is saying, here it is, businesses, have at it. And Meta does have some limits, but those limits are pretty tall; you've got to be a pretty large enterprise to hit some of them. And yeah, I agree with you. I think Meta is taking a really interesting, almost Android-like approach to AI, and it could really be the strategy that takes them a lot further.
And they have good people doing AI there. I have a lot of respect for Yann LeCun. I think he's a saner version of the AI boys, and so I trust his prognostications.
Yeah, right on. All right. Well, with that, we have reached the end of at least the news stories that we chose for this show. If you want to hear Jeff Jarvis talk about probably a few other AI stories that we didn't get to, I'm sure you'll be talking about them on This Week in Google here in a little bit.
Thank you for the plug and Scarlett Johansson. Yeah. Yeah. Oh yeah. I know.
Yeah, probably some of these stories are going to get some overlap, for sure. It's hard not to talk about an OpenAI Scarlett Johansson story when it comes up. Gutenbergparenthesis.com for people to check out Jeff's work: the excellent Magazine book, and of course The Gutenberg Parenthesis. And then you've got a new book in the works. Any ideas about when people can expect it?
About 120 days out, they'll put up a discount code. So, oh, okay. Excellent. And lo and behold, The Web We Weave. Here's the cover. I love it, but there it is.
It looks great. I love it. I love when it's tangible like that. That's awesome. Well, good work, Jeff. Thank you so much.
Always a pleasure to do this show with you. If you want to follow what we are doing here at AI Inside, we record live every Wednesday at 11 AM Pacific, 2 PM Eastern. All you've got to do is go to the Techsploder YouTube channel, which is kind of the home for a lot of the content I'm producing. Just find Techsploder on YouTube, and you'll see a little live call-out when we have a live show going.
And then of course, if you don't want to join live, that's totally fine. We publish this show every single Wednesday, later in the day; you'll see it in your podcast feed. Make sure to like, rate, review, and subscribe wherever you happen to listen. And of course, support us on Patreon, why don't you? That is a really wonderful way to support us directly in what we do with AI Inside: patreon.com/aiinsideshow. Ad-free shows, early access to videos, a Discord community, regular hangouts with me and Jeff and the rest of the community. And you can also be an executive producer of the show, like DrDew and Jeffrey Marraccini, who continue to be our executive producers, and we appreciate that so very much. But that's it for this week's episode. Go to AIInside.show for everything you need to know about this show. And until next week, we'll see you then. Take care, everybody.