Jeff Jarvis and Jason Howell talk about the challenges of Google's AI search errors, the implications of OpenAI's funding and internal conflicts, Elon Musk's ambitious xAI supercomputer project, and the potential of AI for companionship and sentience.
Support AI Inside on Patreon: http://www.patreon.com/AIInsideShow
NEWS:
- Google’s A.I. Search Errors Cause a Furor Online
- Google AI Search getting everything wrong
- Elon Musk raises $6 billion to challenge OpenAI
- Yann LeCun: "Join xAI if you can stand a boss who..."
- Elon Musk plans xAI supercomputer, The Information reports
- OpenAI Says It Has Begun Training a New Flagship A.I. Model
- Former OpenAI board member explains why they fired Sam Altman
- Anthropic hires former OpenAI safety lead to head up new team
- AI Is a Black Box. Anthropic Figured Out a Way to Look Inside
- Apple’s AI to include transcription, photo editing, search, & “software that can create custom emojis on the fly”
- Report: Apple signs deal with OpenAI for iOS, still wants Google as an 'option'
- California Senate passes really bad AI bill with big impact on open-source
- How A.I. Made Mark Zuckerberg Popular Again in Silicon Valley
- Could AI help cure ‘downward spiral’ of human loneliness?
- No, Today’s AI Isn’t Sentient. Here’s How We Know
Hosted on Acast. See acast.com/privacy for more information.
This is AI Inside Episode 19, recorded Wednesday, May 29th, 2024. What's in Anthropic's black box? This episode of AI Inside is made possible by our wonderful patrons at patreon.com/aiinsideshow. If you like what you hear, head on over and support us directly, and thank you for making independent podcasting possible. What's up, everybody out there in podcast land? I'm Jason Howell.
Welcome to AI Inside, the show where we take a look at the AI that's hiding inside everything, and the dealings that are going on behind the scenes to make that happen. At least that's the case today. And I think for the foreseeable future, we're going to be talking a lot about this stuff. Joining me as always, my co-host, Jeff Jarvis. Good to see you, Jeff. Good to see you, Jason.
This is Jeff's pre-funk to This Week in Google's AI section. But you guys probably end up talking about some of the same stories and some different stories, because you always bring a lot of links.
I mean, I do put the same articles in both rundowns, to be clear. The news is the news. Yeah, so the homework is useful for both. For sure. It's because unlike this, which is a democracy, TWiT is not. It's an old joke. And so he does whatever the heck he wants, and sometimes I drive him to some stories. But yeah, no, they're very different shows, which is good. Very different.
I'm sure you're going to be talking about, what was the big Google news today? It's the Chromebook, the new Chromebook. Oh, you're going to be talking about the Chromebook. Of course you are. Yeah. Yes. And I mean, there's some AI to that story
too, you know; that's a really big part of it. They're pushing out the new Chromebook series, which actually I don't have in the rundown, but we did talk about it a little bit last night on Android Faithful. And really, at the end of the day, the story is Google is Gemini-fying
everything that it has its hands in. Yeah. And it's interesting, because whereas Microsoft, last week we talked about that, they made, you know, the AI laptop. Yep. And so what is Google doing? It's not like the Chromebook can handle a Tensor chip right now.
That would be overkill, I got to say. So what is it? It's software. It's the cloud. Yeah. You get more Gemini up in the cloud. That's all it really is. Yeah.
Yeah. We, um, God, what was his name? Now I'm suddenly blanking. We actually interviewed someone from the ChromeOS team; I'm dancing while I look this up. John Maletis, who's the VP of Chromebook, on Android Faithful on this week's episode. And that was actually a question that I asked him, Jeff. I was like, so at what point do Chromebooks require an NPU, some sort of dedicated chip, like a Tensor chip? And obviously they don't reveal their future intentions, but he did allude to the fact that, as these systems are built out further, there might be more and more of a need for NPUs and chips like them, dedicated to AI, to be on the device, even with Chromebooks. So I wouldn't be surprised at some point to see that.
You know, it's interesting. I hadn't thought of this till this moment, but we, the common people, see AI through a web browser primarily, or through certain applications, like Air Canada's chatbot, but it's still just a front end to something going on in the backend. And there's a lot of talk, obviously, about more AI happening locally in phones. But I was just thinking that if you run, let's say, Linux on a Chromebook, you can do everything today that you could do anywhere else on AI, because it's basically web-based plus a few applications. I wonder whether anything becomes native, an application layer on these machines, which will be interesting, if new things get written for AI capabilities on local devices. That whole industry, I don't think it's really started yet. GPTs aren't in there either, because GPTs happen in the cloud.
Is there going to be application writing around AI capabilities at a local level? I don't know. Yeah. Well, we're seeing that a little bit on Android, right? That was part of the announcement two weeks ago at Google I/O, as far as Android is concerned: because of the Tensor chip, and because of the onboard processors that are dedicated to AI processing, they are able to move a lot of Gemini onto the device, keeping it out of the cloud and addressing some of the privacy and security concerns that people have. So yeah, I think that's exactly what's going to happen. I think these laptop OSes are going to start to go down that path too. And it's going to be really interesting.
But if things go the way they are with AI, this is my transition to the first story. If things go the way they are, maybe everybody's going to want to back away from it.
It could be the case. And that's what we're going to talk about. Before that though, I do want to make sure that I don't forget to throw out the names of some patrons who support us, patreon.com/aiinsideshow. We literally could not do this show without you folks. And so we appreciate you being here.
JL Revilla is one of our patrons, also one of our newest patrons, because I'm realizing I've been naming the people that were there from the beginning, and then we've had all these new people join, and it's going to be forever before they hear their name. We hope so. We certainly never want to run out of names.
We never want to run out of names, but I'm going to start going at the end and the beginning. And the pressure's on you to sign up, patreon.com/aiinsideshow. So you can be like JL Revilla and our newest patron, Josh Wassing. Thank you for joining us, Josh. Thank you, JL, for your month's worth of support so far. We could not do this show without you.
Okay. So what was Jeff alluding to there? Google pushed out its AI Overviews en masse in the United States a few weeks ago and really just basically said: look, we know it's imperfect.
We know that sometimes AI hallucinates; it's just a problem, deal with it. And now what we're seeing is it's just becoming a constant kind of, I don't know, what do you want to call it? What is the game at birthday parties where you've got the bat and you're swinging at the thing trying to get it to... Yeah, see, my brain doesn't work.
My brain's not working either. You know what I've started doing lately? Because I'm getting old, when I'm forgetting words and having a senior moment, I'm going more and more to Meta AI, because it's the easiest. What's the thing kids hit at a birthday party?
I can't believe that we have to look this up, Jeff. This is so embarrassing, but this is how my brain works. You're thinking of a piñata. Yes. Why was that hard?
I have been using it for that more than anything else where I'm thinking, I know there's a better word. What was the word? It's my new thesaurus and dictionary. Yeah. Okay.
So then you've found a use for it. That's okay. It's a use. Yeah. It's a use. It's a very expensive thesaurus, but yeah.
Totally. But I would absolutely agree when there are times where my brain fails me, like that embarrassing moment that I just experienced on this show that will live forever. I'll leave it in there.
It deserves to be heard, I suppose. But when things like that happen, there are times where I'm like, what is that word? I don't even know how to figure out how to search for that word. So then I just open up an AI, start typing stream of consciousness around it, hit enter, and it figures it out. So I think that's a great use.
Which is the most logical, proper use for it, because what is generative AI? It is the relationships of words. Yes. That's true. It has no sense of meaning, which we're going to get to in a second, but the relationships of words is a perfect way to find the word you're missing. Okay, I'm going to give you all these words around it. What's missing? That's exactly how it learns. Oh, I know how to play that game. Here it is.
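That word game is essentially how masked language models are trained: hide a word and score candidates purely from the words around it. Here is a minimal sketch of the idea, assuming the Hugging Face transformers library is installed; the model choice and the prompt are illustrative, not anything discussed on the show.

```python
# Minimal sketch of the "here are the words around it, what's missing?" game
# that masked language models learn. Assumes `pip install transformers torch`.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# The model ranks candidate words using only the surrounding context.
for guess in fill("Kids swing a stick at a [MASK] full of candy at birthday parties."):
    print(f"{guess['token_str']:>12}  {guess['score']:.3f}")
```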
Right? Right. And it's amazing at that. It's really good at that. It is not good at facts or knowing what a fact is. Or telling you what you should eat. Or telling you what you should eat.
Exactly. People are sharing all of these, air quotes, facts that aren't true, that are coming out of Google's search experiment here with AI Overviews. And what's so interesting about this is the way Google did it: they basically said, we know this is imperfect, but we're featuring it in our most well-known product for everyone anyways.
And so as a result, a bunch of people who have never used AI before, who have been using Google search for literal decades at this point, are now learning that Google is saying some really wildly stupid things because of the AI. For example, adding glue to pizza makes cheese stickier. Eating one rock per day is good for you. Staring at the sun for 30 minutes is safe if you have darker skin. These are all not true.
Don't do any of that. But Google search is saying this in their AI Overviews, and they understand. Sundar even acknowledges: yeah, look, it's imperfect.
It has a hallucination problem. Okay. But what kind of damage is it doing to you, Google, when your main product that everybody knows you for is now suddenly telling people to do harmful things? Like, that's not good.
Yeah. And my take on this, which I tweeted, because you tweet everything you think as soon as you think it, is that Google blew this, in the sense that they could have stood back when Microsoft went all in on ChatGPT and it did all kinds of stupid stuff. Google could have said: well, we're not desperate to do that. We're the company you can trust to be right. You've trusted us for years with Google search. We're keeping that going. We're not going to put this crap on our beloved search, affecting our brand. Let desperate Microsoft do it.
We're different. Instead, Google screwed up in two ways. One, by screwing up its search, and two, by looking desperate, by looking like it's behind. It just wasn't smart all around. That's not to say they couldn't play with it.
They couldn't put it somewhere as an experiment. On my search now, I have that ridiculous Gemini box to the side. It's there. But to put it on top and say, this is a reliable search for you, and act like it was going to be decent, I think was just wrong. I've been saying from the very first, it shouldn't be associated with search.
It shouldn't be associated with writing articles, news articles, unaided. And I think it affects their credibility; it affects their brand. And I'm probably wrong here, because every stock analyst would say, no, they've got to be on top of AI. But that's stupidity. They could have played this differently, I think. What do you think, Jason? Do you think they had to do this?
I don't know that they necessarily had to do it this way. I mean, I understand that Google really wants... Google, I'm imagining, realized that it should have been ahead of the wave of AI, based on the work it had done for years and years prior, and that the public perception may have been: oh, Google, where were you?
Were you sleeping on the job? Look at these newcomers doing all this fancy stuff with AI. We would have expected you to be able to do that, and you're not. And so I understand that Google probably feels like it needs to play catch-up, and obviously this is the playbook they're running: go out of its way to reassure everyone, no, we have AI everywhere.
And it's amazing. It could do all these things. But did they have to foist it into the top of search and force people to interact with it? It's not even on the search page down at the bottom. It's at the top.
It is meant to be seen no matter what. And when you take that along with some of the ridiculous things that are happening and being shared up in that space, all it does is impact trust in the Google product. Even though AI search is different than their kind of legacy search, they're positioning it in a way where it's going to be hard to decouple those things.
Yep, I agree. And right now, I mean, everybody could predict this was going to happen: it was going to screw up, and everybody was going to pile on Google. And now, supposedly, they're running around trying to erase the bad answers. They can't.
And I was thinking about it. It's not as if they can say, well, we solved that problem. Because of the randomness built into generative AI, they can't even predict where this is going to happen, and thus they can't stop it from happening. It's a cool tool. It's a powerful tool, but not for this purpose.
Yeah, yeah. And I will also say, and I think this is usually my retort on the whole search-and-AI thing, that I do use Perplexity. Many would consider Perplexity to be a search-and-AI integrated experience, and I suppose they're similar products, in that Google is taking multiple sources of information and summarizing based on your search query, and that's what I use Perplexity for. I don't encounter this kind of stuff, at least to this degree, with Perplexity versus what I'm seeing from the examples with Google.
Are you using Perplexity the same way you would use Google? Yeah, yeah. Right? You're asking different things.
Yeah, totally. Sometimes I'm going to it with a very specific problem to figure out or something.
That's kind of better. Whereas if you were asking a search engine where I know I'm going to find recipe sites, you're not expecting to find recipes with rocks in them. Yeah, yeah.
Yeah, it's interesting that some of those sources are from The Onion.
Also, one thought on why the pizza came up with glue: I think Leo said this on TWiT last week, that's how food photography is sometimes done.
It's a trick there. One of the other examples was an Onion thing. You'd think they could just restrict it: never answer with The Onion. But one of the answers, I think it was the one about the rocks, was an Onion story. It wasn't necessarily answering with The Onion itself; somebody had rewritten the story for SEO purposes to fool Google, and it appeared above the Onion story as if it were legit. And it had nothing to do with what that site usually wrote about, but everybody's trying to fool Google. That's why the web is messed up. Right, yeah. It's not like Google broke the web. Everybody broke the web trying to get Google to pay attention to them.
Yeah, right. You're absolutely right. And of course, Google is also saying these are outliers; by and large, the majority of the results are high-quality information. Oh, yes. They are. And that's probably true. But all you need is one. Right, exactly. So much of what we talk about in big tech stories comes down to the scale at which they operate: even if only 0.01% of results fall into this category, that still equals a lot of examples, a lot of people impacted. That's still unacceptable.
It so reminds me of the early days of Wikipedia, where people were gunning for anything wrong. And there was one famous episode involving John Seigenthaler, a former editor, well-known in the industry.
People knew him in journalism. And I forget exactly what, but his entry had something noxious somebody had put in there. A human being put that in, right? Well, everybody for years cited that as proof Wikipedia is unreliable.
Finally, the volume of correctness on Wikipedia outweighed Seigenthaler's bad entry. But that took a long time. Yeah, a long time to undo. So do you think Google's going to stubbornly stick with it as is, up there?
Or do you think... Yeah. I was thinking about that this morning. I was like, could I see them reversing course? My gut tells me at some point they reverse course a little bit, if things continue to go down this route, where people are continually being presented with things that are actually harmful and dangerous. Which, it's AI; AI is imperfect; that's going to continue happening.
I don't know. I could really go either way. Right there, my gut told me that at some point, they're going to hit a breaking point. They're going to have to at least pull it for a short period of time to make some corrections and then put it back up. I don't think they're going to get out of it entirely.
Let's just say that. I think they're going to continue to lean into it. But they might have to make some corrections and be like, all right, all right, fine. You say that this is unsafe, we shouldn't have it. We're pulling it back to the drawing board. And in a couple of weeks, we'll have a more refined system that won't do these things as much.
The other problem is, like one of the links you put in the rundown said, that it's getting everything wrong. No, it's not. But some of it's also just banal. The weird thing to me is I can't figure out when it's going to give me an AI Overview and when it's not. There are times when I think it would be appropriate, and it doesn't.
And other times... I just put in, because I'm flying to California next week, what's the best way to get an airline upgrade? Well, here it gives me one. But it's banal. Join a frequent flyer program. I hadn't thought of that. What?
Use miles or points. Can I? Whoa, whoa, wait a minute. Slow down. Fly solo. Sorry, hon. Dress nicely. Haven't really heard of that.
Volunteer for an oversold flight, and ask. It's banal, right? It's not even useful. The truth is, all it's going to do is give me otherwise stupid stories people wrote, with the same stupid ideas. All it's doing is recycling this. There is no good way to do it; the airlines are out to screw you. And so it's just a stupid sandwich, but enough. Yeah. A stupid sandwich.
I am hungry, but not hungry for a stupid sandwich. Put a little mustard on it. It'll be fine. Oh, Dijon, though. Yeah, absolutely. Absolutely. I didn't think you were a French's guy, no. No, definitely not.
I mean, in a pinch, fine, but yeah. Musk's xAI announced a Series B funding round of $6 billion from investors, almost one year since its launch in July 2023, with the aim to, quote, bring the company's first products to market, build out advanced infrastructure, and accelerate R&D. And of course, Elon threw in something about free speech and all that kind of stuff. So there we go. Elon's going to continue with this stockpile of money to work his way toward his prediction of AGI by 2025. And it's not just the money, right? There's The Information's report about a supercomputer that they're building by fall of 2025: 100,000 NVIDIA GPUs, a gigafactory of compute that's at least four times larger than the largest AI clusters seen today. Which, wow, that's a lot. So I just... Can you imagine... Well, let me ask you this question. If you had lots of money
and you want more money, because people with money get more money, so it's not an issue of putting food on the table and you've got money to play with, would you ever, and you're not on an AI show, you're not doing this, would you ever invest in an Elon Musk AI company?
No. I mean, on a personal level, I wouldn't. If I was a business person... yeah, that's just not the world I operate in, I guess. It's hard for me to know, because on a business level, it seems like people with lots of money still take chances with someone like Elon Musk, because he does have a lot on his resume that is successful from the outside. For sure. Right.
Absolutely. No question. So Sequoia, Marc Andreessen, and Saudi royals invested. I don't think they passed the IQ test. I would not invest in Elon Musk. And then all this stuff about, because he's all about macho, of course he's going to have the biggest, hugest computer you can imagine. I think we're way past the idea that big is better, that size matters in this stuff. No, quality and control are going to matter. And I think Elon's coming late to that, but who knows? Yeah. Yeah.
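For a rough sense of what a 100,000-GPU gigafactory of compute would mean, here is a back-of-envelope sketch. The per-GPU throughput and utilization figures are assumptions in the ballpark of current H100-class hardware, not reported specs for this project.

```python
# Back-of-envelope: aggregate compute of a hypothetical 100,000-GPU cluster.
# Both figures below are assumptions: ~1e15 FLOP/s is roughly an H100-class
# chip at dense BF16, and large training runs often sustain 30-50% utilization.
gpus = 100_000
flops_per_gpu = 1e15      # assumed peak throughput, FLOP/s
utilization = 0.4         # assumed sustained utilization

sustained = gpus * flops_per_gpu * utilization     # FLOP/s across the cluster
frontier_run = 1e26                                # rough scale of a frontier training run
days = frontier_run / sustained / 86_400
print(f"~{sustained:.1e} FLOP/s sustained; a 1e26-FLOP run would take ~{days:.0f} days")
```

That prints roughly 29 days: enormous, though as Jeff says, quality and control may matter more than raw size.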
There was also a little back and forth over the weekend that I paid only partial attention to because I was busy enjoying my holiday weekend here in the US. But Elon Musk took to X to assert that xAI is about, quote, understanding the universe, which requires maximally rigorous pursuit of the truth without regard to popularity or political correctness. To which Meta's AI chief, Yann LeCun, responded that xAI was driven by a boss who claims that what you are working on will be solved next year, no pressure, claims that what you are working on will kill everyone and must be stopped or paused.
Yay, vacation for six months. Claims to want a quote, maximally rigorous pursuit of the truth, but spews crazy ass conspiracy theories on his own social platform. And the jabs went back and forth and back and forth. So that's the AI drama.
LeCun also said that if it's not published, it's not science. Musk came back and said, well, what have you done? What, what, what have you done? LeCun came back and said, I've published 80 papers.
Gary Marcus came in and said, well, publishing alone isn't enough. Everybody's going after everybody else; it's like a locker room. I'm definitely on team LeCun here, but that's fairly obvious. Agreed. Agreed.
Anyways, if you like the drama, there you go. Sometimes the drama is fun. Sometimes it's like a heavy eye roll. And usually with Elon Musk, I would say for me, it's usually a heavy eye roll. Yeah. OpenAI says it is actively training its next major AI model.
Let's see here. And it's assembling a new safety and security committee to evaluate the company's efforts over the next 90 days. 90 days is what the article said. So I don't know. Does it go away at the end of that? I don't know.
I do believe that Sam Altman is on the, yes, safety committee.
Indeed. Indeed. So the safety committee that is overseeing all of this has Sam Altman and board members Bret Taylor, Adam D'Angelo, and Nicole Seligman. I'm sure there are others in there, but basically this really only appeases people who already trust OpenAI at this stage. Yeah.
And I will always replay now, I might as well just put it in a cart and hit play: the definition of safety, when you're dealing with people who believe in AGI and with doomsters, is a compromised word. So I don't know what they mean when they say that they have a new safety plan or a new safety committee, or who's on it and where their worldviews are. It's hall-of-mirrors time. So, yeah.
OpenAI says that the new model they're working on is going to bring us to, quote, the next level of capabilities on our path to AGI. Someone's going to write a book titled Path to AGI, if it doesn't exist already, because they're all saying it: on the path, on the march to AGI.
I finished Mustafa Suleyman's book, The Coming Wave. And, you know, he co-founded DeepMind, which Google acquired. He's now the head of AI at Microsoft. He's a smart and very accomplished guy, but even he goes on about AGI. Among certain people, it's gospel. Among others, it's a joke. I'm on the joke side.
Yeah. And we actually have something we'll talk about a little bit later. It's almost like the first half of this show is the newsy, big-headline things, and the second half is a little more, I dunno, in the clouds: less newsy, more talking about loneliness and sentience and things like that.
So we will get there for sure. Helen Toner, who's a former board member of OpenAI, spoke out on why the board moved to oust Altman last year, that crazy week that shook the AI world. She said the board had lost trust in Altman once he neglected to tell them that he owned the OpenAI Startup Fund, and also that Altman gave incorrect information on safety practices within the company.
And he attacked Toner for a paper that she wrote. There you go. Okay.
There's that too. Once they decided the company needed a new CEO, she said, and this is actually, by the way, on a podcast, the TED AI Show, for May 28th, so you can hear all of this: it was very clear to all of us that as soon as Sam had any inkling that we might do something that went against him, he would pull out all the stops, do everything in his power to undermine the board, to prevent us from even getting to the point of being able to fire him. And so they had to act swiftly and secretly.
Deceptive and chaotic were her words. On the same podcast, Bret Taylor, who's now the board chair at OpenAI, came back saying they were disappointed that Ms. Toner continues to revisit these issues. Well, sorry, but we've got to revisit them, because we never heard the story. We didn't visit them in the first place.
Um, and I think there was some belief that they're just like, shut up. Could you just stop talking about it? So people will forget.
Right. And we never got a report from the law firm that was hired to audit all this. They just kind of said, oh, everything's okay. It's fine. And Sam's back in charge. It's fine. And Sam's doing safety.
It's fine. And Sam is raising, what, seven trillion dollars, and it's fine. No. It's not a public company, so you can argue we have no right to know anything. But given the power that it has, and how much influence it has, and the way it is, for example, choosing to put money on certain news publishers and not others, it's using its influence in ways... I think they should be much more open, to use the title, and they haven't been. So I'm glad she spoke out at last. I think it's great.
Yeah. Yeah, for sure. For sure. And then, totally related to this: Jan, is it Jan Leike? It's Jan Leike, right? I think so. Who left OpenAI. Apologies if I'm getting your name wrong; if you'd like to come on the show and talk with us directly to tell us, you've got an open seat at the table. He left OpenAI earlier this month over concerns about the company's poor approach to AI safety. Now Jan has joined Anthropic to lead that company's superalignment team, having led the superalignment team at OpenAI prior, reporting to Jared Kaplan, Anthropic's chief science officer.
Leike said on X that the team is going to focus on AI safety and security, quote, scalable oversight, weak-to-strong generalization, and automated alignment research. Those are some phrases there that I feel like I need to know more about.
Yeah, I look at the word alignment, let alone superalignment, through the same hazy lens that I look at AGI. The machine has no sense of meaning. How can it have a sense of ethics?
How can it have a sense of mission and alignment? It can't. It simply cannot. And if we fool people into thinking that it can, it bites two ways, right? It bites the company, because it's going to disappoint.
Sure, people can get it to do crappy things. But it also sets an expectation that, I just don't think, is true. And so, I mean, did we stop the machine from destroying all mankind? Oh yeah, good job done. Right.
But it wasn't going to anyway. And so the whole set of expectations about what these machines can do, I think, is terribly skewed, and this kind of language skews it more. Anthropic is filled with a lot of AGI folks, and so, once again, I look at them with a side eye. Yeah.
I mean, that's a good side eye. I like that. Chilled me to the bone is what it did. Uh, you don't want Jeff Jarvis giving you the side eye folks.
It all goes downhill from there. Yeah, well, Anthropic has, yes, been known, or rather has made themselves known, from early on as kind of the safety-conscious AI company. Like: we're doing it with safety first, safety at the forefront. But...
Which, in spirit... I mean, they have a constitution, and then they did an open thing where they had people contribute to that constitution, and that all sounds good. But it's in this framework of thinking that you could teach the machine how to be a good boy, when it doesn't know anything.
So I don't have the story in the rundown, but is it Anthropic that is doing the work? Maybe it is this story, actually, that they're trying to work to understand the black box.
Yes. Yes. Anthropic did that, which is good, where they were trying to analyze which neurons fired. Okay, for those of you on video, I'm putting my hands near my head as if it is a brain, and of course it isn't. I'm doing it right now: I'm anthropomorphizing. Your head has a brain, just to be clear. My head has a brain. The AI does not.
I'm not so sure. Um, but the AI of course is a neural network. And so that's the structure of it. That's what enables all this wonderful stuff to happen. And so it's those connections, those flashes of neurons, and they've started to break into it a little bit to understand it. One of the disturbing things about AI to the public is that you don't know why it does things. We can't explain. Explanation is gone.
And we're used to explanation, or at least the belief that we could explain things. It's been a while since I've done a plug: I talk about this at some odd length in The Gutenberg Parenthesis, where the machine predicts, but it doesn't explain why. David Weinberger wrote about this, and a guy named Alex Rosenberg wrote about this. And so there's been this effort to make AI explainable, and it's not, because it's so huge, and you don't know where the connections are and where this goes. But Anthropic, I think, did good work here to start the process of asking: what can we understand about the regions in which certain decisions are made? Which word to put next, how it categorizes things, the connections among things. It doesn't have any sense of meaning in your topic, but it does put things together in their relationships. Pretty soon in the paper I was lost, but I think it looks like good research, and a good way to start to better understand how an LLM simply operates. We'll never fully understand it, but we can have some better sense. And then maybe there is some chance to do slightly better guardrails. Yeah.
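The technique in Anthropic's paper is dictionary learning with sparse autoencoders: train a wide, sparsely firing layer to reconstruct a model's internal activations, so that individual learned features tend to line up with recognizable concepts. Here is a toy sketch of that idea in PyTorch; the sizes, penalty weight, and random stand-in activations are illustrative assumptions, not the paper's actual setup.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Toy dictionary learner: reconstruct activation vectors through a
    wider, sparsely firing feature layer."""
    def __init__(self, d_model=512, d_features=4096):
        super().__init__()
        self.encode = nn.Linear(d_model, d_features)
        self.decode = nn.Linear(d_features, d_model)

    def forward(self, acts):
        features = torch.relu(self.encode(acts))  # mostly-zero feature activations
        return self.decode(features), features

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
acts = torch.randn(64, 512)  # stand-in for activations captured from one LLM layer

for _ in range(100):
    recon, feats = sae(acts)
    # Reconstruction keeps the features faithful to the model; the L1 term
    # pushes most features to zero, so the ones that fire are easier to interpret.
    loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```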
I mean, the reason that came up for me is, you know, they're kind of talking about how we introduce the concept of ethics into these systems, when it's really about word prediction versus theories or an ethos or whatever. And does understanding how the black box works actually explain things, in the way that looking at a line of code gives us an idea of how we can go in there and change things so we get it to do exactly what we want? Does understanding the black box give us similar keys?
You know, as you're talking, Jason, something like ethics... I think I would say maybe it's more, um, now I've lost another word. I'll have to ask Meta AI what this word is. After somebody dies, they look back at stuff; whatever that word is I'm trying to think of. Chat room, can you come up with this word? In retrospect...
Well, yes, actually, that's fine. Where maybe, if you could record all the flashing in the neural network and ask why this answer, in past tense, came out, maybe you could look back a little bit better. I'm dubious about being able to go forward and say, can we prevent certain things from happening? I have my doubts about that.
Yeah. Interesting stuff. Before we hit the break here real quick, I just want to throw out a thank-you to Ozone Nightmare for the super chat, which I believe says: AGI is the synchronicity of the moment, a groundlessly optimistic ideal that gets thrown around as a way to stoke the hype embers of the AI zealots. There you go. Did you run that through an AI to copyedit? Good stuff.
Ozone. Thank you for, uh, thank you for the dollars. Really appreciate that. All right. We're going to take a break. And when we come back, we're going to talk a little bit about Apple. Cause why not?
All right. Mark Gurman from Bloomberg, doing what he does best, offering more insight into what Apple is working on. And this is important because we've got WWDC happening next month. We've talked a few times in the past about the perception that Apple is behind in the AI game. What is Apple going to do? Is Apple going to start leaning into this?
And Mark has some information on how AI is likely to appear in both iOS 18 and macOS 15. Gurman says initial features might not be as impressive as what we've seen from the competition, but they're hoping that in the long term, their large user base is going to give their approach an edge. So in the near term, we're likely to see the standard things: transcription, photo editing, search functionality. And by search functionality, I'm guessing that's more like AI on the device, device search, would be my guess; I don't know that that's necessarily web search functionality, but I guess we'll find out. And then, quote, software that can create custom emojis on the fly. So I think what we're seeing, as I read these features and see what Google is doing with Android and some of this on-device stuff, is that the AI coming to devices right now isn't the kind of thing that's revolutionary for everyone immediately. I can see how it could eventually get to a point where it's offering features that are just out of this world, really useful things that have an impact on your quality of life and getting things done, that sort of stuff.
But this just feels kind of like, oh yeah, this is like table stakes at this point. Does it do summaries? Does it do transcription? You know, these are all things that LLMs are reasonably good at and everybody already has these on devices. So Apple just needs to catch up.
Yeah. And I think it'll be interesting to watch, because Apple made privacy a feature when it failed at the advertising business and didn't need data from anybody. Now it's going to need some place to learn about its users, to understand them and give them more relevance, and how it does that is going to be really interesting to watch. But creating custom emojis on the fly does not strike me as earth-shattering, Apple.
No, but you know, they had the... didn't they have, was it Bitmoji, or was that Android? I can't even remember. iOS has its own kind of emoji system that you can create with. And so I guess this ties into the doing-fun-things-with-your-phone side of things.
Not necessarily the changing-your-life-in-revolutionary-ways side. But again, it's like opening up DALL-E and creating an image that you couldn't create otherwise. It's fun. It's not earth-shattering. It's neat that you can do it, and some people will certainly lean into it because it's fun, but that's about it, would be my guess. And then down the line, I would be really surprised if Apple doesn't go down the route that we see Google going down, which is leaning into the multimodal capabilities of AI: seeing the smartphone, the device that you have in your pocket, as a way to get answers about things that appear in front of you in real life, with less friction than it would take to punch out a question in a search engine or whatever the case may be.
You know, I was thinking about this the other day: we've all ended up in voicemail jail, where we're going through, tell us why you're calling today. No. No. Agent. Agent. Right. Yeah. And those systems are going to get more and more ingrained. To some extent they may get better, but they're not going to have the authority.
I think people are going to get so fed up with dealing with AI agents like that, that there might be a bit of a backlash on phones and devices. Like, give me real people. Let me get to the real stuff here. Stop with all this, because it becomes a layer on top of things. It's like whipped cream on your sundae. Just get me to the ice cream. Thanks.
Yeah. Well, you lost me there. I do like whipped cream on top of a sundae, but I see where you're coming from. Not everybody does. I like them both.
Maybe not equally, maybe the ice cream a little bit more than the whipped cream, but the whipped cream is a nice bonus. Gurman does also say that Apple has signed a deal with OpenAI to bring its chatbot to the platform. I'm wondering if we're going to see that next month. Does that mean Google lost that race? It doesn't necessarily mean that; Gurman does point out that Apple is still working on a deal with Google as an, quote, option. So I don't know what that means.
Like, you get a choose-your-browser window, and then you get to choose your AI window? I doubt it's something like that. As a fallback, maybe. Maybe it's because Apple feels like they need to protect themselves and not put all their eggs in one basket. So figure something out with multiple providers, and then you're not totally locked in if OpenAI goes totally off the rails or does something really, really stupid.
It's amazing; the money is flying around here in weird ways. There was just a story right before we came on, related but separate: OpenAI did a deal with Vox and The Atlantic to give them money for content. And we have, before that, the Associated Press, the FT, News Corp, Axel Springer, right?
They've all gotten money. And in this case, the money's going, I presume, the opposite way, where OpenAI gets some money from Apple. Though who knows; I don't think it'd be very much, because OpenAI wants the positioning. And on the news part of this, it struck me: does OpenAI need Atlantic thumbsuckers?
No. Not to train, and not really to serve. So why are they doing this? They're doing this for lobbying and PR: don't sue us. Don't legislate against us.
Don't lobby against us. Right. And meanwhile, if you're going to give money to news, the place that needs it more is local news. The kind of specificity of news that users are probably going to want would come from local news.
But that's not who's benefiting here. And so it's a weird kind of pre-bubble time, when money's going in all kinds of weird directions, but I'm not sure, in the end, it reflects actual value. Hmm. Yeah. Yeah. So we'll see.
I mean, I have no idea, and we'll never know, what the deal between Apple and OpenAI looks like, but it's a little hard for me to figure out who benefits more in that deal. Well, yeah.
And, hmm, kind of picking up what I mentioned a few minutes ago, as far as the perception that Apple has been lagging in AI, and going back to the beginning of the show, where we were talking about the perception of Google lagging in AI: is a deal with OpenAI...
Yeah. Is that kind of a deal with the devil for Apple? Like Apple admitting: we would love to be in control of this destiny, but we can't, and we don't, and we aren't.
And so we have to make this deal. And that's probably why they're keeping their options open with Google, because I don't think Apple wants to back itself into a dark corner.
You know, on the other hand, as you're talking, it could be that they present it like a choice among default search engines. And then maybe Apple doesn't get blamed for a bad answer; OpenAI or Google does. Maybe it's a way to kind of stand back, which might be smarter. But we'll see.
Right. Right. We know these systems are imperfect, but don't blame us. We aren't the ones... you're the ones that want the chatbots. Okay, so here you go. Here's your pupu platter of chatbot options. You choose. That's on you.
We don't do that around here. Jeff, you included a story about the California Senate passing some AI regulation, SB 1047, which really seems to put the crosshairs on AI models in order to control their size and their capabilities.
Tell me a little bit about this. My bottom-line fear here is that it cuts off open source, because it's going to put such responsibility on the model makers that only the big companies will be able to afford, frankly, the liability insurance to deal with it. So the bill creates a new regulator, the Frontier Model Division of the Department of Technology. Any model that's more than 10 to the 26th FLOPs is subject to regulation; so it's the bigger models, let's say that. But if a model is trained with less compute and hits lower benchmarks, well, it's kind of okay. But again, this is about the training.
So if you do an open-source version of Llama, I would presume that counts. Well, here's the deal: if you want to train a model that could conceivably fall into any of these categories of size, you have to sign a document, I'm reading from a post by Dean W. Ball, you have to sign a document under pain of perjury, which is a felony, promising the frontier model is safe. There is no way to guarantee that.
Let me say it again: there's no way to guarantee that, because you cannot possibly predict what every imaginative, malign user could make of the AI model, to do something bad or something accidental. There's no way; you can't guarantee it's safe. And now you have the California government coming in saying that you must. And what this does in the end, according to some, and it doesn't specifically mention open source: if you're a university, if you're a startup, and you want to train a model that goes past this threshold, then you're up crap creek if you don't have the resources, the insurance, to verify. But it's also a felony at an individual level. You're putting yourself at risk.
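For a sense of where that 10-to-the-26th line sits, a common back-of-envelope from the scaling-law literature estimates training compute as roughly 6 times parameters times training tokens. The model sizes and token counts below are illustrative assumptions, not figures from the bill.

```python
# Rough check of which training runs would cross SB 1047's 1e26-FLOP threshold,
# using the common ~6 * N * D approximation for training compute.
def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

THRESHOLD = 1e26
runs = {
    "70B params on 15T tokens (Llama-3-scale, assumed)": training_flops(70e9, 15e12),
    "1.8T params on 15T tokens (hypothetical frontier run)": training_flops(1.8e12, 15e12),
}
for name, flops in runs.items():
    status = "covered" if flops >= THRESHOLD else "under threshold"
    print(f"{name}: {flops:.1e} FLOPs, {status}")
```

By that rough estimate, today's biggest open models sit under the line, while the next generation of frontier-scale runs would not.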
So this is... I suppose it's well-intentioned, but it's incredibly ignorant as to how this world works. And my, sorry, I always fall back to Gutenberg, my analogy here is that Gutenberg couldn't possibly have said: well, Johannes, you can put your machine out, but just guarantee no Martin Luther could ever use it to do bad things. Guarantee it won't undercut the Catholic Church.
Guarantee it won't start a Reformation and the Thirty Years' War. No, it's a general machine. I don't say it's artificial general intelligence, but it is a general machine in the sense that it can be asked and told to do anything. A gun does one thing.
It fires a bullet, which you can use to kill a deer or a human being. This machine can be made to do no limit of things. And that's the point. And that's what also makes it impossible to regulate this way.
And in the public policy debate, we've got to get our heads around this. So, thank you for indulging that rant moment. I'm going to be in Sacramento next week, as it happens, to talk about the news protection legislation that I've been railing against. I wrote a long paper about that, and I could find myself perhaps even talking about this legislation. Yeah. Interesting.
Uh, are you willing to share what the name of the thing is for anyone in the Sacramento area? I don't know.
I'll be knocking on doors of legislators, which is weird. So I did a long paper, commissioned by the California Chamber of Commerce, on the California Journalism Preservation Act, which I think is bad legislation. I spent 41 pages explaining exactly why I thought that and gave a bunch of alternatives. If you go to my Medium, you'll find the entire text there; that's not the way I'd read it, but there's a link to the PDF. And it touches on AI in the sense that it touches on questions of copyright and fair use. What we see is the news industry constantly trying to expand copyright; fair use and AI is one context, and trying to tax the platforms is another.
So, excellent. So this is, if you go to Medium... do I have the right article up here?
I plug it in lots of other pieces. This one is about a piece of legislation; the headline is "Bad to worse." That was one from a senator in California that just died on the vine, but there's another piece of legislation.
The one I write about is still going on. The paper has some interesting, I think, history on copyright, on the consolidation of the news industry, and on the issues of fair use. So that, I think, might be interesting for some folks. You can skip sections.
Cool. Well, we'll have to talk about that next week; I'll be fresh from Sacto. Yeah, indeed. Indeed. And we'll have to figure out if you fit in the studio with me.
Well, my big head may not get through the door. In my old reporter days, we called Sacramento Sacto, to be actual Californians. Yes. Yes.
No, I don't really hear people calling it Sacto out and about. I've heard it, but it's not like San Fran, which you do hear.
And anyone from San Francisco will tell you that they shudder when they hear San Fran, or Frisco. You know, I wonder if it's the same thing with Sacto. Yeah. It's the city. Yeah.
Go to the city. Yeah, exactly. Yep. Well, speaking of open source: Jeff, you have been saying for a while that Meta's open-source approach is potentially a redemption story for Meta, for the company. It turns out, also for Zuckerberg. In The New York Times, Mike Isaac writes that Zuck, for many years, could do no wrong... or could do no right, sorry. For many years could do no right. There we go.
That's the right way to say that. But now, by open-sourcing Meta's AI and making it freely accessible, he is becoming an AI champion. It's almost like a glow-up, or a "Zuckaissance," as it's put in the article, which is a weird word. The article points out that Joelle Pineau, who headed up AI research at the company, pushed for the open approach as a better long-term play for the company, and Zuckerberg agreed, calling it good business. And to an extent, this current focus on open source around AI means that at least a certain category of people who are dialed into the AI ecosystem see Zuck as kind of an open-source hero.
Yeah, well, I think a few things make Zuck more popular. Who saw that coming? Right. One is that he's championing open-source AI, which I think is very important. And two is he's not Elon Musk. I mean, people saw how
bad Elon Musk can be as a mogul. The demon of the moment is Elon Musk.
Yeah. And so you think, don't you miss Mark? And I think it's true. You know, it's funny. I just saw a stock story come by, one of those ridiculous things that comes in my feed: the two stocks you should buy if you want to buy into AI, and Meta is now one of them. So he's really made himself into an AI company, which is interesting now. But yeah, I think this has been a clever strategy.
Open source hero, Mark Zuckerberg. Plus he changes wardrobe and his haircut and that helps too.
Yeah. Yeah. I've seen some of his clothes recently; he's looking pretty dapper, depending on the situation. Yeah. See, I idolize Mark Zuckerberg now. My hero. Got to make some room on the wall. I don't think that'll get you many drinks in the bar. No, I don't think so either. And... AI as a cure for loneliness.
What did you think? Well, you know, I think it can sound kind of counterintuitive, or maybe even a little dystopian, but this Guardian article was, I think, a really interesting read in sharing the alternative view. The article makes the case that worldwide, loneliness is an epidemic, and that people are craving human connection and not getting it from humans a lot of the time. And so the article talks to a lot of people, some professors, some researchers, to illustrate that the interactions people are craving, they might not be able to get with humans, but they might be able to get something along those lines with an AI.
It's not perfect. It's not going to be a truly human interaction, but it might help them feel less detached. It might help them practice the skills that would bring them back into human contact. And, I don't know, I get where it's coming from. I do think that for some people, this would make a lot of sense. And for some people, it doesn't matter how much sense it makes for anyone else; they're going to look at this and say: this is horrible for humanity. And I don't think you can make everybody happy in this regard. Yeah.
The story kind of disturbed me a little bit, because it is the ultimate anthropomorphization, that this machine is a companion. But I'm not sure I buy the idea that we have this epidemic of loneliness.
That was one question I had. Like, where does that come from?
Yeah. So I'm not necessarily buying that. Certainly there are lonely people in the world. Absolutely. And in a sense, I may be wrong, because a novel can make you less lonely. Music can make you less lonely. A good movie can make you less lonely, in that sense: it occupies your mind. It connects you with ideas and creativity and so on.
So maybe it's okay. I just watched a presentation by Lev Manovich, who's a brilliant digital-humanities artist at the City University of New York. He's doing a lot of work right now on AI and aesthetics.
And he said something really interesting. He said that 250 years ago, and maybe longer than that actually, a few centuries ago, art and creativity weren't necessarily connected, because there was one creator, and that was God. So the artist was not the creator. The artist was a vessel, right?
And that's a very different way to see the world. And so, believe it or not, this is relevant to AI in one second: he was saying, we ask whether AI can be creative. Well, define creativity.
Our whole sense of creativity has changed as a society over the last few centuries. So I suppose one could come to the same sense of companionship. But if your AI sounds like Scarlett Johansson, don't get too attached. You're going to lose it when she sues. Yeah. You're going to lose it. It might go away from you and never come back.
There was one interesting quote, before we get to our last story, that just kind of made me pause and think a little bit.
Professor Tony Prescott, in the article, said: although AIs cannot provide friendship in the same way as other humans, not all relationships we find valuable are symmetrical. And I was like, well, that's true. We don't look at every human in our life through the same lens; they don't all play the same role. And to that end, when I was reading this article, I was like, okay, I can see that, right? It might not be a one-to-one replacement for a human being, but it might be enough of a needs-filler, or whatever you want to call it, to satisfy what someone is looking for. And maybe there's some good in having that option.
I mean, I'm not a gamer, but isn't that often true when it comes to games? I don't mean multiplayer games; I mean a single-player game, where you play something and, in a way, you're interacting with the creator of that game. Oh, for sure. Yeah. That's a really great point. But what did people say about kids and games? They're not really dealing with real people. Oh no, it's going to skew their worldview. It's going to mess them up. Isolated.
Yeah, they're going to be isolated. No, you're interacting with something. You're stimulating your thought. For sure. Okay. I could see a sense of companionship that I can get my head around beginning there.
Yeah. Very interesting, though. It definitely got my mind thinking in some different directions. As did this next article, which you put in: a time.com article about AI and the idea that it might be, or someday become, sentient. This idea of sentience, that is, having subjective experiences, is a really big part of the concept of AGI that we were talking about earlier, that I alluded to. The article points out that there may be a lot of reasons why people feel like AIs are already sentient, which I think is silly. But those reasons: the AIs speak the language of being sentient, i.e., they speak the language of being conscious. If a human says, I'm hungry, I have no reason to doubt them, even though I don't see that hunger with my own eyes. So why would I doubt it when an AI says it? Could be true, right? But the article says this is wrong for, I thought, a very valid reason.
This was a really good mind-twister for me, and it's very obvious: LLMs know how to string syllables together to express something like, I'm hungry, but they don't actually have the physiology. No body, no chemistry, no machinery that relies upon food in order to live or survive or derive happiness or whatever. They don't have the required physiology for that kind of awareness to be valid.
And yeah, it sounds so obvious. They have no connection to our reality. Right. Right. It sounds so obvious, like, well, yeah, duh, it's a machine. But this article did a really great job of illustrating why it goes too far to believe that could be true. And I'm not even saying it couldn't be true somewhere way off down the line, that we won't build some sort of machinery that includes all of the senses; maybe that's possible, but we're nowhere near that. I don't think I'll see it in my lifetime.
No. This piece was written by Fei-Fei Li and John Etchemendy, I hope I pronounced that correctly, at Stanford, who do really smart work about this. They're not on the crazy end of all this. In my next book, The Web We Weave, coming out in October, I write about the case of Blake Lemoine, which many of our audience will remember: the Google engineer who said that Google's model was sentient, and then tried to hire a lawyer for it and lobby Congress for it, and got fired for this, as he should have, I think. It is the ultimate of anthropomorphization. We imbue in the machine a sense of ourselves. And it's not unlike what Emily Bender, the University of Washington linguist, says. She was an author of the Stochastic Parrots paper: when we sense meaning in the machine, we're the ones who impute the meaning. The machine doesn't have it, but we want to see it in it. And I think the same is true of sentience.
And the same, it goes back to the prior story, right, would be true of companionship. And there is a risk there, because it could be used by a malign actor to fool us and exploit us. Now, I generally believe people are smarter than this. And no, the machine's not going to be able to just make us do whatever it wants to do. A, it doesn't want anything, and B, we're smarter than that. But one could well imagine how some people could be exploited in such a way, and that's the danger of talking about this. So I'm really glad this was... Time magazine tends to be the journal of moral panic about technology, going back to their famous cyberporn cover some decades ago.
I was going to say, when I saw this article in the rundown, I was a little surprised it came from you. I was like, wait a minute. Time? Yeah. I'd written Time off a long time ago.
I have, too. That's why I put it in, because I think it was very good.
I'm glad they did that. Mm-hmm. Totally agree. Well, I have kept you for six minutes longer than I'm supposed to on a Wednesday, because I know you've got to get ready for your next show, and you're a busy person. Jeff Jarvis, thank you so much for doing this show with me each and every week. Thank you, boss. I continue to learn so much talking through these concepts.
And next week, if we figure it out, I don't know if we can. I'm putting a big burden on Jason here trying to figure out all the technology.
But my hope is that we'll be in the same room or building or zip code in Petaluma. Definitely same zip code. Yes. Hopefully, same room. This is not a very big room, but I think I can figure it out. I think I can figure it out.
So, we'll figure that out together. GutenbergParenthesis.com is where people can find the work that you have done already with Magazine and the Gutenberg Parenthesis. When your new book comes out, is it going to hit this site also? Yes, I will figure that out. I have to get my son, Jake, to help me do that. Got it.
I hear you on that one. Well, thank you, Jeff. Thank you, boss. Great stuff today.
I appreciate you. AI Inside, we do this show live every Wednesday at 11 a.m. Pacific, 2 p.m. Eastern on the Techsploder YouTube channel. You just go to youtube.com/@Techsploder and we will be streaming live to the page. I think if I refresh, you'll actually see. We are live right now.
It's kind of like AI Inside live inception. If you went in there, you'd watch me recording this at this very moment. We do publish the show, though, if you can't catch it live, every Wednesday later in the afternoon. So, just go find your podcatcher of choice. You'll find AI Inside podcast there. Like us, rate us, review us, subscribe wherever you listen, and of course, support us directly on our Patreon at patreon.com/aiinsideshow.
That is where you can, you know, throw us a few bones. We offer ad-free shows, a Discord community, regular hangouts with me and Jeff and the rest of the community. I actually did a Zoom hangout with one of our supporters yesterday, Steve, and it was a total blast. But I would love to get more of you in those Zoom hangouts. So, go to patreon.com/aiinsideshow.
At a certain level, you also become an executive producer of this show, including Dr. Du, Jeffrey Maracchini, and our newest executive producer, WPVM 103.7 in Asheville, North Carolina. Did I say that? Is that your real name? Anyways, that's what you had there on Patreon. So, there you go.
I hope I did that justice. Everything you need to know can be found on our website, AIInside.Show. Thank you so much for watching and listening. We will see you all next time on another episode of AI Inside. Bye, everybody.