Unpack the AI noise from Google I/O 2024 and OpenAI's GPT-4o reveal, with Jason Howell, Jeff Jarvis, and guest Mike Elgan analyzing the impact on search, content creation, voice interactions, and the evolving relationship between humans and AI.
Support the show on Patreon: http://www.patreon.com/aiinsideshow
NEWS:
- Google I/O 2024: everything announced
- Google’s broken link to the web
- Google rolls out AI Overviews in US with more countries coming soon
- Google’s generative AI can now analyze hours of video
- Google's Project Astra uses your phone's camera and AI to find noise makers, misplaced items and more.
- OpenAI debuts GPT-4o ‘omni’ model now powering ChatGPT
- Mike's article: We're all talking to AI now
- The case against teaching kids to be polite to Alexa
- Meet My A.I. Friends
Mike's Machine Society newsletter: https://machinesociety.ai/
Hosted on Acast. See acast.com/privacy for more information.
This is AI Inside Episode 17, recorded Wednesday, May 15th, 2024. Google Gemini and the Wall of Noise. This episode of AI Inside is made possible by our wonderful patrons at patreon.com/aiinsideshow. If you like what you hear, head on over and support us directly.
And thank you for making independent podcasting possible. Hello, everybody, and welcome to another episode of AI Inside. This is a show where we take a look at the AI hiding inside of everything. And boy, howdy, do we have an episode that proves that fact. This has been quite the week, and it's only Wednesday.
Those are the weeks that I love because it gives us plenty to talk about and plenty to marvel at. I'm Jason Howell, joined as always by Jeff Jarvis. How you doing, Jeff?
Hey, boss. How are you? Excellent. You're looking dapper. I think I see an IO tan on you.
Well, when we were watching the keynote, we were under the tent, so we weren't out in the sun, thankfully. But then throughout the day, you're walking around and you're checking out things. I do have, though, a water mug, so there's that. It's a pretty stylish water mug, I got to say. It's no device. It's not getting- No, when I was there, you just get a phone. Yeah, no phone this year.
I was wondering if there were going to be 8a's handed out for review and stuff, and that didn't happen. Anyways, it's all good. Being there was the gift enough, I guess.
Sure, we'll just say that. Coming to this show here in a moment, we have a really special guest, someone who I enjoy getting the chance to podcast with. I'm going to bring him in in a moment, but before we get started, big thank you, huge thank you to the people who help us do this show, who support us directly via Patreon.
That's patreon.com/aiinsideshow. Brad Richwine is one of our amazing patrons. Brad, thank you for supporting us as we do this independent podcast.
And yeah, all of you who support, we literally couldn't do it without you, so thank you. All right, so bringing on to this episode, and boy, this is the perfect week to have Mike Elgan on. Mike is the author of Machine Society on Substack. I mean, you've had a Substack for a very long time. You've recently kind of redesigned it and renamed it and doubled down on the AI implications of everything.
Absolutely. So essentially, this newsletter has existed for 24 years. I started in 2000. That itself was a recreation of a newsletter I did for Windows Magazine. But when I went independent, I've been doing it. It's called Mike's List. It was a terrible name. Back in the day, Craig Newmark influenced everybody back then. Nobody remembers this, but in 2000, everything was a list because of Craig's List.
And so I had Mike's List, and I always hated the name because it was dated. And at some point, I'm like, what is this really about? And what it's really about is human beings, human culture, human society, and how machines, how AI, how next-generation interfaces like spatial computing are and will change us. And so I changed the name, and there it is after 24 years.
Love it. Excellent name. Machine Society tells you exactly what you're looking for. And it's so perfect for this moment, especially for today. I mean, there is hardly a news story in technology right now that doesn't have something to do with AI. And maybe that's a hint of just the saturation of this idea of AI in everything.
That's right. And that's true on the funding side too. So if you're a startup and you're not touching AI in any way, good luck. It's really tough. You better have it in there somehow, somewhere. Right. And there's a lot of AI washing as well. You have to throw AI in there, whether you have AI or not. And it's easy enough to put AI in now thanks to the APIs that are available to everyone. But still, it's really eating the world, so to speak.
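As an aside on how low that bar has gotten: here is a minimal sketch of what "putting AI in" via a public API can look like, assuming the OpenAI Python client and an OPENAI_API_KEY in the environment. The product feature and prompt are placeholders for illustration, not anything discussed on the show.

```python
# Minimal sketch: bolting a generative-AI feature onto an existing product
# through a public API. Assumes `pip install openai` and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_feedback(feedback: str) -> str:
    """Toy 'AI feature': condense a customer comment into one sentence."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Summarize the user's text in one sentence."},
            {"role": "user", "content": feedback},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_feedback("The app is great, but the login flow is confusing."))
```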
AI washing, putting AI into everything is the perfect segue for, I think, what is going to probably occupy the first half of this show. And just so everyone knows, I was down in Mountain View yesterday. I was invited to go to Google's... Why am I suddenly blanking? Google IO conference. It didn't have AI in the title. That's why I couldn't remember.
It should have been Google AI/O. I'm surprised they didn't change the name to Google AI/O. No kidding.
They really could get away with that if they wanted to. Had the opportunity to talk with Dave Burke and Sameer Samat from the Android team for my other podcast, Android Faithful. So you can check that out if you like. A wonderful interview with them where I was behind the scenes. I got to do the work of the twit engineers and be the tech guy while the rest of the hosts interviewed.
So it was a nice change of pace. But anyone who watched the keynote, if all you saw was the keynote from yesterday's event, Sundar Pichai called it out at the very end of the keynote: if you were playing bingo and you had AI or Gemini on your card, you were hammered. You were probably dead from alcohol poisoning by the end of the keynote.
It was about nothing else but AI. When you get down to it, absolutely everything was AI. Google's been saying for years that they're the AI company, they've been doing AI, they have AI and everything, but they never got credit. Along came Microsoft and OpenAI and ChatGPT and everybody forgot everything that Google had done in translation and search. But yesterday it was clearly, this is the AI company. Yeah.
Oh my goodness. Yeah, for sure. I will say, just as a footnote to the event, since you mentioned the keynote: Google's really got to learn from Apple about how to generate pretend excitement for their events. What Apple does is, first of all, they invite the journalists who are superfans; they don't invite the critics.
The second thing is they've got ringers throughout. They have Apple employees, people on the team, in the audience who cheer on cue. Yeah, exactly. I think they're also micing the audience. It just seems that way even when they're doing these videos.
Google's just playing it straight. They don't have a bunch of people deliberately clapping and all that kind of stuff, and they don't mic it correctly. When they announced Gems, for example, it sounds like there was one guy clapping and then everybody politely chimed in. Wrong energy, because they were announcing a lot of groundbreaking stuff.
They were. Being there, I felt a different energy than I think people watching it on a screen did. I think you're absolutely right. I think one of the big challenges, and maybe this is more a logistics challenge than an excitement-about-announcements challenge, is the fact that it's Shoreline Amphitheatre. This is a stadium that requires a band like Radiohead to come in and fill it in order to feel that noise and energy.
You put a tech conference in there. There was noise while we were there, but it takes a lot to fill that place. Maybe they just aren't micing it correctly. That would help generate some excitement.
The other problem is that they're developers. They're not event coordinators. They're developers. Developers are not exactly the most excited audience sometimes.
Indeed. Before we get into what I've been calling the wall of AI noise that was yesterday, because at a certain point, it just became noise. It's like, this is all amazing. I'm here for it, but it's really hard to look at that wall of AI noise and pull out, that's the thing. I did hear a few quotes that I jotted down as they passed by. One was, I can't remember who said it, but on the path to AGI. It was like within the first 10 minutes as we venture on the path to AGI.
I'm not sure this is the path. No, AGI is BS. I'm sticking with that. We're not going to be there. There's no such thing. The thing can't add. It doesn't know a fact. It doesn't know meaning.
It still does amazing things. That's the problem. They've set themselves up for a fall with this incredible expectation that it's going to be us. I think the other big thing I saw with both Google and OpenAI is the mistake of anthropomorphization. With the voices they used — as Leo Laporte said when we were watching OpenAI — the prosody that they've amplified, the ums and ahs, the efforts to be human. I think that's in the end going to be a huge mistake because, A, we don't want it to be human. B, it's going to fail at being human.
C, it's going to set the stakes wrong. It can do phenomenal, amazing things. It can transcribe huge numbers of hours of tapes and find interesting patterns. That's all phenomenal. No human can do that. We should be grateful that it can do that. Instead, they make it act like a smart parrot.
I have a theory about that, which is that we tend to give Silicon Valley geniuses the benefit of the doubt, especially the engineers who are working on things like AI. We're like, I can't even understand what you do. It's crazy, complicated stuff that requires PhDs and stuff like that.
We have to remember that these are human beings. When people like Elon Musk want to build a company, or companies like OpenAI want to build AGI, to a very, very large extent they are living out the sci-fi of their youth. That's what's captured them. They just think that's got to be the future, that's the ideal future. I've been criticizing the concept of a humanoid robot for many years, because people think, well, of course we're going to have humanoid robots walking around amongst us. Well, no, we won't want that, really. We'll want it in certain circumstances. The only reason to have a humanoid robot, by the way, is for military applications — you have a robot that has to be able to open a car door and sit in the seat, and you're in a hurry. In general, you want robots and AI in all the stuff, everywhere, all the time, if you want a futuristic society — not one machine that's shaped like a human and walks around with AGI. That is an extraordinarily expensive, energy-consuming, unnecessary, and ultimately undesirable product.
Jason, what impressed you about Google's event as you sat there?
What impressed me? Again, I go back to the wall of noise and how it becomes a kind of diminishing returns at a certain point. I realized that their intention yesterday was to show, hey, we're as big a player as anybody in this world, in this current moment with AI.
We've been there since earlier than most others that you're seeing making the news right now. Check out how we're bringing this into all the products, which ultimately, I've thought that's a pretty good idea. If you've got all of these products and platforms that people are already using, put the AI into them so that people don't have to go places to use it.
That's essentially what they're doing. I find myself getting a little numb in the same way that you remember when Assistant suddenly had the ability to have all of these commands. It was like, wow, Assistant could do all these things. That's really amazing. Then as a user, it could almost do too many things that it was too much for my mind to organize.
I ended up not doing anything. That's kind of where I find myself falling a little bit here. It's like, man, it really is. I don't know where to direct my attention because there's just so much. That's impressive, but it's also oppressive.
Yeah. To the point about Assistants and Assistant appliances like the Amazon Echo, Apple HomePod, and all that stuff, people use them for cooking timers. That's basically what they do. They're playing music.
They use them for three things. Part of that is because it's not AI. They don't have any agency. They don't suggest.
They don't enter in conversations. In the case of something like Siri, you say, well, give me this answer. It's like, well, here's some stuff I found on the web. I'm like, okay, thanks. I could have done a Google search.
That's really helpful. I do think if we want to direct attention, one of the places is the video of Project Astra, which is a series of features that are coming, where the demonstrator walked around with a Pixel phone running Astra and just saying, show me the devices in this office that make noise. It's like, well, that's a speaker.
Identifying parts of the speaker — all very impressive. It's more or less what they fake-demoed last year. This time, it was a real demo, which they made a point of saying.
Well, so they said; I've taken them at face value. And there was really great stuff. Except the difference is that OpenAI is saying, okay, this is coming in a few weeks — in weeks. Google's like, maybe later this year.
I feel like the implementation that Google had, which does a few things that nobody else can do, it's got to be pretty power intensive. I think the illusion that it creates is that you have this intelligent thing paying attention to what it sees and can identify things. Then later, it's like, hey, did you see my glasses when I was sweeping? It's like, yeah, it's right next to the apple.
That was the impressive part. Yeah. Yeah. That was where everyone went, ooh. That was one of the few moments, I would say, where everyone was like, oh, that was surprising. You know what I mean? That was surprising.
It also says that we're going to be using these things — bingo — through glasses. I'm getting PTSD right now, Jason. You know what?
For audio listeners, I'm wearing my Google glass from- With a dead battery or is that battery still viable? Oh, you know, I have not charged it. It's literally a set prop at this point.
I almost wore it yesterday because I just wondered, okay, at the end of the day, that's where all this stuff is really heading. Any of the hold up your phone and point it at the thing and let it translate and do all this stuff. At the end of the day, we are going to want that feature without the resistance of having a panel in between us. It's all heading to the glasses. When we saw the glasses in that Astra demo, that was like, okay, there we go. Game on, let's do this.
Once again, how many times a day do you need to ask, what is that? I think that the interesting part is you see this interface between writing something down, looking at a whiteboard and having it react to or looking at a screen. I think that's interesting.
I am eager to play with the glasses, even though I've spent a fortune on Google Glass. What's interesting to me, going to one of the things you said, Mike — and this is a weird way to put it — is that there's not much of a barrier to entry to development around AI now. That is to say, all these companies, to me, are within inches of each other. It's about scale. Google's big announcement yesterday was really that it can take in more tokens into what it deals with live before you. Is it 2 million? I think so.
That's great. I could put in a whole menu for it to deal with. OpenAI was about speed, trying to be more efficient, use less power, one hopes. But these are all just gradations on a chart. Since Google came out with the transformer and neural networks, they're all pretty much doing the same thing.
They have the resources to have the data and to have the power and the compute to do it. But I don't see much differentiation right now. Do you? Do you separate them out?
There are minor differentiations and some of them are interesting. I want the future to be, I'll take some of this and some of that and so on. Before I move on from the impressive AI behind some of the features they're talking about, I would encourage people, and we're not talking a lot about this these days, but to revisit Notebook LM, which is Google's experimental note-taking thing.
Well, that already has Gemini 1.5 Pro in it. That's right. This is quite an amazing thing. Basically, if you're unfamiliar with it, it's a notebook app, but you can just dump stuff in it. It's multimodal. You can dump PDFs and all kinds of stuff in there and then just have a conversation with your notes and just say, when is this happening? It's pretty amazing.
It's a great practical way, because almost all of us take notes and that sort of thing. One of the chatbots that I use that hardly anybody else uses or talks about much is Pi, at pi.ai. It was the flavor of the month for a week, and then everything's moving so fast. But Pi is really interesting. It's problematic in two senses. One is that it has these very human-like voices. If you thought the Scarlett Johansson voice from OpenAI was a bit much, this is even more so for some of the voices it has. It has this one female assistant voice with — what do they call it? — vocal fry, and very, like, um and ah and all that kind of stuff, laying it on thick. That's a problem.
But one thing that's interesting, that it has that nobody else has: it has a certain level of agency. So I've used Ray-Ban Meta glasses, but running the Pi app on the phone, so I can hear and talk to it through my glasses. Once you are running the chat interface, it's just listening all the time. It will stay on for 10 hours, listening.
It never shuts off, and if you say anything, it'll comment on what you say, even if you're talking to somebody else. But what's interesting about it — at the time when I first did this and first wrote about it, I was in Mexico City's Historic Center, and I'm like, well, tell me a little bit about the history of Mexico City's Historic Center. And it chatted away. And then it was like, by the way, do you know about X?
Are you planning to visit there? And I'm like, well I'm here right now. She's like, oh well that's amazing. You should check out. And it was like, I just had it on, right?
And I was just — it was just there, and it was a really interesting psychological event, really, to just have this disembodied voice. And I think that's one of the directions we're going in. But to your point about the assistants: the lack of agency causes people to never really use them. Whereas if you have something that's encouraging, nudging you — you know, a lot of these chatbots, after you do a query, give you some more options: maybe you should explore this, maybe you should explore that. We're going to be talking to these things a lot, and so we need it to be a conversation and not a one-true-answer type of thing. When it gives us a result, it should sort of have the awareness — not awareness, but it should pretend to have the awareness — to say, well, you know, you should really consider this, because there's this really brilliant guy, Jeff Jarvis, who wrote an alternative opinion on this.
You should consider that. Well, what did he say exactly? Well, and it's, you know, and so that's the kind of thing that will make these super valuable when it's a conversation rather than like, here's the answer because we don't want that.
I got to say the voices I heard, especially out of OpenAI's demonstration on Monday, but also Google's, those are people, fake people I do not want to be next to on an airplane for more than 10 minutes. This faux cheeriness.
Very, I mean, very California centric.
I saw someone on LinkedIn link to a piece about how, early in the days of military aircraft, they put in women's voices to issue warnings. They were just audio recordings that people made for the planes, but they thought this would be a way to grab pilots' attention. Why women's voices? What does it say that this machine we expect to be kind of a servant is being presented as an overly friendly woman? It just strikes me that this came from a bunch of men. Yeah.
Well, I read that today, and one person made the observation that the mostly male engineers are really trying to create the perfect girlfriend.
A little bit of insight there about this. And how many times did you see "her" — just that single word — either tweeted or spoken or anything in the last couple of days, between OpenAI's announcement and what we heard coming from Google? And I will admit — you can retract my geek card —
because I've actually not watched Her. Even though I feel like I know so much about it, I need to actually watch it. But yes, it really does feel like that's a part of what's going on here. And I will also say, Jeff, we've talked on this show about people who do need companionship in some way, shape, or form, and how these tools can be tools for them in that regard. But yeah, it can get weird and murky real fast when you go down that road too, you know?
So if I go somewhere else: the freak-out I'm seeing today, after yesterday, is from publishers and creators online, because Google is now going to show this ability. And I'm nervous about this because, once again, generative AI has no sense of meaning or fact or truth. And it's not hallucination. It's not a lie,
because it doesn't know how to tell the truth. Having said that caveat: we're going to see Google — and I already saw some of it in some search queries I made — come up with its own boxes to give us information.
On the one hand, the media folks are freaking out because they're going to lose traffic. On the other hand, Google did show recipes yesterday with credit to the recipe holders. But then again, how many chocolate cake recipes are all that special? Let's be honest.
Mike's wife might disagree, because she can find really special chocolate cake recipes in Mexico. But what struck me more than anything else is the creators looking at this from their perspective — understandably so: I'm going to lose traffic. If you look at this from the user's perspective: who really wants to be forced to go look for a YouTube video and have to watch it for 12 minutes to see how to tighten one bolt?
Right. And so if Google could just tell us — this is your problem, tighten this bolt — then that is clearly better for the user. But the larger issue to me here is that I have been arguing for some time that the news industry should create an API for its news and its content, and go to the AI companies and say, okay, here's the deal: you can have our current, credible information, but we want you to credit us. And, you know, we want some money too.
Here's the API and here's the key. But what that says is that the entire destination strategy of media, the entire attention strategy of media, goes up in smoke. And so we in media never did figure out the internet, and here we are at a next generation where information is going to be delivered to people in a new way that's going to be better for them. And it's going to be hard to fight against that and say, no, no, no, you've got to hold all that back. And what's happening in the U.S. and in Europe, mainly, is fighting that — suing these companies, expecting bags of money — rather than saying, oh crap, people are going to want this.
How do we fit into it? And I see another worry: the media is in even weaker shape now than it was 20 years ago, obviously, and it's going to be worse. And the legislation and the regulation and the suits are going to come up. And by now, by the way, Google and Meta have said, screw you, news industry, you're a pain to deal with.
We don't need you. And so there's hostility. It's going to be really interesting to watch the efforts to extend copyright, the efforts to fight for fair use, the efforts to fight for the user here. That's what I think, out of the last two days' presentations. Curious what you got.
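To make the proposal Jeff just described a little more concrete, here is a minimal sketch of what a publisher-side content API could look like. Everything here — the field names, the key format, the licensing term — is a hypothetical illustration of the "content plus credit plus a key" idea, not an API any publisher or AI company actually offers.

```python
# Hypothetical sketch of a publisher content API for AI companies:
# credible content goes out, attribution requirements come back with it.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class LicensedArticle:
    publisher: str                          # whom the AI must credit
    url: str                                # canonical link to surface to users
    headline: str
    body: str
    published: str
    license: str = "attribution-required"   # invented license term

def get_article(api_key: str, article_id: str) -> dict:
    """Pretend endpoint: returns current, credible content plus the
    attribution metadata the AI company agrees to display."""
    if not api_key.startswith("news-"):     # toy key check for illustration
        raise PermissionError("invalid API key")
    article = LicensedArticle(
        publisher="Example Daily",
        url=f"https://example-daily.test/articles/{article_id}",
        headline="City council approves new transit plan",
        body="Full article text would go here...",
        published=datetime.now(timezone.utc).isoformat(),
    )
    return asdict(article)

if __name__ == "__main__":
    print(get_article("news-demo-key", "12345")["publisher"])
```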
Yeah. Meanwhile, what's happening — one of the disappointing things about the Google Search component of what they announced yesterday — is that there's actually a real crisis in Google Search, which is that small websites, bloggers, people like that, can demonstrate their traffic declining to as little as 10% of what it was a year ago, because Google Search is just not serving these small sites. Almost certainly that's because people are churning out AI-generated garbage in such huge quantities that Google's search engine is not up to the task of sifting through all that stuff. That's where we need them to apply AI: to do the wheat-and-chaff analysis on this kind of stuff. What they're doing instead is basically saying, AI isn't reliable enough yet, so we're not going to really use it to sort results; what we're going to use it for is to patch together all this useful stuff for you and plan your vacation and stuff like that.
Great. But what we really need is for Google to be smarter. Like Perplexity — Perplexity AI actually has its own version of PageRank that is clearer about separating a reliable source of information from the unreliable. And that's one of the reasons. And then, of course, it links to those sources. So when it makes a claim, when it makes a point, it's like, well, click here to go read the article that we got this from.
That's great. You know, but I would love to see Google worry more about the quality of what you get in Google Search, rather than, oh, you're a consumer — well, here's all these ways you can consume things.
Yeah. One thing that I find interesting about this whole search announcement — and basically, for those who don't know, the generative kind of search experience is coming to everyone. I think it's opt-out, not opt-in, so you're going to get it by default.
And actually, if I remember correctly, I read that you can't even opt out of it for the time being. So once you see it, it's just there. They're basically saying you got to try this and you're going to try it at some point. Your curiosity is going to get the best of you.
And you're going to try it. But one thing — I was reading through a Search Engine Land article by Barry Schwartz, and the article basically says that Google says click-through rate is higher on AI Overview cards than on normal search results. And I just find that really hard to believe. But Google's saying it; I don't know, I can't wrap my brain around that. And if that is true, then perhaps there is something to the idea that the summary isn't the end stop. Because — Janko Roettgers, actually, I ran into him yesterday, and he interviewed me for an article he's writing about the news around YouTube creators and being able to go to a YouTube video and essentially say, I don't want to watch this video, summarize it instead. And he's like, well, how do you feel as a creator?
And I was kind of like, well, you know what? I get it. Sometimes I don't want to sit through 10 minutes if the information's there and it's easy for me to get. But at a certain point it just kind of feels like, if there is no incentive for the person to create the thing, then there will be fewer things created, and therefore fewer things for the AI to pull from. Right. And, you know, the whole value chain falls down.
Well, that's the argument that's made — that the creators must be paid, and so on and so forth. But if you look at what's happening in the music industry, a lot of people create music knowing they're not going to get rich, knowing they're not going to make money. Right? You don't have tons of friends like that, right, Jason? I know you're going to get rich, but —
I mean, I'm definitely creating not to get rich right now. That's, that's right. Proof is in the pudding there.
And so, you know, I think the other problem here is that we — I mainly say media here, but I'll also say the world — ruined the web. We screwed it up. We screwed it up because of SEO, which is more evil than I knew. It meant that everybody had to create the same thing to try to beat somebody else by one spot in a search result. And so the repetition out there is incredible.
It is as wasteful as can be. It's not unique. So where do you go in this new world? I think you have to have unique value — unique information or perspective or expertise. And AI is going to be under more pressure to answer the question: how do you know that?
And yeah, the intent should be discovery and not replacement of a content creator. So for example — and I find myself pointing to Perplexity again; it is doing certain things better than most — they have an option in the pro version where you do a query, like, how do you turn a screw, and then you say: just from YouTube. I don't want to see it from anywhere else, just YouTube.
You can also do Wolfram Alpha and a bunch of others — just the sources that you give it — but you can do YouTube, and that's a major thing. So it will say, oh, you turn a screw like ABC, and again, it gives you the links. And if you are an engaged user — and I don't know what percentage of people really are — you click through and you realize, okay, out of the last 10 things I've asked about home repair, three of them were from this certain person. This person seems to be answering my questions. I'm going to go there and subscribe to their channel, you know? So you want to lead people to discovery of the content creators they really want, and not just say, well, we're just going to take their information and give you that information. You want both, actually: give me the information, and then tell me where you got it.
Just real quick, just to call out an audience member, Ozone Nightmare. Thank you for the tip, or the super thanks, or whatever. He says: as a creator, I remain unafraid. Unique voices with meaningful things to express will be fine. I still feel these things will crest the hype cycle and end up as background tools, not culture destroyers.
It feels funny calling you that. Should I call you Ozone or Nightmare? You can call him Joe.
Yeah.
Yeah, I think the less we look at these things as competitors and the more we understand them as tools, the better off we're going to be. And it has happened time and time before — I'm going to do a Gutenberg moment here, so get your drinks ready. But print was seen that way, and the provenance of it was not clear, and finally the technology became boring and people used it as a tool. It took a century and a half for tremendous invention with printing to come. Same with later technologies. And I think that's true here.
And so we have to look at this. I argue less as a technology and more simply as a tool. I argue the internet is not a technology. It's a human network. AI is not a competitor. It's a tool that we can use for creation and analysis and other things.
Yeah. And there's a metaphor that I like to use. You guys travel — you go to the airport and they have the moving sidewalks, right? And there's some percentage of people who get on the moving sidewalk and just stand there, stopped.
And other people get on and they're Superman, taking big long strides, ripping through the airport. That's how people use AI. Some people want to get to the same place with less effort, and some people want to use it to get there a lot faster. And for the people who want to get there faster, AI is brilliant. These chatbots are already brilliant at that. So the striders — it's making them way smarter. AI is already making us smarter if we're really using it in the right way and we're curious people who are trying to learn things.
You can learn a lot more, a lot faster, and get to the result you're after. But there are a lot of people who are just like, yeah, write this email for me. I'm not even going to read it.
Boom. And it's, you know, human nature — some people are never going to care about this stuff. But for those of us who do, it's just going to be a massive accelerant. And I hope that Google — and I think Google will be one of the players — is really going to do a great job of that.
Well, to paraphrase Clay Shirky — he said this about social media — the tools get interesting when they get boring. Yeah. And I think the same is true of AI. The more it's boring, the less it's over-dramatized, the more it's just inside things. And again, it's inside tons of things we do now. Take the translation tool: it is a hundred times better than it used to be because of the transformer. It's phenomenal.
And that's a killer app. You know, when we're talking about AI feature sets, there's a million different ways you can go, but I feel like language translation is one of the killer apps.
I read the Helsingin Sanomat, a Finnish newspaper in one of the most difficult languages on earth. It's a piece of cake for Google, and it makes it accessible to me in tremendous ways. So that's AI in there. We're using it in new ways. Publishers are using it in new ways. I think once it becomes boring, once it becomes a tool, once the technologists move into the background, then we take it over and we do what we want with it, if we have a purpose for it. Right now, it's in that early phase of: what do I do with this? Because it's so cool, surely I can find some use — and oftentimes we can't. Yeah.
We were in Sitges, Spain a couple of days ago, and my wife and I were at a restaurant, and there was a sign on the wall in tiles — I'd seen it all over town. It was in Catalan, which — some Catalan looks a lot like Spanish; some, I don't even know what they're saying.
And so I just used Perplexity's app — because I'm a paid user, that's why I tend to use the app more — and I'm like, well, what does this say? And basically, it said something like, please don't dirty the walls — by which I think they meant, don't graffiti these old buildings — and then it said, tidiness is a sign of civilization, or something like that. So there was a moral to the story. But it was fun, and these were all over town. And then it's sort of like, okay, here's a little more context about those signs, where they came from, when they were put up. That was so satisfying. It was just so gratifying. And again, put it in the glasses, so I'm not in a restaurant taking a photo of the wall. I think that's important there.
Mike, the word "this" — that is what's amazing about these tools. And I saw this yesterday, particularly with Google: the ability to understand the antecedent, the ability to understand what we're referring to. What does "this" mean? So when they used it on a picture of a whiteboard with an arrow — how can I improve something here? — the answer was fairly obvious, but the understanding that that's what you're referring to. Or the cat in the box.
Right. I remember many, many years ago, going to the MIT Media Lab for the first time, and they showed this then-classic demonstration where it was a big deal that someone sat in a chair and said, put this there, put that here. And for the machine of the time to understand what was referred to by "this" and "that" and "there" and "here" was a leap. It was a gigantic leap. And right now — again, yesterday: where are my glasses? They're over by the apple, right?
I saw it — or rather, my camera saw it — two minutes ago. Yeah.
And so it's not just language. It's also perspective and intent. Now, meanwhile, these things still can't figure out that your hand can't go through a wall, right? Because they have no connection to reality.
Exactly. What I love is when you ask an AI for an image that contains words — it doesn't know what a letter is. It's just like, oh, it's this sort of shape of lines. But a little point that you touched on, that I haven't heard anybody really comment on: the glasses that they teased in the — what was it?
The Astra demo. There was a circle in the middle when she was asking about what was on the whiteboard. And I'm pretty sure what that is: in lieu of high-tech stuff like Apple has in the Vision Pro to know where your gaze is, it basically says, this is where you put your gaze — put whatever you're asking about in the middle, where the circle is. And I thought that was kind of an elegant solution.
Yes. When I had to train people, this is how old I am. When I trained the Chicago Tribune newsroom on the first word processing systems, I had to explain the concept of a cursor.
It was a big deal. People don't understand that at all. Right.
People have grown up with it. Probably like both of you, uh, it's a different matter, but you have to have this thing there where you want to do something. Right.
It's your shared understanding of where you're at. And so — but I thought that was an interesting thing. The other point I'd like to make about those glasses is that — I mean, it makes a lot of sense, and I'm pretty sure they're using the same hardware they used for Google's translation glasses, which they killed last year. Right? They look very similar, and why wouldn't they use those?
It's basically — you have some processing, you have internet, you have a phone connection, all that kind of stuff. And so I'm not sure how far along those glasses are, is the point. Not if they look like that — they're a little bit too thick. But how about you, Jason? Possibly.
I mean, I'm on the fence for the Ray-Bans. So if they are in the realm of the Ray-Ban kind of quality, then yeah, I would absolutely consider it. I probably would deeply consider it if they could deliver on what they showed off in the demo, even if they looked a little funky coming from Google. You know, I've already got Google Glass.
I can add it to my shelf. Um, we've got, we've got stuff to talk about. I do need to take just a super quick break and then we get back into it.
And I don't know if we need to go just a tiny bit long — are either of you okay with that? Or you've got an event you've got to go to, Jeff, right? Okay. All right. So just one second and we'll continue to talk about this. Oh, okay. All right.
We've got more Google and then we have some open AI and I think that's probably going to be about it for today. But Mike, I didn't mean to interrupt you. What were you going to say?
I was just going to say that the thing about the glasses is, I think, first of all, everybody's going to have them at some point. And people need to understand that AI glasses don't fall into the same category as the Rabbit thing or the other one, the pin. Exactly — the reason those types of devices will fail is precisely because they're not glasses. Glasses put something over your eyes, next to your ears, within earshot of your mouth, and so you can really use them naturally. Plus, glasses are socially acceptable and universal; pins are not.
And the thing that goes in your pocket, the Rabbit thing — well, that's just a phone minus a bunch of features. So I think that glasses are inevitable, and there are companies in Silicon Valley that are basically doing the hardware. They approach AI companies and say, okay, you sell glasses — they're OpenAI glasses — and your customers can pick from a gazillion different frames that all look like regular glasses. You can have visuals in one eye or both eyes.
You can have no visuals in either eye, but you can have a camera or no camera. And anyway, this is an industry behind the scenes that hasn't really emerged yet, but it's essentially like the PC industry in 1990, let's say, where you have all these sort of commodity hardware players who are basically being used by the software and operating system companies. And then on top of that you have an application layer, where people can have app stores and all that kind of stuff. This is coming, and it's going to be really, really interesting. And I don't see how a company like Google, if they don't come out with glasses, will be able to compete with that, because it's going to be really, really powerful. People will put their prescriptions in, they'll have sunglasses, and already we have smart glasses that cost as little as $160.
They give you ChatGPT. And so this category is going to really blow up when people see what it's like to wear these glasses, and when the glasses look like ordinary glasses. Jason, do you wear contacts?
I do wear contacts, yeah. So here's the question: will it be weird? Leo had the — whichever glasses he just got, I guess — but he has contacts, so he was going to take the lenses out. Are we going to see people who wear contacts, or have good eyes, walking around with glasses with no lenses?
Well, I mean, I guess that depends on the technology in the glasses. If they actually have some sort of heads-up display in the lens, you need a lens there.
Would you give up your contacts and go back to prescription lenses?
No, I probably wouldn't. I mean, I guess it depends on price and everything, but I would consider getting glasses. Like, you know, I tried on someone's Ray-Ban Meta glasses at I/O yesterday,
because I hadn't had the opportunity to do that. And he did not have a prescription — they were literally just Ray-Bans with Transitions, so they would be sunglasses outdoors and clear indoors. And I envision that if I was to get a pair of those glasses, I wouldn't lock them into just prescription, though, because then I'd always have to wear them without my contacts, and I want the flexibility.
I mean, the most common usage for the Ray-Ban Metas is as sunglasses. And those have lenses, but they're not prescription. So I think what people would do is just have clear lenses, you know — and why not? It will look more normal, which is the whole point.
But you see, it's weird to me because the glasses exist for a reason. They exist to look through. Yeah.
But there's a, there's a whole style component nowadays too. There, there are people that have no prescription needs yet. They wear glasses because they, they look cool or whatever, you know? I wonder if there'll be something new, something new invented
for people who don't need the lens, don't need this, but need this.
Well, I've been predicting for a long time that at some point it'll be common for people who don't need glasses to wear glasses, because they want all the electronics and they want to look normal. And look at all the things that have become normalized. One of them is having a Bluetooth thing in your ear, walking around in public, talking. So true. And this is totally acceptable.
Now, you point that out in your article, actually. Yeah, exactly. I think I showed it a little bit earlier, but everybody should go to Machine Society and check out "We're all talking to AI now," where you made a really salient point that I totally agree with: for a lot of people, talking to an AI agent and hearing it talk back to you can be a little off-putting. It requires practice. But what you point out in there is that we've encountered this so many times, and we always push through it because the technology is worth it. The Bluetooth headset being one — the first time you ever popped an earbud in your ear and had to talk through it and hear through it, it felt weird. It was a different experience, but at some point we got used to it. Same for, you know, voice-commanding our phones.
Maybe they're wearing AI hats. Right. And, you know — yeah, no: glasses. I mean, glasses just make the most sense to me. I don't see why it wouldn't be that, full stop.
Yeah. Absolutely. Go ahead, Mike. Yeah, I was just going to say that one of the leading companies in Silicon Valley doing this is called Avegant, and they make huge numbers of different styles of glasses, but they don't sell glasses. They build them for companies who do AI, to put their AI product in those glasses. And so, again, they're sort of in stealth mode. They have a high-tech light engine that projects images onto the lens. And they're working with another company; they've got a Snapdragon AR1 chip in there. And this category is going to blow up. I was a little surprised that neither OpenAI nor Google announced, okay, here are the glasses, they're shipping this summer.
I really expected that. And it's really going to be a problem for them — except for the fact that most of the AI glasses actually use OpenAI, so they're kind of set, with third-party people doing this kind of thing. But this is going to be a huge category. And again, some of the glasses are indistinguishable from ordinary glasses.
Yeah, they're getting there. And if they're not already there, you know, in the next couple of years I think we're going to see a lot more advancements, would be my guess. We do have at least one other thing that we need to talk about before the end of the show, because I knew it was going to be Google-heavy. But prior to the Google event — like, the day before — OpenAI had its event, which they announced sort of last minute, I think maybe over the weekend before, and then they scheduled it for Monday, the day before Google's I/O.
Everybody was guessing that it had something to do with search. OpenAI said, no, it has nothing to do with search. Ultimately, what was it? It was a new GPT: GPT-4o. Not five — so of course some people are like, well, are we at diminishing returns at this point? Why can't we hit five?
Why is it 4o? Well, the "o" stands for "omni" — that is, text, speech, video. So it's adding speech to the mix.
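For a sense of what "omni" means on the API side, here is a minimal sketch of sending text plus an image to GPT-4o in a single request, again assuming the OpenAI Python client and an OPENAI_API_KEY in the environment. The image URL is a placeholder, and the low-latency voice mode shown in the demo runs through separate audio endpoints rather than this basic call.

```python
# Minimal sketch of a multimodal ("omni") request: text and an image
# in one message to GPT-4o. Assumes the OpenAI Python client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is written on this whiteboard?"},
                # Placeholder URL; a base64 data URI also works here.
                {"type": "image_url",
                 "image_url": {"url": "https://example.test/whiteboard.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```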
And yeah, I mean, they did basically a half-hour demo, essentially a live demo, with this thing. And like we were talking about earlier, they've really cut down the latency, which is, I think, pretty important for this sort of thing. It's really close to human interaction levels. And the cost — and the cost.
Well, it's free to use. But I mean their costs — on the back end, they've reduced their own costs, which is an important thing. And this is one of the things that you have to contrast against the Google demo. So they came out and they said, okay, we have this omni-channel, multimodal thing that actually processes video and can see what you're looking at,
and you can chat with what it's looking at presently. Google showed the same thing, but OpenAI said twice as fast, half the cost; Google didn't say anything about the speed or the cost. And, I don't know, it's doing some really sophisticated things. We don't know what the cost is — for you, the consumer, or for the back-end operator — and we don't know about performance.
And they said, you know, it's going to be later in the year. So you can't really compare these. It seems like you can compare them directly, but you can't really. One of them is pretty real, and they're making claims about its viability, essentially, and the other one is doing a demo, and you don't know what the costs are, what the speed is, any of that stuff. So you have to take the Google one with more of a grain of salt than OpenAI's, for sure.
So I thought that it was going to be available now. If I go to ChatGPT on the web, I still have 3.5, and I have to pay for 4 — for 4o.
And the voice — that Scarlett Johansson voice — is later, right? And initially only for pro subs, you know, paid subscribers.
Now — someone we've had on the show. Yes, indeed.
You're close. Actually, side AC is called the adjacent side to the angle alpha.
It's got that very California.
Yeah. Well, we had Sal Khan, the founder of Khan Academy, on episode 10, I believe, and it was all about the future of AI's impact on education. He was talking about a lot of this. And this video that we're showing right now — if you're watching the video version; you can sort of hear it — is about using GPT-4o with his son to tutor around a mathematical concept. And, I mean, OpenAI released just an insane amount of videos that are all on this tip: using the camera to look at a dog and interact with the AI about the dog, and all sorts of really conversational, interesting demos.
And the part Jeff is not going to like, which was the fact that they're trying to perceive your emotions and reply accordingly. And so it's a little invasive. Don't worry about my emotions. Just answer my fricking question.
Yeah, I agree. I agree. And I don't like the word creepy in this sense, because I think it's nonspecific and tends toward moral panic, but I was creeped out by that. I think that — and I'm not a big privacy person, by reputation.
I care about privacy, but by reputation I'm not. Well, now it seems like a bit of an invasion, and it's the wrong relationship with the machine. It's a machine. No, it's software. It's a program.
Right. And behind it is all kinds of amazing data and a lot of capability. But the relationship — I think AI, especially because they're trying to push this AGI thing, they're trying to say, see how human it almost is?
It's a little bit farther and it's going to be us. And that's BS. It's not the case. It's not true. It's not going to happen, in my view. Come back to me sometime, and if I'm still alive, you can tell me I'm wrong.
If we do all this humanization — even if it's able to perceive general emotions and to speak like an ordinary human — it should all be directed at you not noticing it, not at "look at how human I am." Like, I just want to not think, oh, that sounds too robotic, or
that doesn't sound robotic enough. But, like, Siri is the worst at this. Siri will actually crack jokes and stuff — really dumb dad jokes — and you're just like, you're just wasting my time.
I don't have time for you to pretend like you have a personality. The testing on this should be all about the tone of voice not being something that distracts you or that you notice.
It's a fuzzy line, though, between those two things. Because, like we talked about throughout this conversation: does it really need to sound like a real person, a person that you're used to talking to? Does it need to have that accent and that vocal fry and all these elements? But at the same time, if we plan to communicate with these chatbots in the future — as humans, it's just more natural to talk to other humans. And if these can get to the point where we don't perceive a difference, then that's the ultimate kind of user interface: okay, well, we just talk to it as if it were human, knowing that
it's not. But that's the correction, right? We talk to it. So Ethan Mollick wrote the book Co-Intelligence, which is out now, and which I'm listening to now. And he argues in there that it's not human, and you shouldn't think it's human, but he says we should treat it as a human because we'll get the best performance out of it, because it's designed that way. It's designed to interact with us as humans. Right. And I take the point, but maybe that design is wrong.
That feels like a bug and not a feature to me. It shouldn't depend on how we treat it. And I wrote what I thought was an extremely interesting piece, I don't know, like 10 years ago or eight years ago or something, for Fast Company, which is: don't teach your kids to be polite to Alexa, or to any of those chatbots, because you're teaching them the wrong thing.
It's not. And, and I talked about why do we have manners? Why do we have politeness?
There it is. And the reason we have manners, the whole point of manners, is to show respect and concern for other human beings. And if we teach kids to have manners toward an AI — or, in this case, a voice assistant — we're teaching them that it doesn't matter that it's software running through its programs, just because this thing has a voice. And I made the point that if a kid doesn't say please and thank you to a jar of peanut butter they're opening, parents
aren't going to care. But then there are people who say please and thank you to Alexa because it has a voice. This is ridiculous. The assistant appliance has exactly the same sentience as the jar of peanut butter.
They're exactly the same, and they should be treated the same. We should teach kids that it doesn't matter if you're polite or not — it's not a person. You have to be polite to people, and there are good reasons for that.
So on certain days, I am grateful to a jar of peanut butter. Do you tell it that, um, in my head I do. Oh, thank you. I might want to hear that from time to time.
I'm glad you're saying that. Well, if you don't mind, Jason, I'd like to jump ahead to the story that's last on the rundown. Yeah. Yeah. I think that's the obvious next step. Yes.
Absolutely. Kevin Roose is at it again. So Kevin Roose of the New York Times, who's a very good tech writer — but in my next book, The Web We Weave, out in October, I take him to task for his relationship with Sydney, the personality of ChatGPT when it was introduced in Bing. And we all know the story now: Kevin reports that Sydney fell in love with him and wanted to break up his marriage. If you read the transcripts, it was ridiculous. Kevin had to try over and over and over again to play gotcha with it, to make it do what it was not designed to do.
And if we teach kids to have manners to, uh, to, to an AI or to, in this case, a voice assistant, we're teaching them that it doesn't matter that whether you're, you're, you're a software, you're running through your programs just because this thing has a voice. And I make the point, I made the point that like, if you don't say please and thank you to a jar of peanut butter, the kid is opening.
Parents aren't going to care, but then there are people who say, say please. And thank you to Alexa because it has a voice. This is ridiculous that, that, that, um, the assistant appliance has exactly the same sentience as the jar of peanut butter. They're exactly the same. It should be treated the same that we should teach kids that it doesn't matter if you're polite or not. It's not a person. You have to be polite to people. And there are good reasons for that.
So on certain days, I am grateful to a jar of peanut butter. Do you tell it that, um, in my head I do. Oh, thank you.
You know, I might want to hear that from time to time.
I'm glad you're saying, well, if I, if you don't mind, Jason, I'd like to jump ahead to the story that's last on the rundown. Yeah, I think that's the, that's the obvious next step.
Absolutely. Kevin Roose is at it again. So Kevin Roose of the New York Times, who's a very good tech writer, but in my next book, The Web We Weave, out in October, I take him to task for his relationship with Sydney, the personality of ChatGPT when it was introduced in Bing. And we all know the story now: Kevin reported that Sydney fell in love with him and wanted to break up his marriage. If you read the transcripts, it was ridiculous. Kevin had to try over and over and over again to play gotcha with it, to make it do what it was not designed to do.
Even when there were guardrails, he kept going. So he's blaming the machine for what he gave it, what he demanded. And it was all negative: oh, I couldn't sleep that night. BS. It's ridiculous. It was moral panic.
So now we have Kevin coming back again, having decided to make friends with a whole bunch of AI chatbots. And it's just as wrong the other way. They're not your friends. You're treating them wrong. You're setting up the context for the public wrong.
I think the value is not in that. Yes, they can be used for companionship for old people, I get that argument, but they're jars of peanut butter. I'm going to use this from now on, Mike: they're jars of peanut butter.
And that's great. So I think it's bad technology reporting, because it misrepresents what the machines are and what they can do. It misrepresents the relationship that I think we should have with them. And it sets it up...
I think they stole my body for that one weightlifter guy. Yeah, that looks strikingly similar. Just like me. Yeah.
But it also sets it up for what's going to happen: somebody's going to put up a transcript of the machine being mean to somebody, or mean to Kevin, and then they're going to go off on a moral panic rant, and they're living in the wrong universe. So thank you for indulging my rant.
Meanwhile, there is an issue with people, maybe Kevin Roose's readers or others, using AI chatbots for companionship. One of the things I've noticed for many years is these sort of virtual influencers on Instagram and elsewhere. They actually get modeling deals, they represent brands, all this kind of stuff. But people will see an Instagram post of this AI-generated person, and it gets, you know, 5,000 comments from people saying, oh, you look great. And if they think that's a person and don't know it's just a cartoon, that's troubling. And if they know it's fake and comment anyway, then I don't know, there's something weird going on there. Why are you commenting?
Who are you talking to? So anyway, it's going to be a weird phenomenon, and it's going to get weirder and weirder the more Scarlett Johansson-y it gets with AI chatbots. And, you know, some people are very lonely. I think Her was, again, I haven't seen it, like Jason, but I know enough about it.
I don't need to see it. You know, that sort of thing is absolutely going to happen. In terms of somebody thinking that this is a friend, it happened with Xiaoice in China a decade ago, which, by the way, I think still exists. Xiaoice is an AI model.
I think it was not generative AI, but it was based on what people say on social media in China. So you have these conversations and it will return to you the kinds of things people say on social, and supposedly millions of Chinese people tell it that they love it and have a relationship with it. It's their friend and all this kind of stuff. It's really sad. And one of the things I like to do with my newsletter is address that, because over the next 10 years it's going to be really crazy how many fake things there are and how convincing those things are, all manner of fake things. It's especially important for children, that they grow up and know the difference between what's real and what isn't, because the ability to simulate is overwhelming with these tools, and we can't lose the plot about who we're interacting with, what we're looking at, and all the rest. Yeah.
How much of it is awareness around the role-playing element of, oh yeah, I'm a chatbot, this is my chatbot name, blah, blah, blah? Yes, it's got a story, but I think it was in this article where it's compared to a grown-up Tamagotchi or something like that. How much of that awareness is there that this chatbot isn't real, at what point does that crossover happen, and what is the impact of something like that? I guess we don't really know.
The irony is that the old folks get upset with young people for staring at their phone thinking they're interacting with a machine when in fact they're interacting with a human being on the other end of it.
In this case, though, they may well start interacting with the machine. It's been around for a long time. I remember SmarterChild, which was started by two friends of mine at ActiveBuddy back in the day. It was beyond Eliza and that generation.
It was just a bit of fakery, but it was charming in its way. I think you're right, Mike: as long as people understand what's the machine and what's not, what's the harm? Yeah, what's the harm? Yeah, I think that's right.
I think it's our role as journalists and influencers and so on to constantly remind people that if somebody is lonely and lacks human presence in their life, a chatbot is not the answer. We've got to interact with people. That's what you need.
And so is that the kind of lack of responsibility in Kevin Roose's article, to an extent, from your perspective, Jeff? Yeah.
I mean, he was straightforward that he was talking to a machine, but he was modeling a behavior with it and an expectation of it.
At the same time, he's modeling a behavior that a lot of people are going to come to on their own, regardless of that article. But you're right, that gives it, you know, a certain bit of distance, maybe.
And I think a similar bit of scorn needs to go to OpenAI because of this Scarlett Johansson chatbot voice thing, where it says things like, oh, you're making me blush, and all this kind of stuff. It's just really the wrong direction.
Emily Bender is brilliant about this, the linguist at the University of Washington who was a co-author of the Stochastic Parrots paper. She reminds us that when we see meaning in what the AI says, the AI has no meaning; we are imputing it because it is our desire to see that meaning. And we see it through the linguistics that are used, all the verbs that get used. I think that it doesn't write, it doesn't create, it doesn't think, it doesn't believe. But now it's complicated on the next level by, as we said at the beginning of the show, these voices with prosody, and it's just a bit of a fraud. That's my problem.
I like Jaron Lanier's conceptualization of generative AI, that it's a way that humans collaborate. I think that's a really good way to look at chatbots and some of these other things. We're pouring everything into the pot, and we're essentially interacting with many, many human beings.
That's where the stuff is coming from. And so that's a good way to look at these things, rather than, oh, it's a fake person, or it's a person at all. Yeah.
We have definitely pushed up against the time, so we've got to wrap this up, but this was every bit as awesome a conversation as I knew it would be, with this week in particular and with you joining us, Mike Elgan. It's so great to get the chance to talk about this on a new show with you.
So congratulations on the new show. I don't think I've congratulated you yet. So congratulations. Thank you. Yeah.
Machinesociety.ai is where you can go to read Mike's work, but you're also writing in other places too. This is your own destination, but you're all over the map.
Yeah. This is what I write for me, and I also have a good subscriber base. It's actually a good living. But yeah, I'm really excited about this newsletter.
Love it. Well, keep up the great work. We'll definitely reach out when the news stars align once again and we can get you back on here, Mike.
It's been wonderful. Anytime, anytime. Right on. And Jeff, of course, gutenbergparenthesis.com for all of your bookly goodness. Yeah, thank you. And for dressing up for the recording, which is something I didn't do this week. On a special occasion.
Well, we do this show every Wednesday, and we do a live recording of it if you want to watch it happen live. We actually had our biggest turnout this week, so that's pretty awesome. 11 AM Pacific, 2 PM Eastern on the Techsploder YouTube channel at youtube.com/@Techsploder.
Be sure to like, rate, review, and subscribe wherever you happen to listen or watch. And of course, like I said earlier, you can support us on Patreon at patreon.com/AIInsideShow. We offer ad-free shows, early access to videos, a Discord community, regular hangouts with me and Jeff and the rest of the community, and a whole lot more. We also have executive producers of this show, if you support at a certain level: DrDew and Jeffrey Marriccini are our executive producers. Thank you, bosses.
Thank you deeply for your support, and thank you for watching and listening to this episode of AI Inside. It's been a lot of fun. We'll see you next time on the show. Bye bye, everybody.