Who Needs Humanoid Robots Really?
November 06, 2024 · 59:44

Jason Howell and guest Mike Elgan analyze the post-election tech landscape for AI, OpenAI's new ChatGPT Search capabilities, the democratization of robotics programming, and more.

πŸ”” Support the show on Patreon!

Note: Time codes subject to change depending on dynamic ad insertion by the distributor.

NEWS

0:03:15 - What will the AI industry look like over the next four years of a Trump presidency?

0:05:58 - Introducing ChatGPT search

0:17:25 - OpenAI Swarm

0:23:15 - A faster, better way to train general-purpose robots

0:29:48 - Humanoid robots are a bad idea

0:34:30 - Who needs a humanoid robot when everything is already robotic?

0:39:20 - Anthropic: The case for targeted regulation

0:43:43 - Meta says it’s making its Llama models available for US national security applications

0:45:20 - Exclusive: EU AI Act checker reveals Big Tech's compliance pitfalls

0:49:39 - Alexa’s New AI Brain Is Stuck in the Lab

[00:00:00] This is AI Inside, episode 42, recorded Wednesday, November 6th, 2024.

[00:00:06] Who Needs Humanoid Robots Really?

[00:00:11] This episode of AI Inside is made possible by our wonderful patrons at patreon.com slash AI Inside Show.

[00:00:17] If you like what you hear, head on over and support us directly, and thank you for making independent podcasting possible.

[00:00:31] What's going on, everybody? Welcome to another episode of AI Inside.

[00:00:35] AI Inside is the show where we take a look at the AI that is layered throughout so many things, like a tiramisu, a delicious tiramisu, or a lasagna, however you want to look at it.

[00:00:45] If it's your main, or if it's your dessert.

[00:00:47] You know, either way, I'm all for it.

[00:00:49] I'm one of your hosts, Jason Howell.

[00:00:51] My co-host normally is Jeff Jarvis.

[00:00:54] He's actually traveling right now.

[00:00:56] I think he's in Germany, in fact.

[00:00:57] So he is very much out of pocket, which I would say precludes him from being on the episode today.

[00:01:04] But that would not preclude our guest from being on the episode from Bali, of all places.

[00:01:10] But not on a beach, Mike Elgan.

[00:01:12] I'm a little disappointed that you aren't podcasting, like literally sitting on the beach.

[00:01:17] Well, first of all, I'm in Baja, not Bali.

[00:01:19] Oh, sorry, sorry.

[00:01:20] Baja.

[00:01:20] Okay.

[00:01:21] The other four-letter B place.

[00:01:23] I am literally on the beach.

[00:01:24] I'm in a house that is literally on the sand.

[00:01:26] So I'm looking at the beach that's 50 feet away right here.

[00:01:30] That does me no good.

[00:01:31] I want to feel the beach right now.

[00:01:34] Believe me, today of all days, I want to be on a beach.

[00:01:37] That's exactly where I want to be.

[00:01:38] Well, it's good to see you, man.

[00:01:39] How have you been?

[00:01:40] Of course, Machinesociety.ai is where everybody needs to go to check out your writing, the newsletter, which is fantastic.

[00:01:49] I subscribe to that.

[00:01:50] Get that on a regular basis.

[00:01:52] Yeah.

[00:01:52] Yeah, it's good to see you.

[00:01:54] Had you on a handful of months ago and had a lot of fun.

[00:01:57] And with Jeff out, you know, you were the perfect fill-in.

[00:02:00] So as best as Jeff could be filled in for.

[00:02:05] Another ill-tempered old white guy.

[00:02:07] That's what you need for that slot.

[00:02:10] Somebody complained.

[00:02:10] Right.

[00:02:10] A lot of complaints.

[00:02:13] And a lot of opinions on AI.

[00:02:14] I think at the end of the day, that's the most important part.

[00:02:18] Yes.

[00:02:18] Yes.

[00:02:19] Mike, it's really great to have you here.

[00:02:21] And I'm excited to talk about everything that we will talk about.

[00:02:24] One second before we get started, just to let everybody know, big thank you to those of you who support the show on Patreon.

[00:02:30] That's patreon.com slash AI Inside Show.

[00:02:34] Row Code is one of our supporters.

[00:02:36] Thank you, Row Code.

[00:02:37] And thank you to so many other people who support us.

[00:02:41] So, you know, head on over to patreon.com slash AI Inside Show.

[00:02:45] And you can throw a little financial support our way.

[00:02:48] And we really appreciate it.

[00:02:49] It enables us to continue doing this show on a weekly basis.

[00:02:52] And we love that.

[00:02:54] And if you happen to be watching live while we record, welcome, first of all.

[00:02:58] But be sure to subscribe to the podcast.

[00:02:59] That way, if you miss it live, you won't actually miss the episode entirely.

[00:03:04] So go to AI Inside Show and you can subscribe to the show from there.

[00:03:07] With all that being said, it's time to get into the news.

[00:03:11] And yes, today is post-election day.

[00:03:14] But I think it's still a little fresh.

[00:03:18] So I think most of the news today has absolutely nothing to do with the election.

[00:03:24] And I don't know about you, Mike.

[00:03:25] I'm okay with that.

[00:03:27] Yes.

[00:03:27] We can talk about it.

[00:03:28] It's nice to disappear into the tech news.

[00:03:31] A lot of our colleagues, and I think yourself included, do a really good job of staying away from politics.

[00:03:38] I find public commentary on politics sometimes irresistible.

[00:03:44] Sometimes I get my act together and I keep my mouth shut and stay in my lane.

[00:03:48] But we got a lot of colleagues who never talk about politics, and I don't know how they do it.

[00:03:53] Yeah.

[00:03:54] It's a hard one because sometimes it feels really important and feels very prescient and now.

[00:04:01] And I mean, that certainly is the feeling that I have today.

[00:04:04] I mean, I think if we are talking about politics, I will just say I'm really curious to see with the influence that it seems that big tech has in the upcoming Trump White House here.

[00:04:18] In the U.S., you know, namely Elon Musk.

[00:04:21] I mean, we know Elon Musk is going to have some sort of interaction, or at least that seems to be what's being signaled at this point.

[00:04:29] It's going to be really interesting to see how that influences kind of regulation around technology, regulation about artificial intelligence, the kind of injection of AI or the open arms kind of nature to AI as we enter the next four years.

[00:04:46] At a time where AI is just exploding in progress, you know, progressing the technology and everything.

[00:04:52] It's kind of an interesting moment, and I don't even know how to predict what that will look like.

[00:04:57] Yeah.

[00:04:58] A worst case scenario, he puts Barron in charge of U.S. AI policy.

[00:05:02] Barron Trump, his son.

[00:05:04] Okay.

[00:05:04] Every time somebody asks him about AI, he says, oh, the cyber.

[00:05:08] My son Barron is like so good with a computer.

[00:05:10] So I don't think that's going to happen.

[00:05:12] The worst case scenario is that I think the signals around Elon Musk are that Trump wants him to do to the federal government what he did to Twitter, which is basically just fire most of the people in government.

[00:05:25] And so that could prove to be even worse than Twitter.

[00:05:31] I mean, I don't know.

[00:05:32] I don't want to – we probably don't want to talk about politics, but I think that it's a huge mistake to put Elon Musk anywhere near the federal budget.

[00:05:44] Yeah.

[00:05:45] Yeah.

[00:05:45] Going to be real interesting.

[00:05:47] That's for sure.

[00:05:49] Buckle up, y'all.

[00:05:50] There's going to be some interesting stories coming over the next handful of years.

[00:05:54] Okay.

[00:05:55] Well, then, why don't we talk about some AI news?

[00:05:59] And actually, the first story that we'll talk about today is something that had been kind of teased and kind of a long time coming.

[00:06:07] We knew that eventually OpenAI was going to launch its search product, and it has done that.

[00:06:13] As of last week, it unveiled ChatGPT Search, which is essentially its AI-powered search engine.

[00:06:20] It's integrated into ChatGPT.

[00:06:21] It's actually running a specialized version of GPT-4o.

[00:06:25] Yeah.

[00:06:25] And essentially gives you what we've seen.

[00:06:29] Like, I'm a user of Perplexity, so I've been using Perplexity for many months now.

[00:06:34] And I'm actually a really big fan of how it tackles this very feature.

[00:06:39] Yes.

[00:06:39] And I have not used the ChatGPT Search product yet.

[00:06:43] Right.

[00:06:43] It's only available for paid users right now.

[00:06:45] But what are your thoughts on the kind of convergence of search and AI right now?

[00:06:50] Because I know Jeff has felt, you know, feels and voices his kind of, I don't know, his opinion is a little bit on the negative.

[00:06:58] Like, I don't know that the peanut butter and the chocolate should mix together.

[00:07:03] Yeah.

[00:07:03] My experience with Perplexity has been it's actually been incredibly useful.

[00:07:07] But I'm curious to know what your thoughts are.

[00:07:08] Well, Perplexity is a RAG tool.

[00:07:11] It's a retrieval-augmented generation tool.

[00:07:14] And I agree with you.

[00:07:15] I think it's very good.

[00:07:16] It does hallucinate.

[00:07:17] It does give you weird answers.

[00:07:20] And sorry about that.

[00:07:21] But it's windy here.

[00:07:22] We're in sort of the kite surfing capital of Mexico here.

[00:07:27] Okay.

[00:07:28] But basically what that means is that they have their own version of PageRank, literally the Google tool for searching.

[00:07:40] But the Perplexity version is optimized for better sources.

[00:07:46] So it's more sensitive about the quality of the source, which is one reason why it's slightly better than Google Search.

[00:07:52] And it gives you the results.

[00:07:53] Like, it'll search for 10 things and it'll give you the links to those things just like a search engine, more or less.

[00:08:00] ChatGPT Search, ironically, is not a RAG tool.

[00:08:03] It's kind of like a RAG tool.

[00:08:06] It basically uses Bing primarily because of the association between OpenAI and Microsoft.

[00:08:13] But it uses other search engines and other tools.

[00:08:16] Basically, it cobbles together the information.

[00:08:21] And instead of basically having this sort of spidered index, which is, of course, how search works, it will do the search in real time when you go to do a query on ChatGPT Search.

[00:08:37] And then a lot of how it works is not well known.

[00:08:42] We haven't been able to dissect it.

[00:08:44] They haven't really announced the specifics.

[00:08:46] And it's also probably a moving target.

[00:08:48] They have their own web crawler called GPTBot, which basically crawls the web very quickly.

[00:08:54] So I think they're cobbling together different things.

[00:08:56] And it's a little bit, for the most part, it's less of a competitor to search and more of a thing that's built on top of search.

[00:09:06] Yeah, augment.

[00:09:08] Yes.

[00:09:08] But I think the bigger picture is that traditional search that has no AI component is dead man walking, basically.

[00:09:17] Because everybody's going to want to use AI.

[00:09:20] Because AI can put things together better for you.

[00:09:24] And RAG tools like Perplexity, and there are a few others, and ChatGPT Search, presumably, will bring together multiple sources and sort of summarize it for you.

[00:09:33] And that's more useful than here's a bunch of links.

[00:09:35] Now you have a homework assignment.

[00:09:37] You have to go read through these and figure out on your own what's happening.
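The retrieval-augmented pattern described here can be sketched in a few lines: rank sources against the query, then build a prompt that asks a model to answer from those sources with citations. This is a toy illustration with an invented scoring function and corpus, not Perplexity's or OpenAI's actual internals.

```python
def score(query, doc):
    """Naive relevance score: count of words shared between query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, corpus, k=2):
    """Return the top-k documents ranked by the naive score."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query, sources):
    """Assemble a prompt asking the model to answer only from the retrieved sources."""
    numbered = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return f"Answer using only these sources, citing them:\n{numbered}\n\nQuestion: {query}"

corpus = [
    "ChatGPT Search runs a specialized version of GPT-4o.",
    "Perplexity ranks sources by quality before summarizing.",
    "Industrial robots are programmed with proprietary languages.",
]

sources = retrieve("How does Perplexity rank sources?", corpus)
prompt = build_prompt("How does Perplexity rank sources?", sources)
```

A real system would replace the word-overlap score with an embedding index and hand the prompt to an LLM; the shape of the pipeline is the point here.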

[00:09:40] One of the things I like to do, by the way, and just before we move off the topic, and this is a little bit off topic, which is that I'm also a fan of Google's Notebook LM.

[00:09:50] I know that – I'm sure you've used it.

[00:09:52] But basically, Notebook LM is a thing where you're supposed to keep your notes there.

[00:09:56] But in reality, what it really is is you can just dump all these documents in there, and then you can ask questions of the documents.

[00:10:02] And it will get its information from the documents that you upload.

[00:10:05] So one of the things I like to do is, let's say I'm researching a technical topic, and I don't – I just want to know.

[00:10:15] I just want to learn, right?

[00:10:16] So it's not something I'm doing for publishing or something like that.

[00:10:18] I just want to learn something on a highly technical topic.

[00:10:21] So I'll go to Perplexity AI typically, and I'll say, you know, tell me all about it.

[00:10:26] Give me a bullet point list of all the facts around this or the most important things or summarize this or something like that.

[00:10:30] And it will give me its result, and it will make claims in there.

[00:10:33] And then I take the PDFs produced by the scientists or the researchers or whatever, and I upload those into Notebook LM.

[00:10:40] And then I use Notebook LM to fact-check Perplexity.

[00:10:43] So I'll copy a claim made by Perplexity, paste it into Notebook LM, and say, is this accurate?

[00:10:49] And Notebook LM has never failed me in terms of telling me exactly – it will say, yes, this is accurate, blah, blah, blah, and give me more.

[00:10:58] Or no, not quite, or almost accurate, but here's the problem with it.

[00:11:01] And so I would never recommend that anybody just use any AI-based tool to get facts and information and just take those facts and information and just believe them.

[00:11:13] And just run with it.

[00:11:14] You always want to do it against it.

[00:11:15] But Notebook LM is a great fact-checking tool if you have the primary documents.
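Mike's cross-checking workflow, taking each claim one tool produced and verifying it against primary documents loaded into a second tool, can be sketched like this. The `verify_against_documents` function is a hypothetical stand-in for asking Notebook LM "is this accurate?"; here it just checks whether a claim's key terms appear anywhere in the uploaded documents.

```python
def verify_against_documents(claim, documents):
    """Toy verifier: a claim passes only if every keyword appears in the documents."""
    keywords = [w for w in claim.lower().split() if len(w) > 4]
    text = " ".join(documents).lower()
    return all(word in text for word in keywords)

# Stand-ins for the PDFs you would upload as primary sources.
primary_documents = [
    "The researchers trained the model on 52 distinct robot datasets.",
    "Evaluation covered four simulated and real-world manipulation tasks.",
]

# Claims extracted from another tool's summary, to be checked one by one.
claims = [
    "The model was trained on robot datasets.",           # supported by the documents
    "The model was trained on autonomous vehicle logs.",  # not supported
]

results = [verify_against_documents(c, primary_documents) for c in claims]
```

The real value, as described in the conversation, is that the verifier only ever consults the documents you gave it, so an unsupported claim fails instead of being confabulated around.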

[00:11:19] Yeah, yeah, that's so interesting.

[00:11:21] Yeah, Notebook LM is incredibly powerful, and I think that is really the kind of – the big, powerful use case of these services is when you start to really get a better understanding of what each one of them is good for between them.

[00:11:39] So that you can pop in and out of them as needed and pull this output into this other thing that's good for this very specific thing.

[00:11:48] I mean, eventually, maybe that gets easier.

[00:11:50] I mean, Perplexity certainly has a lot of different aspects to it, and having a lot of different models that you can use within it kind of alleviates some of that pain a little bit, allowing you to kind of stay in it.

[00:12:02] But what's interesting is as you were talking about the list of blue links and kind of heading back into the search mindset, the list of blue links that you then have to work for to really get the information.

[00:12:16] It's so funny that to me it's only taken me a year of using these tools to the degree that I do for that to feel like actual work.

[00:12:24] Meanwhile, for the last couple of decades, that was just the internet.

[00:12:28] Right, right.

[00:12:29] That was just the way it was.

[00:12:30] And now it's like, oh, man, that kind of hurdle, that speed bump, isn't nearly as painful anymore.

[00:12:39] That's right.

[00:12:40] Once you get more comfortable with the tools and understand what they're good for and, you know, what to look out for.

[00:12:45] Well, like all powerful technologies, it makes things better and worse at the same time.

[00:12:50] So, yeah, you know, people are afraid that this is the new Wikipedia.

[00:12:53] So students would just go to a generative AI chatbot and say, write my essay for me.

[00:13:00] And the whole value of an essay is that it's hard to do.

[00:13:05] The work that you put into it is the whole point of the essay.

[00:13:08] It's not going to be published.

[00:13:11] Nobody's going to read it.

[00:13:12] You know, the professor is barely going to read it.

[00:13:13] You'll never read it again.

[00:13:14] It has no function other than your work and your labor and the mental process of figuring out how to write this essay.

[00:13:21] And if ChatGPT just writes it for you, there's no point in even assigning it.

[00:13:25] So there's a dumbing down factor with these tools.

[00:13:28] At the same time, for people who are seriously curious and seriously interested about learning, it's the greatest learning tool ever created.

[00:13:36] And I personally have learned so much.

[00:13:38] I've gained so much more knowledge from using these tools simply because I'm always asking.

[00:13:44] And then I can come back with a follow-up question or I can accept one of their follow-up questions.

[00:13:49] And I just – it's so readily available.

[00:13:52] I'm also one of the people who believes that the thing that's going to replace smartphones within 10 years, probably closer to 10 years than one year, is glasses, AI glasses that are augmented reality, AI-based glasses.

[00:14:07] And so in a context like that, the audio information output is going to be more valuable than the visual information that it can produce.

[00:14:17] And so we're just going to be talking to an agent, and it will be an agentic AI, which will be goal-based.

[00:14:25] So we'll be able to basically tell it to go figure something out or, oh, you know, every time I'm near something, tell me and all that kind of stuff.

[00:14:32] And so the whole paradigm of traditional search engines, where you get a list of results, is totally incompatible with glasses.

[00:14:41] For sure.

[00:14:42] Walking around.

[00:14:43] And so we need to get to a point where we have non-hallucinating agents that give us good information that can go beyond that and actually go do things for us.

[00:14:53] You know, as we're starting – we're just creeping into this world where LLM-based tools can actually use our computers and, you know, open apps and figure things out and all that kind of stuff.

[00:15:06] So this is going to really accelerate the replacement of smartphones with glasses because if the agent we're talking to through our glasses can go use a computer, can search the internet, can go to websites, can log in on our behalf, do things for us, then we're going to really like that.

[00:15:24] That's going to be really great.

[00:15:25] And the whole process of sitting down at a laptop and using applications and going to websites will seem a little slow.

[00:15:33] It will seem a little bit like Google search seems right now.

[00:15:37] It's like we still do it.

[00:15:38] It's still useful.

[00:15:40] But it feels kind of throwback-ish if you've been using these tools.

[00:15:44] A bygone era.

[00:15:45] Yeah, exactly.

[00:15:46] So I think that's where we're headed.

[00:15:48] Yeah, interesting.

[00:15:50] Interesting.

[00:15:51] Gosh, I feel like we just need to have conversations like this instead of talking about news.

[00:15:54] The question that came up for me around that is you said hallucination-free.

[00:16:00] And I actually – now we're totally off topic, but that's okay.

[00:16:04] I don't mind at all.

[00:16:05] I think today, you know what?

[00:16:06] Anything goes.

[00:16:08] I was driving a couple of days ago and heard an ad on the radio, and it was talking about an AI service, and that was one of the things they were saying.

[00:16:20] Hallucination-free.

[00:16:21] And I mean – but that seems – isn't that an unattainable goal?

[00:16:27] I mean –

[00:16:28] No, I don't think so.

[00:16:28] Maybe it's a goal.

[00:16:30] It's a great goal to have.

[00:16:31] But I just found it interesting that they were claiming hallucination-free, and I just kind of am of the assumption that like that just kind of – that's part of the oxygen.

[00:16:40] It just happens.

[00:16:42] How can you claim that?

[00:16:43] So, first of all, the experience we all have with LLMs is we go to an all-purpose – we go to ChatGPT, right?

[00:16:50] We go and we search.

[00:16:51] Search it like it's – you know, ask it a question.

[00:16:54] Or, you know, if you're better at prompt engineering, you'll say, you're a world-class scientist who knows everything about biology.

[00:17:00] And then you say what the question is about biology.

[00:17:04] But that's – the all-purpose nature of that tool, I think, is going to fade into the background.

[00:17:11] Like, we're not going to be using tools like that.

[00:17:12] I think that the next phase – and again, this is, you know, years away, three, four, five, six years, I don't know how many years in the future – we'll be using agentic AI swarms.

[00:17:23] And this gets really sci-fi-ish.

[00:17:25] But basically, one of the tools that OpenAI has come out with is called Swarm.

[00:17:31] It's a not-ready-for-production, not-ready-for-primetime tool.

[00:17:35] But what it's designed to do is coordinate special purpose LLMs, essentially.

[00:17:43] So, imagine if you had an LLM that's like a RAG tool that can search the internet and make sense of that.

[00:17:49] And another one that's great with language translation.

[00:17:51] Another one that can make pictures.

[00:17:53] And another one that can do this.

[00:17:53] And you have 30 of those, right?

[00:17:55] And then the agents themselves – or there's like a central agent that can direct and bring the specialized agents into the action, transfer the memory of like what the interaction has been so far so they're on board with what the project is, and figure things out.

[00:18:13] So, I think, you know, so let me give you an example.

[00:18:18] Let's say you had – you know, you worked for TWiT forever.

[00:18:24] All TWiT content, all the transcripts for all TWiT shows, should be poured into an LLM.

[00:18:29] And then you would be able to query that without reference to the training data, right?

[00:18:37] And so, if you did something like that, you would very likely have right off the bat a non-hallucinating chatbot because it would tell you – like you'd say, well, what did Jason say in the year 2009 about blah, blah, blah, blah?

[00:18:54] And it would – it has no bad data.

[00:18:57] It just has the transcripts – unless a transcript is flawed or whatever.

[00:19:00] But that's the thing about bad data.

[00:19:01] It's the hoovering up of all information that gives us the biases, that gives us the misinformation, because there's bias and misinformation in the data.

[00:19:12] And so, if you have little agents, each of which is very specific and is brought into service as a specialist, that's going to give you a much better result than these all-purpose, all-data-based chatbots, I think.
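The swarm pattern Mike describes, a central dispatcher handing a task plus the shared conversation context to whichever specialist fits, can be sketched as below. The agent names and the keyword routing table are invented for illustration; OpenAI's experimental Swarm library works differently in its details.

```python
def translate_agent(task, context):
    return f"[translator] handling: {task}"

def search_agent(task, context):
    return f"[searcher] handling: {task}"

def image_agent(task, context):
    return f"[illustrator] handling: {task}"

# Keyword routing table standing in for an LLM-based dispatcher.
SPECIALISTS = {
    "translate": translate_agent,
    "search": search_agent,
    "draw": image_agent,
}

def route(task, context):
    """Pick the first specialist whose trigger word appears in the task."""
    for trigger, agent in SPECIALISTS.items():
        if trigger in task.lower():
            # Hand over the shared memory so the specialist is on board
            # with the interaction so far.
            return agent(task, context)
    return f"[generalist] handling: {task}"

context = {"history": []}
reply = route("Search the web for HPT robot training papers", context)
```

In a real swarm, the routing decision and the handoff of memory would themselves be made by a model rather than a keyword table, but the handoff structure is the same.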

[00:19:29] Right.

[00:19:29] Yeah, specific kind of aligned data to – and that's probably part of why something like Notebook LM is so impressive, right?

[00:19:41] Because you can feed it exactly what it needs to know or exactly what you're querying around.

[00:19:49] And you aren't going – you don't have to expect or assume that you're going to get anything that falls outside of that data set.

[00:19:56] It'll give you good data.

[00:19:57] Yeah, the thing that Notebook LM and, in fact, many LLM-based chatbot tools suffer from is they can't really tell – I mean, sometimes they do an impressively good job, but they can't really tell what's important.

[00:20:12] So, I've had – I know people who are authors and they put the digital version of their book into – and say, you know, give me the most important points.

[00:20:21] And they're always disappointed by what the AI thinks are the most important things because on what basis would it know what's important?

[00:20:31] Only humans can – right now, only humans can really understand.

[00:20:34] And to a certain extent, meaning is something that's personal to us.

[00:20:38] But it'll be a while before they're really good at that sort of thing.

[00:20:43] And sometimes when you ask an LLM to summarize something, it tends to emphasize secondary points that are not really –

[00:20:51] Totally.

[00:20:52] Totally.

[00:20:52] I've certainly had that where it's like, oh my god, this video or whatever that I'm watching on YouTube, this is so rich in information.

[00:21:00] I could sit here for the next 45 minutes, an hour, and make my notes or ask the thing to summarize it.

[00:21:07] And I get the summary and it's like, okay, well, you got some of the stuff, but you didn't get – how do I get you to be comprehensive?

[00:21:12] Like, just be comprehensive.

[00:21:14] Well, so back to ChatGPT Search, OpenAI has relationships with news organizations, like with traditional news organizations.

[00:21:22] I forgot the list, but it's like AP and a bunch of others.

[00:21:25] It's Reuters, AP.

[00:21:28] Yeah, I can't remember.

[00:21:29] Axel Springer.

[00:21:31] But real news organizations, traditional news organizations.

[00:21:34] And this is gold for understanding what's important because in traditional journalism, you write in what's called inverted pyramid style.

[00:21:42] So in a straight news story, not a feature, not an opinion piece, but a regular reported news story,

[00:21:51] the first sentence is the summary of the thing, right?

[00:21:56] It's like – for example, if you were to do a news story about the election, you wouldn't start with, oh, Minnesota, blah, blah, blah.

[00:22:02] You would start with, Donald Trump won the election and blah, blah, blah.

[00:22:06] And that would be the first sentence.

[00:22:07] The second sentence would be the second most key thing.

[00:22:10] So to a certain extent, traditional news stories are already coded for priority just based on chronological order.

[00:22:20] So that's really valuable stuff.

[00:22:22] A video, on the other hand, a conversation like this video, right?

[00:22:26] This video and all other podcasts and most other video systems, you know, the nuggets are – just come up somewhere.

[00:22:36] And there's gold in there.

[00:22:38] But it's not the first thing you say.

[00:22:40] The first thing you say is just like, hey, you know, go to my Patreon.

[00:22:43] So you know what I mean?

[00:22:44] So it's like that's not the main point of what we're doing.

[00:22:48] Everybody should do that.

[00:22:49] But that's not the main point of the content of this podcast.

[00:22:53] So it's not ordered that way.

[00:22:54] But news is.

[00:22:55] So that's actually a good thing.

[00:22:56] And they should take advantage of that.

[00:22:58] And I'm sure they are.

[00:22:59] Yeah, indeed.

[00:23:00] Indeed.

[00:23:01] And I think anyone using it would expect for that as well.

[00:23:06] So interesting stuff there.

[00:23:08] You shared a couple of links, which we can talk about now.

[00:23:12] First of all, the word of the day obviously is heterogeneous pretrained transformers.

[00:23:19] Yes.

[00:23:20] Also known as HPT.

[00:23:21] Maybe that's not the word of today.

[00:23:23] But anyways, tell me a little bit about this.

[00:23:26] Yeah, right.

[00:23:28] The research.

[00:23:29] Yeah, tell me a little bit about this.

[00:23:31] Okay.

[00:23:31] So MIT researchers are tackling the problem of robot training.

[00:23:37] So robot training is really, really hard.

[00:23:39] And the reason is basically what you do.

[00:23:42] If you've ever seen, I recommend that everybody go look at videos of the Tesla assembly line.

[00:23:49] These robots are amazing.

[00:23:51] They pick up whole cars and move them over here.

[00:23:54] And then these arms come in.

[00:23:55] And at high speed, with incredible precision, they're bolting things together.

[00:24:00] And there are humans that are kind of facilitating things.

[00:24:02] But to a very large extent, industrial robots can be very capable.

[00:24:06] The amount of training it takes to get robots to be that precise and to be flexible enough to deal with the small little variations in what comes across the assembly line is very, very time-consuming and difficult.

[00:24:19] First of all, all of the robot companies have their own sort of proprietary programming language that they use.

[00:24:27] So if you're building a factory and you're using robots from four different companies, you need staff who are proficient in four different programming languages.

[00:24:38] And they use these little things that are called teach pendants.

[00:24:44] They're basically little handheld devices with this Taco Bell-like cash register interface where they can control the robot and make tweaks in the programming.

[00:24:56] Another way to program robots is they have a human teleoperating a robot.

[00:25:01] So the human is doing the stuff.

[00:25:03] And that gets some initial data that they can then download into the robot.

[00:25:08] And then they have to tweak it.

[00:25:09] But it's time-consuming.

[00:25:10] It's really hard.

[00:25:11] So they had this incredible genius idea.

[00:25:15] What if robot software worked like ChatGPT?

[00:25:21] So the idea is they take all this data from robotics.

[00:25:26] They take the programs that have been written to control robots.

[00:25:30] They take the sensor data from robots.

[00:25:33] They take the visual data from robots.

[00:25:35] They take the simulation data from robots.

[00:25:37] They encode it so that it's something that can be read by a transformer.

[00:25:42] Right?

[00:25:42] So they encode it so that it's basically the equivalent of text, the way ChatGPT uses text.

[00:25:48] And then they use an LLM, basically, to process all this data so that right off the bat, the robot already has a lot of knowledge.

[00:26:01] And so far, they're able to get it to sort of 80% quality, 90% quality, which is not good enough for a robot.

[00:26:11] But it's a starting point for the robotics.

[00:26:15] And the idea is they would have this – they would have like a ChatGPT for robots that they could download into any robot.

[00:26:25] A universal robot brain.

[00:26:27] Exactly.

[00:26:28] And right off the bat, the robot would have some knowledge and skill about how to do things.

[00:26:32] They wouldn't have to – every robot wouldn't have to start from scratch.

[00:26:35] It's a genius idea.
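The core encoding idea, turning heterogeneous robot data such as camera frames and joint sensors into a shared token format so one transformer-style model can consume all of it, can be sketched as below. `TOKEN_DIM`, the encoder functions, and the data are all invented toy stand-ins for the learned encoders in the MIT work.

```python
TOKEN_DIM = 4  # every modality is projected into tokens of this width

def encode_vision(pixels):
    """Toy vision encoder: average pixel blocks into one fixed-width token."""
    block = max(1, len(pixels) // TOKEN_DIM)
    return [sum(pixels[i:i + block]) / block
            for i in range(0, block * TOKEN_DIM, block)]

def encode_proprioception(joint_angles):
    """Toy sensor encoder: pad or truncate joint readings to the token width."""
    return (joint_angles + [0.0] * TOKEN_DIM)[:TOKEN_DIM]

# Two very different inputs...
camera_frame = [0.1, 0.5, 0.9, 0.2, 0.4, 0.8, 0.3, 0.7]
joint_state = [0.25, -0.5]

# ...become tokens of the same shape, ready for one shared model,
# the way ChatGPT treats all text as one stream of tokens.
tokens = [encode_vision(camera_frame), encode_proprioception(joint_state)]
```

The real system learns these encoders rather than hard-coding them, but the payoff is the same: once everything is tokens of one shape, a single pretrained model can serve as the shared starting point for many robots.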

[00:26:36] One problem with it is that you look at something like ChatGPT, which basically has all knowledge known to mankind poured into this thing, which it's using.

[00:26:45] And the more data, the better.

[00:26:46] Well, there's not that much robot data out there.

[00:26:49] I mean, there's nowhere near as much robot data to be had when you compare it to all written knowledge everywhere.

[00:26:58] And so that's a problem.

[00:27:00] So they'll have to figure out how to produce more robot data.

[00:27:04] And then there are also – this is early days for this technology.

[00:27:08] But I think this is kind of where we're going with this technology.

[00:27:13] And it's just an absolutely brilliant system, and they're already having some success with it.

[00:27:18] So I just think that's kind of interesting.

[00:27:20] It's not good news for people who are afraid of robots and who fear that Skynet or whatever will go online and the Terminator robots in the future will come back and kill us all, because this is totally going to help that whole scenario.

[00:27:34] But it's great news for industries who want to accelerate and reduce the costs around automation.

[00:27:40] So it's really, really interesting information.

[00:27:43] And I think that your audience would certainly like to know about it because it's –

[00:27:47] For sure.

[00:27:48] The way to look at it is basically like ChatGPT for robots.

[00:27:52] Yeah.

[00:27:53] I think what comes to mind for me is that so much of what we've seen in the last couple of years around LLM technology and generative AI – and there's a couple of examples in today's rundown –

[00:28:04] is that artificial intelligence, in the way that we're seeing it used right now in situations like this, is very democratizing.

[00:28:14] It opens the playing field.

[00:28:15] It lowers the barrier for people who, prior to this technology, had to have years of education and all of this experience to do this thing.

[00:28:27] And then here you have this unified language, this LLM, that in some ways, I would imagine, kind of lowers the barrier for robotic brains, essentially.

[00:28:38] It's absolutely designed to do that.

[00:28:40] It's a lot more accessible.

[00:28:41] Yeah, absolutely.

[00:28:42] It's designed to do that.

[00:28:43] And of course, this is a good thing and a bad thing depending on who's doing it and for what purpose.

[00:28:47] You talk about democratizing things: it's great for somebody who's trying to run a small business and who's doing a lot of coding.

[00:28:57] And needs to do some coding in a language that they're not that proficient in.

[00:29:01] Like, you know, there are LLM-based tools that help them write code and debug their code and all that kind of stuff.

[00:29:07] But it's also very useful for malicious cyber attackers. It used to be you'd get this illiterate email saying it was from your bank.

[00:29:16] And you'd be like, come on, give me a break.

[00:29:18] And you'd dismiss it.

[00:29:19] Now it can be perfect.

[00:29:20] And it can have the same logo and same look and feel as the bank's email.

[00:29:24] And then it can be used for coding malware and things like that.

[00:29:31] So it's both good and bad.

[00:29:33] Like I said before, it makes everything better and worse at the same time.

[00:29:36] For sure.

[00:29:37] For sure.

[00:29:38] Well, speaking of robotics, you also wrote a couple of articles for Computerworld that you shared with me in the last couple of months.

[00:29:46] Your most recent article, I think, was from like a week and a half ago.

[00:29:49] And it's all about humanoid robots, which, you know, I saw the We, Robot event and, you know, I've seen Tesla events in years past where Elon Musk has been talking about the humanoid robot.

[00:30:02] And some of them have been a little disingenuous in previous years.

[00:30:06] This year seemed like a little bit more of a step forward.

[00:30:08] But still, you're kind of like, how autonomous are these things really?

[00:30:11] And do we really need this humanoid robotic future?

[00:30:15] Like, what problems is it solving actually?

[00:30:19] And yeah.

[00:30:20] So share a little bit of your thoughts there.

[00:30:23] I'm super against humanoid robots.

[00:30:25] And I ask questions about them that I don't hear anybody else asking.

[00:30:29] And so let's start with the state of where humanoid robots are.

[00:30:33] They're actually being used in actual factories.

[00:30:36] And I've got a list here.

[00:30:37] So Amazon is using a robot called Digit from Agility Robotics.

[00:30:42] And it's basically picking up bins and walking over, shuffling over, and putting them somewhere else.

[00:30:48] It's effectively a $100,000 robot.

[00:30:50] That's not the sticker price.

[00:30:51] But with training and everything else, it probably adds up to $100,000 to move bins 20 feet.

[00:30:57] Mercedes-Benz is collaborating with Apptronik to use its Apollo humanoid robot in its production lines.

[00:31:02] BMW is using the Figure 02 robot, which is one of the more advanced ones.

[00:31:08] Hyundai actually is the company that bought and owns Boston Dynamics.

[00:31:15] And it's using Atlas in factories.

[00:31:19] Tesla is using Optimus robots to sort batteries.

[00:31:22] And it goes on.

[00:31:24] They're showcasing these robots.

[00:31:26] And just to be clear, a humanoid robot is a robot that's roughly the size and shape of a human being that has a head, that has two arms, that has five fingers, that has knees, that has feet, and walks with bipedal locomotion and so on.

[00:31:43] And the companies that make these robots say that the reason for humanoid robots is because we operate in spaces designed for people.

[00:31:53] And if you make a robot that's the size and shape and bends in the same way as a person, they can operate in those spaces.

[00:31:59] For example, a humanoid robot can open a car door and get in the car and sit in the car and put on a seatbelt.

[00:32:07] It can sit in a chair.

[00:32:08] It can walk upstairs.

[00:32:10] It can do all kinds of things like that.

[00:32:13] And this reason seems to make sense.

[00:32:18] But it actually doesn't explain the drive, the desire to make a humanoid robot.

[00:32:24] For example, it doesn't explain the faces they put on robots.

[00:32:28] Why do they have to have a head that is shaped like a human head?

[00:32:31] Why does its neck turn the way a human neck turns?

[00:32:33] Why two arms?

[00:32:34] Why not four arms?

[00:32:35] Why not 20 fingers?

[00:32:37] And so the thing that freaks me out about humanoid robots is that their main purpose seems to be to delude people.

[00:32:52] So there have been a bunch of studies conducted that found that humanoid robots are often perceived as human-like,

[00:32:59] whereas non-humanoid robots are perceived as objects.

[00:33:01] Humanoid robots are perceived as social entities.

[00:33:05] Research in Finland found that when people make eye contact with a humanoid robot that has eyes,

[00:33:13] they elicit the psychological chemical thing that happens when we make eye contact with another person or with a dog, for example.

[00:33:24] Dogs and humans can make eye contact.

[00:33:26] And one of the reasons we bond with dogs is that there's a connection there.

[00:33:30] There's a psychological connection by both species.

[00:33:32] And when people look at humanoid robots, they have something like that happen, but not when they look at a non-humanoid robot.

[00:34:10] Right.

[00:34:11] They don't know what it is, but basically that's the main effect of a humanoid robot, to trick the human mind into thinking it's something other than what it is,

[00:34:20] which is in fact a machine, a device or a tool just like this.

[00:34:23] It's just a non-sentient, non-emotional, non-human, non-animal entity.

[00:34:29] It's a machine.

[00:34:31] And so I basically urge people to consider this whole idea.

[00:34:39] Well, like why do that?

[00:34:40] You can make much more efficient robots.

[00:34:42] And then the other –

[00:34:44] And we have been for a very long time, you point out.

[00:34:46] It's not like these things haven't existed.

[00:34:49] They do pretty well.

[00:34:51] So up until now.

[00:34:52] So in my most recent column on this subject: Elon Musk said, oh, it'll mow your lawn.

[00:34:58] It'll babysit your kids like at the Tesla event.

[00:35:00] Oh, yeah.

[00:35:01] The walking the dog one was my favorite.

[00:35:03] It was like, no, wait a minute.

[00:35:05] I don't want that.

[00:35:06] Yes, exactly.

[00:35:08] Well, I mean to me also, the mowing the lawn thing.

[00:35:11] So imagine a humanoid robot out there pushing a mower.

[00:35:15] We have robot mowers.

[00:35:16] They're like –

[00:35:17] Right.

[00:35:17] They just go around and trim your lawn.

[00:35:19] That's a perfectly designed thing.

[00:35:21] They're inexpensive.

[00:35:23] And then the walking the dog thing, that's the other point I make in this article.

[00:35:26] The reason we get a dog is so that we can walk them.

[00:35:29] We can take care of them.

[00:35:30] We have a relationship with them.

[00:35:32] That's the whole point.

[00:35:33] Connection, yeah.

[00:35:34] If you don't want to walk your dog, don't get a dog because that's what it is to own a dog.

[00:35:39] You walk them.

[00:35:40] It's great exercise.

[00:35:41] It's the whole thing.

[00:35:43] The idea that it's going to babysit your kids, it's like, yeah, no, don't have kids if you want to outsource the job of parenting to a robot.

[00:35:54] I have a couple of dogs.

[00:35:56] I wonder how they would be if there was a robot walking them.

[00:36:01] I'm not certain they would be okay with that.

[00:36:04] Yeah.

[00:36:04] They might just bark at it.

[00:36:06] Yeah, right.

[00:36:07] And so anyway, I just think we're going headlong into this thing.

[00:36:11] Science fiction has been telling us for a century.

[00:36:13] Yeah.

[00:36:14] By the way, the very first robot – well, that's not true.

[00:36:16] One of the very first robots in fiction was called Tik-Tok, and it was in the Wizard of Oz series.

[00:36:21] Oh, no kidding.

[00:36:22] Yeah, but for a century, we've learned that robots in the future will be these humanoid robots.

[00:36:28] And Data from Star Trek was like this – they had this whole courtroom episode about whether he was sentient or whether he was a real thing.

[00:36:36] And basically, they were really pushing the idea that he was just like a person and all that kind of stuff.

[00:36:40] But it's like, why are we doing this?

[00:36:42] Why do we want machines to join society and sort of like be and basically replace the people in our lives?

[00:36:49] I think a certain type of person wants that.

[00:36:52] And I think those people fall into two categories.

[00:36:55] One is the Mark Zuckerberg types who have a personality that's kind of like, I don't know, on the spectrum or something and just kind of like not super like into people.

[00:37:06] Always sort of think more logically and don't think super socially.

[00:37:11] And then the other people are like people like Elon Musk.

[00:37:14] Because if you create something that is essentially a human, then you are essentially a god.

[00:37:20] Yeah, yeah.

[00:37:21] I think there's a strong desire by some of the tech narcissists to be gods.

[00:37:27] Elon Musk and some other people are saying similar things:

[00:37:30] there will be more humanoid robots than people on the planet at some point.

[00:37:33] I'm sure they would love nothing more than that because they're the ones who create them.

[00:37:38] They control them.

[00:37:38] Exactly, exactly that.

[00:37:39] They're the slave army, right?

[00:37:41] And the whole thing just gives me the creeps and I want nothing to do with it.

[00:37:44] I like robotic everything.

[00:37:45] I want little robots all over the house.

[00:37:47] I want everything to be robotic, right?

[00:37:49] And I don't want to lift a finger and I don't want to do any work.

[00:37:52] But I don't want like a person walking around.

[00:37:54] Yeah, no.

[00:37:55] That would just –

[00:37:56] It's a machine, right?

[00:37:57] It's just –

[00:37:58] I just don't – I don't see that as near term as I think people like Elon Musk might assume.

[00:38:05] I can't imagine, during my lifetime, walking into the other room to see a humanoid robot standing there waiting to serve me food. I don't know what it would even do or why I would even need that.

[00:38:18] Like what is the purpose?

[00:38:20] You could build a kitchen counter that would make food very efficiently and wonderfully that has no face, nothing.

[00:38:27] Nothing that talks to you.

[00:38:28] Just make my breakfast and don't talk to me.

[00:38:32] And I want robotic doors and windows and I want a self-driving car and I want all that stuff.

[00:38:37] Those are all robots, right?

[00:38:39] Those are all robotic devices.

[00:38:40] I want lots and lots and lots of robotic devices.

[00:38:43] I don't want a robotic person.

[00:38:44] I want to reserve my humanity for other people and for pets and things like that.

[00:38:49] Yeah, yeah.

[00:38:50] Reserve the humanity.

[00:38:51] I like that.

[00:38:52] That's true.

[00:38:53] That's part of what – that is what makes us human and what is so beautiful about the human experience is that aspect.

[00:39:00] That aspect and getting rid of it and replacing it with a robot.

[00:39:04] Yeah.

[00:39:05] Let's not replace people if we can avoid it.

[00:39:08] That's all I'm saying.

[00:39:08] Yeah, indeed.

[00:39:10] All right.

[00:39:10] We're going to take a super quick break and then when we come back, we've got a few more news stories to talk about.

[00:39:15] That's coming up in a second.

[00:39:19] All right.

[00:39:20] Let's see here.

[00:39:21] What else do we have here?

[00:39:21] Anthropic posted on its blog, highlighting its company line about AI safety.

[00:39:27] This is definitely a topic that comes up very often on this show.

[00:39:29] I know Jeff Jarvis has some strong opinions about this, so I'm curious to hear your thoughts.

[00:39:35] The post discusses the need for targeted AI regulation.

[00:39:39] They say it's needed to prevent catastrophic risk, that sort of stuff.

[00:39:44] Anthropic, of course, as a company is known for being the kind of risk-aware AI company, responsible AI, throwing that in air quotes.

[00:39:55] But to a certain degree, kind of touching on what you were just talking about, it's the people in power who have the ability to shape the thing, because they say there's a need for regulation here.

[00:40:10] And: we're the ones to do it right.

[00:40:11] So let us regulate, because we know so much about this that we can inform you on how to do this the right way.

[00:40:20] And this post kind of seems to fall into that category.

[00:40:23] I don't know what your thoughts are.

[00:40:24] It's also the – so I have three basic points on this.

[00:40:29] One is the cynical point: it's about companies that are already established, and Anthropic is one of the more established AI companies.

[00:40:36] Yes.

[00:40:37] They love to call for regulation because it's both virtue signaling and it's also a barrier to entry for small startups that want to challenge them.

[00:40:48] So that's one thing to be aware of.

[00:40:51] And you see this all the time in tech in Silicon Valley.

[00:40:54] It's the big companies saying, oh, yeah, we should have strong regulations.

[00:40:57] That basically puts the smaller startups out of business, and the big companies can afford to pay for dealing with the regulation.

[00:41:03] That's one thing.

[00:41:04] The second thing is that, yeah, we do need regulations.

[00:41:06] I mean, the things that Anthropic is concerned with are things like people making pandemic diseases in their basement and nuclear bombs and who knows what.

[00:41:19] So any sort of roadblock to malicious activity using AI, which of course greatly magnifies human ability, would be nice.

[00:41:32] And the third thing is that the problem with laws is that the lawbreakers are not going to follow the laws, right?

[00:41:43] So it's like the North Koreans are not going to follow the law.

[00:41:50] The cyber criminals are not going to follow the law.

[00:41:52] So you basically hamper the law-abiding citizens and nobody else.

[00:41:57] But still, I still think it's better.

[00:41:59] Anything that slows the progress of malicious activity using AI is probably a good thing.

[00:42:05] Well, and then it also kind of calls into question – another topic that comes up often is kind of the open source aspect or avenue of artificial intelligence.

[00:42:16] Of course, with Meta, that's a large part of what it's doing.

[00:42:19] We've called it open-ish because it's –

[00:42:21] Yes.

[00:42:22] It's open adjacent.

[00:42:23] It's open adjacent, but it's more open than a lot of the other models out there and a lot of the other strategies and everything.

[00:42:30] I think it's funny, yeah.

[00:42:32] Something like this is kind of calling that into question, right?

[00:42:35] Exactly.

[00:42:35] If models can't be controlled – if they could not be stopped from being fine-tuned, for example, which is part of what Anthropic has written here, it would preclude them from being released at all.

[00:42:50] So then you're saying goodbye to open systems or open models entirely.

[00:42:55] And is that okay?

[00:42:57] I think it is.

[00:42:59] And I think it depends.

[00:43:00] I mean what happened with meta is really kind of funny actually.

[00:43:03] It's funny in a dark and scary way.

[00:43:05] Basically, they have Llama.

[00:43:07] They say it's open source.

[00:43:08] It's not really open source.

[00:43:10] Basically, it's available for anyone to download, but they have terms and conditions.

[00:43:15] In those terms and conditions, they have a bunch of things you're not supposed to use it for, including military uses.

[00:43:20] Okay.

[00:43:20] So the Chinese military just downloaded it and started using it for military uses.

[00:43:26] And so they don't give a rat's hiney about terms and conditions at the PLA, right?

[00:43:33] It just – it's a powerful tool and they're using it for – you know, to –

[00:43:37] Yeah.

[00:43:38] And it's accessible.

[00:43:38] They're using it to prepare for their war against the United States over Taiwan, right?

[00:43:42] And so quickly Meta said, oh, now we're partnering with the Pentagon to let them use it for military uses.

[00:43:50] And so the whole thing about don't use it for military uses is just thrown out the window.

[00:43:55] Everybody's now going to be using Llama for military uses.

[00:43:57] So what does it mean to have terms and conditions?

[00:44:00] And what does it mean to have something you call open source?

[00:44:04] None of it means much really at all.

[00:44:07] And I think what really should exist, if you are serious about preventing powerful AI tools from being used for dangerous military and other uses, is some way to make those uses impossible.

[00:44:24] There has to be something that locks it down.

[00:44:28] Law won't stop the law breakers and terms of service won't stop anybody.

[00:44:33] So I think we need another approach to this that actually prevents the misuse of powerful AI tools.

[00:44:43] On the other hand, there are thousands of LLMs and they're all getting better all the time.

[00:44:48] And so we're heading for a world where AI is going to be perfectly ubiquitous, available to everyone, the good, bad, the ugly.

[00:44:56] And there's really not a whole lot that laws, terms of service, or anything else can do to stop its use.

[00:45:03] No, it feels like the cat is already out of the bag in that regard.

[00:45:09] And yeah, so that's interesting.

[00:45:12] Related to this regulation discussion: in the EU, they've got the AI Act that's set to come into effect in January and roll out over the course of the next two years.

[00:45:25] And I just thought this was interesting.

[00:45:28] There's a tool called the LLM Checker that's used to test models for compliance with the AI Act.

[00:45:36] As we are only a couple of months away at this point, it's just right around the corner.

[00:45:41] And the tool showed just how poorly many of the models are actually adhering to it at this stage.

[00:45:48] No single major model would make the cut if it were enforced right now.

[00:45:54] Yeah.

[00:45:55] Yeah.

[00:45:56] Not performing very well according to that anyways.

[00:45:58] Yeah.

[00:45:59] It just seems like that kind of regulation is, first of all, not the right tool for the job.

[00:46:05] I don't know what is, but not that.

[00:46:07] Right.

[00:46:08] I guess that's a big question.

[00:46:09] Yeah.

[00:46:10] Yeah.

[00:46:10] I spent a lot of time in Europe and it's like some of the regulations around things are just so onerous and so burdensome to just ordinary users.

[00:46:19] And this is not about AI.

[00:46:20] Like for every single website you go to, you have to agree to the cookies or disagree with the cookies and all that kind of stuff.

[00:46:30] And it's like, you know, I open a thousand web pages a day, you know, it's like really a problem.

[00:46:34] And then there's all these other things.

[00:46:36] So there's a bunch of US sites that you can't see in Europe unless you have a, whatchamacallit, ExpressVPN type thing.

[00:46:45] And so I can't imagine the damage they're going to do.

[00:46:48] I mean, one thing that could happen is that a huge number of AI tools are just going to basically not comply.

[00:46:54] Europeans won't be able to use them.

[00:46:55] And so Europeans will be the only people in the world not making the advancements that are possible with AI tools.

[00:47:01] And so that's one possibility.

[00:47:04] Another one is that the tools will comply so that they can gain access to the European market.

[00:47:10] And yet another one, the best case scenario for Europe, is that the Act basically becomes a barrier to entry for foreign AI tools in Europe,

[00:47:22] and European-made AI tools are favored.

[00:47:25] I think that might be nice for European AI companies.

[00:47:29] France is actually in the lead in Europe for AI funding and for the startup scene.

[00:47:36] So that might be good for France and its AI industry.

[00:47:38] So I don't know.

[00:47:39] I don't know where that's going.

[00:47:41] It's going to be interesting to see what happens, but I also don't know how the Europeans would stop its use.

[00:47:49] I mean, they can stop it for universities and law abiding organizations, but.

[00:47:54] Right.

[00:47:54] But you can't stop it in all the places, as we talked about in the previous story as well.

[00:48:00] There's only so much.

[00:48:01] Yeah.

[00:48:02] Although you've noticed, I know you're a Perplexity user.

[00:48:04] You've noticed that they now don't allow you to use VPNs.

[00:48:08] Oh.

[00:48:09] Have you, have you detected that?

[00:48:10] No, because I'm usually using it on my home computer.

[00:48:13] I'm not running a VPN on here all the time.

[00:48:15] So I haven't noticed that.

[00:48:16] So when you're running a VPN, it says no.

[00:48:19] It says no.

[00:48:20] Oh, interesting.

[00:48:21] You can't use a VPN.

[00:48:22] So I turn off the VPN, which is a problem, right?

[00:48:25] I don't want to turn off my VPN.

[00:48:27] Mm-hmm.

[00:48:28] But like it's.

[00:48:29] Especially since you're doing a lot of traveling.

[00:48:31] Yeah.

[00:48:31] Yeah.

[00:48:32] That's, that's so interesting.

[00:48:33] Yeah.

[00:48:34] And in your estimation, is that for kind of what we're talking about

[00:48:38] here, to adhere to some of these rules? So that, hey, we're not

[00:48:43] allowing people to get around your regulation.

[00:48:46] I don't know why Perplexity is doing it.

[00:48:47] I mean, I guess I assumed that they were just doing it on their own because

[00:48:54] they know that foreign actors were using it for malicious purposes or whatever.

[00:48:58] And so they just set that up on their own.

[00:49:02] But I don't know that.

[00:49:05] I really don't know what drove them to do that.

[00:49:08] Yeah.

[00:49:09] Um, that is so interesting.

[00:49:12] I'm like trying to.

[00:49:13] It's getting harder and harder to use VPNs.

[00:49:15] It is, you know, Google will challenge you like every 10 minutes and make you find the

[00:49:21] crosswalks and the motorcycles, which drives me nuts.

[00:49:25] And, uh, well, you know, an AI can solve that for us.

It goes right past that, right?

[00:49:29] Yeah.

[00:49:29] AI is way better at proving it's human than humans are.

[00:49:32] Exactly.

[00:49:34] So, so true.

[00:49:36] Um, a couple more stories here.

[00:49:38] Amazon, continuing to develop its AI-powered Alexa, has now reportedly postponed its launch until

[00:49:44] 2025 due to technical challenges along the way.

[00:49:49] And really just the greatest hits of AI challenges: slow response times, hallucinations, difficulty

[00:49:56] with basic tasks that Alexa had previously handled well, which just reminds me of the

[00:50:02] Assistant-to-Gemini transition, which has been kind of noisy and weird.

[00:50:07] Yeah.

[00:50:08] It's like, no, we want you to use this new thing, but it can't do so much of what Assistant

[00:50:12] could do before.

[00:50:13] Yeah.

[00:50:14] And I think, you know, they're talking about Alexa eventually

[00:50:18] becoming agentic, which means you basically would tell it, I really want

[00:50:24] to take my spouse on a nice date.

[00:50:27] And I have no time.

[00:50:29] Can you just set it up, find a great thing to do, find a restaurant, make

[00:50:34] the reservation, pay for it with a credit card, without even

[00:50:39] telling it the details? Just give it the goal and have it go do it.

[00:50:42] Right.

[00:50:43] So that's, you know, years away, but that's probably where that sort of thing is going.

[00:50:46] It also should have characteristics of the swarming technology that I told you about.

[00:50:52] There are multiple tools, by the way, that already exist that are semi-swarming technologies.

[00:50:57] They basically allow some coordination between different agents.

[00:51:00] And so I think that for Alexa, that's ultimately how it should work.

[00:51:07] I mean, basically, you'll give it

[00:51:12] some question.

[00:51:13] It'll go, that's perfect for Wolfram Alpha.

[00:51:16] And it'll go to the Wolfram Alpha agent and just use Wolfram Alpha pixie dust to come back

[00:51:21] with the answer.

[00:51:22] Or it'll decide, actually, that's something that would be best handled

[00:51:26] by a search of the current news. But to have an all-purpose thing that's

[00:51:33] basically ChatGPT, but not as good?

[00:51:35] That's not going to end well for Alexa users.
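That routing idea can be sketched in a few lines. This is a hypothetical toy, not how Alexa or any shipping assistant works: a real multi-agent system would use an LLM to classify the query and hand it off, while here simple keyword matching stands in for that step, and the agent names are made up.

```python
# Toy dispatcher: pick a specialist "agent" per query.
# Keyword routing stands in for the LLM-based classification
# a real swarm-style system would use.

def math_agent(query):
    return "computed: " + query

def news_agent(query):
    return "headlines for: " + query

def general_agent(query):
    return "chat answer: " + query

# Each specialist is registered with the cues that should trigger it.
AGENTS = [
    (("calculate", "solve", "integrate"), math_agent),
    (("news", "today", "latest"), news_agent),
]

def route(query):
    q = query.lower()
    for keywords, agent in AGENTS:
        if any(k in q for k in keywords):
            return agent(query)
    return general_agent(query)  # all-purpose fallback

print(route("calculate 17 * 23"))  # routed to the math agent
```

The design point is the one made above: the all-purpose fallback is the weakest path, and the value comes from handing each query to the specialist best suited to it.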

[00:51:39] Yeah.

[00:51:40] You know, in that world that wasn't that long ago, but at this

[00:51:45] point seems a little distant, of I've got Google Homes in every room or I've got

[00:51:49] Amazon Echos in every room.

[00:51:51] Has that persisted for you?

[00:51:53] I mean, you do a lot of travel and you're very much a digital nomad, as you

[00:51:58] are known for.

[00:51:59] So I doubt you're bringing an Echo along with you everywhere you go at this point.

[00:52:03] We used to.

[00:52:04] Is it phased out or do you?

[00:52:05] No, we don't anymore, but we traveled, I think, for like five

[00:52:10] years with an Amazon Echo.

[00:52:11] Oh, okay.

[00:52:12] And when we got a new one, we gave our old one to some friends

[00:52:19] in Italy. That was before the Amazon Echo was even available in Italy.

[00:52:23] So they were the only people in Italy with an Amazon Echo. But yeah, we used to travel

[00:52:26] with one.

[00:52:27] Nowadays we don't. I mostly use Perplexity

[00:52:33] for those things.

[00:52:34] I have the app on my iPhone and I just use the voice mode: you press

[00:52:38] and hold and ask the query.

[00:52:40] And it comes back with an answer that's usually better than anything the Echo would give me.

[00:52:44] So we don't anymore. It's another thing we could eliminate

[00:52:48] that we don't have to carry the weight of, you know?

[00:52:51] Yeah.

[00:52:52] Yeah.

[00:52:52] Indeed.

[00:52:53] Yeah.

[00:52:53] I was curious about that because we do have the Google

[00:52:58] Homes in every room, and our usage of them has just gone way down, primarily

[00:53:04] because they stopped being as useful as they used to be.

[00:53:07] They used to be a lot more accurate, and I keep hoping or wondering, is there going to

[00:53:12] be a next wave of, oh, okay,

[00:53:13] now we've brought in Gemini and Gemini does these things better? Which I don't believe

[00:53:18] is necessarily the case right now.

[00:53:20] But will that happen, and will that come back?

[00:53:23] Or are we just so used to the device we already have?

[00:53:28] I mean, I would guess that half the Amazon Echos

[00:53:32] that are out there are just there because they just keep on working, and people use them

[00:53:38] for oven timers.

[00:53:39] Right.

[00:53:39] Totally.

[00:53:40] I'm boiling some eggs; just give me an alarm when the eggs are done.

[00:53:44] But I use my Apple Watch for that.

[00:53:48] Because I do a lot of bread baking, I use timers all the time, and I just

[00:53:52] use my Apple Watch for that.

[00:53:53] And so I think it's wearables. Right now it's like the smartphone is

[00:54:00] replacing the Echo appliance, but I think wearables will, especially glasses.

[00:54:03] So when I'm wearing Ray-Ban Meta glasses... I don't currently own Ray-Ban Meta glasses.

[00:54:09] I owned them for a while and currently don't. But they're way better than Alexa

[00:54:14] for random queries.

[00:54:16] Because they're just on you.

[00:54:18] You don't have to be in the same room or whatever.

[00:54:20] Yeah.

[00:54:20] Yeah.

[00:54:21] Yeah.

[00:54:21] I still have not checked out the Ray-Bans.

[00:54:23] I'm super curious.

[00:54:25] You'd love them.

[00:54:26] I mean, yeah.

[00:54:27] You should wait for the next one.

[00:54:28] That's what I'm doing.

[00:54:29] That's exactly what I was going to say.

[00:54:29] Yeah.

[00:54:30] I think at this point, I'm probably just, we'll wait for the next one and see what it's

[00:54:34] all about.

[00:54:34] But I've heard, heard really great things.

[00:54:36] And I do agree with you.

[00:54:37] I do think that the kind of distant end point for a lot of this is something

[00:54:43] along those lines.

[00:54:44] I don't know if it's that everybody is wearing glasses, because there are a lot of people

[00:54:47] that'll be resistant to just that fact.

[00:54:49] But who knows where that technology leads?

[00:54:52] You know, I think it will.

[00:54:54] I mean, of course there will be a million different options, but I think

[00:54:59] the main option will definitely be glasses.

[00:55:00] I mean, one data point. This is the problem with the Humane Pin.

[00:55:04] This is the problem with all these things that pin to your clothes, or some non-glasses

[00:55:09] AI wearable: this is a new behavior.

[00:55:13] Whereas the number of people who wear glasses every day is something

[00:55:16] like 5 billion people in the world already, right?

[00:55:21] People wear sunglasses.

[00:55:22] They wear prescription glasses.

[00:55:23] And I know people who wear Ray-Ban Meta glasses who have perfect vision.

[00:55:27] They don't have a prescription.

[00:55:28] They just wear the glasses because they want that thing.

[00:55:30] Yeah.

[00:55:31] The problem with the AI pins of the world is you'll never do better

[00:55:37] than something that's right in front of your eyes, right?

[00:55:41] For AR, and right next to your ears.

[00:55:43] You can't do better than that.

[00:55:46] If you've got a camera and you're doing multimodal AI, then I'm

[00:55:51] guessing that the next Ray-Ban Meta glasses will use a multimodal AI that uses

[00:55:58] video.

[00:55:59] Right now, they'll take a picture and use that for the AI.

[00:56:02] I think in the future it'll be just video, right?

[00:56:04] Processing the video like the Google demo they did recently. Well,

[00:56:09] both OpenAI and Google did videos about multimodal AI.

[00:56:13] But basically, you turn your head toward whatever your attention

[00:56:22] is on: your gaze, which way your face is pointed.

[00:56:29] Right.

[00:56:30] The thing you're paying attention to is the thing you're looking at, the thing

[00:56:34] you're pointing your head toward, and the glasses go with you on that journey, right?

[00:56:37] For sure.

[00:56:38] And you can detect your gaze.

[00:56:39] And so there's no extra step there.

[00:56:41] No pin or watch or anything will ever be better than that.

[00:56:45] So glasses are the perfect form factor for AI wearables, I think.

[00:56:51] And I don't think there's any way to get away from that.

[00:56:54] Yep.

[00:56:55] Yep.

[00:56:55] And we're seeing more development on the miniaturization, which has long

[00:56:59] been the real challenge there.

[00:57:02] And yeah, the battery technology.

[00:57:04] That's a really big deal.

[00:57:06] Yeah.

[00:57:07] Be curious to see where that all leads.

[00:57:09] Mike, it's always a pleasure to get the chance to talk to you about all this stuff

[00:57:13] and just hang out in general, and live vicariously through you staring at a beach

[00:57:19] right now.

[00:57:19] Every time I talk to you, you're in a different part of the world.

[00:57:22] That's the way I want to live.

[00:57:25] Don't ever catch me.

[00:57:26] Yeah, that's right.

[00:57:27] Yeah.

[00:57:28] Well, Mike, you're awesome to hang out with. machinesociety.ai.

[00:57:32] Also an awesome writer. For those of you who are not following Mike's work, definitely

[00:57:37] head over there, machinesociety.ai, and subscribe to the newsletter.

[00:57:41] And of course your work can be found in a lot of other places, but if you head there,

[00:57:46] you can find all those places through that location.

[00:57:50] Yeah.

[00:57:50] Right on.

[00:57:51] Thank you, Mike.

[00:57:51] This has been a lot of fun.

[00:57:52] Thank you.

[00:57:53] Yeah.

[00:57:54] Yeah.

[00:57:54] And everybody who's watching the show, really the main thing you need to know

[00:57:58] about what we're doing here is just go to aiinside.show.

[00:58:02] That is our place on the web where you can find all of the different ways to subscribe.

[00:58:09] I'm vamping while I pull it up.

[00:58:12] Ways to subscribe.

[00:58:13] We've got all of our episodes, audio and video.

[00:58:16] Everything is neatly organized and color-coded to the show.

[00:58:20] It took a while to get it all together, but anyway, that's where you go

[00:58:25] to subscribe.

[00:58:26] And if you want to go a little bit deeper, you can support us on Patreon.

[00:58:30] That's patreon.com slash aiinsideshow.

[00:58:33] And we have a lot of nice perks.

[00:58:35] If you do that, you get ad-free shows, a Discord community, and regular hangouts.

[00:58:40] If you're at the executive producer level, you get an AI Inside t-shirt as well as being

[00:58:47] called out at the end of every show, like Dr. Dude, Jeffrey Maricini, WPVM 103.7 in Asheville,

[00:58:54] North Carolina, Paul Lang, and Ryan Newell, our five amazing executive producers.

[00:59:01] We just can't thank you enough for your support of what we do each and every week here on AI

[00:59:07] Inside.

[00:59:08] Jeff Jarvis, of course, returning once again next Wednesday.

[00:59:12] We'll have another new episode then.

[00:59:13] Until then, everybody take care of yourselves and we'll see you next time on another episode

[00:59:19] of AI Inside.

[00:59:20] Bye, everybody.