In this episode, Jason Howell and Jeff Jarvis discuss the Rabbit R1 AI device with guest Mark Spoonauer from Tom's Guide, delving into its capabilities, design, and potential use cases. Also, Microsoft's VASA-1 model, Meta's slew of AI announcements, the closure of Oxford's Future of Humanity Institute, and the appointment of Paul Christiano to the US AI Safety Institute.
Consider donating to the AI Inside Patreon: http://www.patreon.com/aiinsideshow
INTERVIEW WITH MARK SPOONAUER, EIC OF TOM'S GUIDE
- First impressions of the Rabbit R1 AI device
- Design and form factor of the Rabbit R1
- Capabilities and limitations of the Rabbit R1
- Potential use cases for the Rabbit R1
- Comparison with other AI devices like Meta's Ray-Ban glasses
- Pricing and availability of the Rabbit R1
- Concerns about the Rabbit R1 being a companion device rather than a phone replacement
- Social implications and potential issues with the Rabbit R1
NEWS
- The Humane AI Pin bad-review backlash
- AI wearables like Limitless
- Limitations of audio-only AI interfaces like the IYO One
- Closure of Oxford's Future of Humanity Institute and resignation of Nick Bostrom
- Appointment of Paul Christiano as head of US AI Safety Institute at NIST
- Microsoft's new VASA-1 AI model for animating faces from photos and audio
- Google's restructuring, combining Android and hardware teams for AI integration
- YouTube's new "Ask" feature for premium subscribers to interact with videos using AI
- Meta's announcements: multimodal AI support for Ray-Ban glasses, AI assistant integration, and Llama 3 language model
Hosted on Acast. See acast.com/privacy for more information.
This is AI Inside, Episode 14, recorded Wednesday, April 24, 2024. Rabbit R1, first look. This episode of AI Inside is made possible by our amazing patrons at patreon.com/aiinsideshow. If you like what you hear, head on over and support us directly. And thank you for making independent podcasting possible. Hello, everybody, and welcome to another episode of AI Inside. This is the show where we talk about the AI that is apparently inside everything.
It really feels that way. Anyways, I'm one of your hosts, Jason Howell, joined as almost always. Last week was the only time that Jeff hasn't been here, but Jeff is back. Jeff Jarvis, how are you doing, Jeff? Hey, boss, good to see you again. Good to see you too. Did you have anything exciting happening last week?
I was at a journalism conference. But I got to have breakfast tacos in Austin, so that's always good. That's nice.
That sounds delicious, especially because I'm hungry for tacos. Cool. Well, it's great to have you back.
We've got some really great stuff to talk about this week. So I'll just real quick just say, hey, thank you, Patreon supporters for enabling us to do all of what we do. Patreon.com/AIInsideshow. You could have your name called out at the top of the show in gratitude like Brian Jagger or Yagger.
I'm not really quite sure, so I'll say them both. Regardless, Brian, you are awesome. Thank you for your support. Thank you for enabling us to do this each and every week. All right, so let's get into it because we have a guest today.
We only have him for a certain period of time. And I really wanted to pull someone in, because yesterday was another one of those AI hardware moments, and this is hardware that we've talked about on this show plenty of times before: the Rabbit R1. I actually have one on order; going to get it sometime in the summer, I think.
But I've been using Perplexity, kind of tied to the membership or the subscription of that so far, and really liking Perplexity. Anyways, Mark Spoonauer is the editor in chief at Tom's Guide and was at the event last night, saw a lot of familiar faces that I know at the event, and has the Rabbit R1 in hand. How you doing, Mark? Doing well. Thank you. Wow.
This is it right here. How do you feel? How do you feel having the follow-up to the disastrous Humane AI Pin in your hand?
I feel like I'm living in the future and the past at the same time because of the retro design. Really? Yeah. That's a great way to put it. I mean, because that hardware, it's unique looking, but it's also very retro looking. It is.
Yeah. And I know they did this on purpose, because I think they're trying to appeal to a younger demographic, but I think the design is actually pretty compelling, like, out of the box. And the other thing that surprised me is just how light it is. I think we're all used to smartphones.
It's a 2.8-inch display, so it's not like a huge screen, but it's there, which obviously gives you a big advantage over the Humane AI Pin: you don't have to project anything. It's just right there. Geez. So how do you like it? What's your first review? My first impression is that I like the vision of what they're trying to accomplish, in terms of maybe relying on our phones a little bit less to get stuff done. So for example, if I wanted to call an Uber and just say, order a car for me from here to here, I don't have to fire up my phone. The idea is to maybe use apps a little bit less and focus more on the tasks. So I think there's definitely a play on words with TaskRabbit. Just imagine behind the scenes you have all these AI agents working on your behalf and working with the apps that we use every day, whether it's DoorDash or Uber or Spotify, to do things like play music, order food, get from here to there. And yeah, I think if it works seamlessly, then the idea is that you don't have to use your phone as often. But they're definitely saying that this is not going to replace your phone.
It's more of a companion. Hmm, which can be a little troubling because, OK, so I am a technology reviewer. I've reviewed many phones. Right now in my life, I have three phones that I carry with me everywhere. Let me tell you, it's not always enjoyable having to find a different pocket for each device so that this phone doesn't scratch the display of that phone. It's not always fun having to carry around more than one device.
So with that, do you foresee that as being kind of a potential issue here, if it's meant to not replace the phone but rather be a companion to it? I do, just because even though it's light, it's still another gadget that, if you wind up buying it, you would have to take around with you and charge, by the way. So I've only been using it for a day so far. But especially when you're using the camera, the battery can get eaten up pretty quickly.
Yeah. So you want to charge it maybe at least a couple of times a day, depending on what you're using it for. But I don't think it's a coincidence that last night the CEO was demoing, or just showing off and teasing, some other form factors that this technology could take, including what looked like a bracelet or wearable. So to your point, I think over time this type of technology is probably going to integrate into wearable devices, similar to what Ray-Ban and Meta are doing with their sunglasses. Do you think the glasses are a better form factor in the long run, or the short run I should say, to immediately get you into it? I think there are some advantages in terms of having the display, right, and having that immediate feedback. With the glasses, there's no overlay, for example; it's very first generation, it's all audio cues and things like that. So in that respect, I think they're ahead because there's a built-in display. But I still feel like there are some missing features. So for example, you can generate a bunch of images using Midjourney, which is great. But then what do you do with it after that? You can't say, OK, now email these to my contact or whatever.
When I tried to email something, it was like, I don't have that functionality. Right. So I feel like they have to figure out what people want to use this technology for, and they seem very receptive to the feedback. But I think the question is, what do people want it to do out of the box?
And is it going to feel limiting in that respect? One impression I have from what you've already said, Mark, is that when we talk about AI, we go through phases, as I understand them: from analytical to generative to agentic. And it sounds like this is going over the line now into agentic. The use of this is more about agentry than it is about search or chat.
Is that fair? One of the more impressive things during the demo last night is that he pointed the Rabbit at the desk or the podium in front of him. And there was a sheet of paper there that looked like a spreadsheet, and it digitized that, and he said, while you're digitizing this, move this column from here to there. So imagine just having this by your side to manipulate documents or summarize things.
And the same thing goes for note-taking in meetings and things like that. So I'm still scratching the surface of what this thing can do, so it's hard for me to pass judgment just yet. But I think that's part of the issue: it promises so much.
Is it going to deliver? Mm hmm. Yeah, that really is where the rubber meets the road right now in a lot of this AI hardware, which actually, when we're done with this interview, we're going to talk about a few other things. Because I see, like, yeah, this is kind of the early stage of where we're at with AI. There's almost this perception that we've got to be in there early.
We've got to figure out what this actually is, so that five years from now, when it's a huge deal, we were there first. But, you know, again and again, what that means is that as consumers, we're looking at these things and we're like, OK, yeah, that's all fine and good, but I've got my phone. That seems to always be the thing that we return to: yeah, but I've got a phone, and in many ways, especially right now, my phone does things faster and easier, it's more familiar, and it doesn't require a second device.
I mean, they've got a lot to prove about this form factor. I agree, especially when you consider what's happening with iOS 18 in particular, and the fact that Siri is supposed to get smarter. I mean, there are a lot of cynics out there who are saying that the reason Humane AI and Rabbit exist is that they basically just want to be bought and folded into the technologies that Google and Apple are already working on. But yeah, if you're a betting person, you would say that when the next Siri or iOS 18 comes out, it'll be able to do a lot of these things. And the same thing with Google Gemini as Android 15 materializes. And Google I/O is always right around the corner. So you know that Apple and Google are looking at the feature sheet of what Rabbit can do and saying, which of these things can just become a feature of a phone as opposed to a dedicated device? Are there any of those things, and maybe this is unfair since you've only had it for a few hours, where this is better than the phone, where you can see picking this out of your pocket instead of the phone? Can you see any of that kind of use case? Not yet, but I also think there are a lot of things that they're promising on the horizon, features that aren't here yet, in terms of planning things like trips. And I think probably the pocketability and portability is something that maybe the younger generation would appreciate. Where maybe I don't need my phone when I'm going out and I can just use this for everything else.
It does have an LTE connection built in, but I don't believe that it can make phone calls, and you can't text on it. So it's definitely not a communication device. It's more about getting stuff done. So I would say it's definitely pretty limited right now.
But that doesn't mean it can't get better over time. Yeah. Is there any social angle to this? I mean, I think there is, to the degree that there's going to be a lot of interest, and there has been, on TikTok and other platforms, especially because people want to know what this can do. Up until now, it's been mostly about the design. Now we're finally figuring out, OK, so here are the features and here's how they work. I do give the CEO credit, because during the demo last night he said that some of these features are in early stages, especially things like DoorDash, where the idea is that you order something and you say, order my favorite blank and have it delivered here now.
And it would go through all the steps for you. But there was a delay; there was definitely some latency there. I think one of the big questions that has been raised online is about how Rabbit does this: they're saying that they're not using open APIs. So in a way, they're using these agents to learn the interfaces of all of these apps and work on your behalf. And I wonder if the likes of DoorDash and Spotify and Uber are going to figure out a way to shut that down.
I don't know if they can, but are they going to try to close that loophole, because it's not a quote-unquote official partnership? So it's the same problem we have in media: everybody wants to be the destination. Everybody wants to be the brand.
Everybody wants to own the relationship, which is offensive in itself. But yeah, agents stand in the way. Exactly.
Or agents get middlemen out of the way. But it's more. That's the hope.
I mean, and again, there's still stuff that I want to test out. I think the vision feature in particular is pretty cool. I pointed it at a plant at home, and it correctly identified that. But then I pulled out my iPhone 14 Pro Max and I said, what iPhone is this? And it said it's the iPhone 13, which is the latest iPhone. Well, that's not right. So it gets some things right and some things not, and I think it's going to take some time. You also told us before we got on that when you were thirsty, you asked it a question. That's right. Yeah. So you can double-press this side button here, which basically turns the camera on.
So if I hit that, you can see that the camera should swivel around. Hang on. Let me just make sure. So that's the vision. Yes.
Yeah. So that's the vision feature in action. And then you can point it at something and use the so-called press-to-talk button. In this case, I pointed it at a home bar and I said, what can you make out of these bottles? And it not only identified the alcohol that was on the shelf, but said, here are the different cocktails that you can make. Again, I thought that was a pretty cool use case.
That's fun. Yeah. Yeah.
Did you make it? Yeah, right. That's a good question.
You could invent the new Rabbit cocktail. Yeah, maybe. Totally.
Maybe at five. There you go. Yeah. Yeah. Exactly.
Maybe a little bit later. I mean, you know, the promise of a fridge knowing all of the items that you have and telling you, this is the recipe you need to make. I mean, maybe a device like this can actually deliver on that promise. I feel like we've heard that promise for so many years and it's never quite fully delivered, which I think at the end of the day is the real crux of these devices.
Is it going to deliver on the promises that they're making? Kind of a final note, because I know we have to let you go: from the presentation and what you saw, did you get the feeling, the sense in the room, that the demos as they were delivering them were impressive in their own right? Obviously, a lot of things can fail when you're there in a live environment.
Sometimes on a stage, you know, they kind of scale it down because they don't want to push the system too hard and show their cards too early. I mean, what was the overall temperature of the announcement with the other people you saw there? I mean, I think it was pretty positive, but there was a mix of people there: press, but also early adopters who wanted to be the first to get the device.
So they were definitely more like cheerleading. Super excited. Yeah, right. But as far as the demos went and how that all went, I would give it like a B-plus, because there were some connection issues with the hotel Wi-Fi, and the CEO was saying, that's not us. And then he also pointed out that the device itself has a 4G LTE connection as opposed to 5G. But I don't think that's a huge deal, given the limited amount of data that's going back and forth.
But I think most people who are going to buy this will probably not put in a SIM card but just pair it with their phone in personal hotspot mode. That's what I would do. So, one related question and one less related question: at the end of Marques's Humane review, he said no one should buy this. So based on your limited exposure to the Rabbit R1, who, if anyone, should buy it? It's a good question.
I think it really comes down to, well, first of all, I'm in a wait-and-see approach, because I really do feel like AI is about to explode even more on the phone. So a lot of the features that have been shown off, I think, could very well make it into our handsets in the not too distant future. But in terms of the target audience out there, I would say younger people who want a cool new gadget and want to experiment with AI. I mean, it's affordable enough where I feel like you could potentially do that if you have disposable income. And the other potential untapped market is those who are older. So for example, my mom: if she needs an Uber ride somewhere, it would be much easier for her to just talk to a gadget and say, come pick me up and take me to blank, without having to go through multiple screens in an app.
And the same thing goes with ordering food and other tasks. So I wouldn't write off the potential for an older audience. Interesting. You know, meanwhile, that's coupled with the fact that this design is definitely skewing young.
As far as that's concerned, you know, there will have to be a balance over time. Mark Spoonauer is the editor in chief at Tom's Guide. Mark, I really appreciate you carving out a few minutes for us this morning to tell us about your impressions.
When do you expect your review at Tom's Guide to go up? Sure. So first impressions will be up tomorrow, in terms of early pros and cons.
And I would say by early next week, we'll have a full review. Awesome. Right on. Well, we'll be following. We'll be watching, and thank you again, Tom. Sorry, not Tom.
Mark. I just really appreciate having you. Yeah, I know.
Right. Mark's Guide. It's your guide, right? It's Mark's Guide.
Yeah. Anyways, thank you, Mark. It's a pleasure.
It's great to see you again. All right. Thank you.
We'll see you soon. Take care. All right. Take care. All right. So you mentioned, or we mentioned, during that conversation, Jeff, that there was the Humane AI Pin craziness that happened last week.
You were out last week; we had more of an interview-focused show, so we didn't really get a chance to talk about it. But I'm curious to know what you think. I mean, I watched the review and I definitely have thoughts. And not only Marques's review, by the way; anyone who had a review, I feel like they were all on a very similar page here. Marques was not out on an island.
What did you think? So I watched Marques's. I respect Marques immensely.
We all do. He's always fair and thorough and does his research and thinks things through. So I really respect what he said. And he said, don't buy this thing. It didn't work. And I think he had the receipts for where it didn't work.
Where it just didn't make sense. He gave them respect for trying to do something innovative, and I think that's worthy. And I can see some early adopters wanting it no matter what. You know, I still have my old Apple tablet from, you know, what do we call it? Jesus, I can't remember the name of the old thing.
The iPad? No, no, no, no, no, the original, way back when. Oh, what do you call it? Yeah, right. I can't remember it now. And I have it because, well, I'm glad I kept it. I'm older.
I should remember the name of my own device. So, you know, I can understand some people saying, well, I want to try it because it's fun. I think he was extremely fair with it. I think he got it right. He saved me from wanting to buy it. And I think that's important for a reviewer: to help you make your decisions informed. And I was pretty shocked at the negative response that he got as a result, because I thought that the days of expecting tech reviewers to be fanboys and stans were over. But apparently they're not; I see this happen with, you know, if I dare criticize Tesla, some people are going to come after me.
So I guess we're still in stan mode to a certain extent with some brands and devices. But it just doesn't work as a device. That was clear. It hadn't been thought through. And Benedict Evans said, you know, if you're going to charge for a device and say that it does things, and it doesn't do them well, and you took the money, then it's fair game.
What do you think? Well, I mean, I support Marques through this. I think at the end of the day, the job of a tech reviewer who is doing their job the correct way is to live with and use a device as they normally would, to get a really good sense of what it is capable of, what it doesn't do well, and whether it delivers on its promises. And I think Marques did that and demonstrated it incredibly well. It lined up with what everyone else seemed to be saying about this device. And, you know, he's got years' worth of receipts to back it up as far as credibility and trust are concerned.
So I mean, I fully support Marques on this. If I was reviewing that device and I bumped up against the same issues, I would have said the same thing. Now, he may have gone a little overboard as far as saying it's the worst product he's ever reviewed, but he also might be telling the damn truth; it might actually be that. And I would give him more credit in that statement than I would someone I don't know. Because he's got a lot riding on the line.
He's got a whole empire at this point. So making a superfluous statement and not backing it up would be damaging for his brand. I just don't see any of that.
Yeah, I don't see it either. And it is interesting, you know, what you were saying about the Rabbit: these things are new and we see a whole rush of them. And nobody's yet figured out how AI ties to a device, or whether a device is needed.
These are unanswered questions. And the problem is that you want to put a lot of R&D into a device, which means you've got to try to sell it and keep going.
And if you haven't been able to figure it all out yet, well, you may not want to buy any AI device. Well, yeah, and that might be the real key thing here at the end of the day: it is early. We're about to talk about a couple of devices here that you've probably never heard of until suddenly they stumble onto the scene, and we're going to see a stream of these, probably partially for the reason that Mark was talking about, that a lot of these companies want to be there early. Their ultimate ambition is probably not that this little pin is going to be around 20 years from now. I mean, maybe some of them have that ambition, but I'm sure a lot of them are like, if we're there early enough and we prove ourselves enough, we can have a really big payday when Google comes along and says, OK, we want to do something similar to that; it's easier for us to buy you and integrate than it is for us to start from square one.
I'm sure a lot of them have that plan. I could be wrong; you know, I think many of them will. What's also interesting to me, and Mark touched on this, is which modality is going to make sense for this. Mm hmm. Right.
So the thing that's gotten fairly good reviews, all in all, is Meta's Ray-Ban glasses. Mm hmm. And, you know, it's not a big deal.
It's not Glasshole time. People seem to like it, though I've never seen one in the wild. It's true. People have been pretty darn positive about it. And they're doing some more with them through new designs. So that's the one that might tempt me. But I'm not sure that audio is the modality that I want.
So the Rabbit, having a screen and the ability to touch things and do things with my adult, phone-accustomed brain. Yeah. Makes sense to me. Right.
Yeah. Um, the laser in the hand: Marques, I thought that was cool, but Marques showed how that really doesn't work very well. And then you're trying to use your fingers at the same time to do things. No, I think that just as a design step, that doesn't work. Um, so is it going to be eye or ear that's going to matter more when you're going to try to interact with an agent doing things for you?
Is it more about agentic functions? I love saying that word. Sounds so fancy. It is a nice word. Or is it about information? Yeah. Is it about doing things or learning things? I just think that we're not sure yet why you'd get these things. And if so, what's the best way to do it?
And is it worth the money or not? And the only way we get there is through these fits and starts, the experimentation and the different form factors and stuff like that. You know, no one knows. That's why we're going to see some really wild fluctuations between things that work and things that don't.
No, just stay there for one more second, if you would, because you make me think that, you know, Apple can't do this. Apple can't put out a product like either of these, because it's just not Apple enough. It's not done enough.
They can't experiment. I think the Google of old could have put out something for the heck of it, and, oh, it's just Google, so it's okay. But they can't now either. They've got to release, you know, really polished phones. So the innovation is left to these startups, where they're going to try to get VC money; they're trying to get something out.
But it's the hardest for them to do this. So where's the innovation properly happening? I'm not sure. Sorry. Yeah, totally.
No, no, I think you're absolutely right. It is the hardest for them to be able to effectively pull it off. You know, they will almost need the bottomless purse strings of a company like Apple or Google or something to buy them up so that they can realize it, similar in some ways to what Oculus did with Meta and everything. Yeah, you had put into the rundown a couple of devices, which I hadn't really heard much about until you did. One is a piece of hardware called Limitless, which is, you know, a clip device. It's a wearable AI device, magnetic, lightweight.
It can record conversations, meetings, personal thoughts, all hands-free. So it's just a tiny little clip, or it seems tiny. I mean, I'm seeing the USB-C port on the bottom, which gives me a sense of the overall size. The thing is not that large, but it can do all the usual things: transcription, note-taking, summarization.
It has Zoom integration, an AI assistant for questions. It's very encrypted, or so the website says; you know, very protected there. And pretty inexpensive: ninety-nine dollars on pre-order right now. And you don't need a subscription with it, but you can get one. And what does that get you?
I couldn't really see what that got me. Well, so free gets you 10 hours of AI features for the month. So depending on how often you're using that, yeah, I don't know how long it takes you to bump up against that.
And what qualifies as 10 hours? Is it, I ask it a question, and that's three seconds? Or is it, I ask it a question, it takes 10 seconds to come up with an answer, and then it takes five seconds to read that to me; is that the whole amount of time? I don't know how that breaks down.
If you go to one conference and you want to record the conference and have it transcribed, then you're out. That's probably, you nailed it, that's probably exactly what it is. You go to classes and you want to do it; well, after three three-hour classes, you're just about out of luck for the month.
Well, those are really useful scenarios, right? I had a really great meeting with a friend of mine yesterday, and at the end of it I was kicking myself because I didn't have a transcription of it. Yeah, because my mind has a really hard time holding on to all the disparate details.
And even if I'm scratching it down, I'm going to miss something. And so more and more, I want that. And from my old days as a reporter, it's useful to me. I mean, I have terrible handwriting, so I called my version of a reporter's notebook "Jarvis hand." It goes cold after 20 minutes. What was that word?
I can't even tell. Imagine being a reporter now, being able to record. Yeah, we always had recorders; we always have to be able to record, and now you get a transcript and a summary. Geez, what luxury. And with hardly any time waiting, you know. Compared to the old paradigm, transcribing an hour-long interview, that was an endeavor in and of itself, potentially pretty expensive. Yes. So this is twenty-nine dollars per month billed monthly, but nineteen dollars a month billed yearly.
And you get audio transcription, notes, meeting summaries and more for that. And it uses your phone to do it? Unlimited? Yeah. Right. I guess maybe it's just always on.
You don't have to go starting things; I guess you can just say, record. Mm hmm. Yeah. And I do think that there's a lot that's very appealing about that for me, if I'm completely honest. Because so often, man, when I'm driving, I have some really great ideas. You know, there's something about the mind mode that we're in when we're doing something like that, or when you're in the shower and you come up with this really wonderful idea, because your mind's just in a different state.
I know there's a terminology for that, but I can't remember what it is. But I would love to wear something like this when I'm driving, so that when something pops in there, there's no temptation to pull the phone out. It's just safer to tap the thing. And, you know, that right there might actually be worth it for me to try this. You know, unless you're Robert Scoble, you don't wear it in the shower.
Yeah, no, I wouldn't wear this in the shower. What would I clip it to? Let's not go there. There's something I've been getting constant ads on TikTok for: the Plaud Note, at plaud.ai. And it's $159. The idea is you put it over the back of your phone and you can record stuff. And I don't think it physically interacts with the phone, but it interacts with the phone otherwise. So you get the same functionality. But once again, why do I?
Is it because it's just a one-button hit and it's easy? And a three-month free Plaud membership. Well, what is that? Otherwise, what are you buying into?
Right? I think people are going to have to buy into subscriptions with these devices, even though they already do with their phone. And, you know, we're just at this point subscribed to death. Yeah, exactly.
And so everything costs 20 to 30 bucks a month. And there are still problems being tested — you can't use Spotify yet with these things for music. But all of this looks much cheaper than the other device that I put in. So the other device that you put in, this is the IYO One. Is that right?
Yep. An audio computing device — audio-driven agents in the form of in-ear monitors that are the size of a half dollar. Oh, they are big. Didn't Microsoft have in-ear buds that kind of looked like this? They were like the large flat circles, not nearly this large, though.
These are gigantic. Yeah. So the price for this. Yeah, what is the price?
One thousand six hundred fifty dollars. Are you kidding me? Close your mouth. Wow. Wow. Yeah. Yeah. For that enormous thing. Okay.
Not for me. No, I don't know who it would be for. They call it the first audio computer.
Well, okay. You can reserve it for, I think, 69 bucks or something like that. So it has many microphones. It has lots of technology in that huge thing, as well it should.
But at the price, no. And then second, again, I don't know that the audio is the interface that I'm going to want or the only interface I'm going to want. I mean, it certainly has limitations. If the only way that you interface with something is to use your voice and speak into it.
I mean, we know that that really limits you. If it's a noisy environment — which, you know, maybe with technology they can figure that sort of thing out. If you want to be discreet about something, that becomes really difficult. It's not always the right solution. And so a device that only allows for that, I feel like, has some inherent issues. But maybe that's closed-minded thinking based on a current paradigm. You know, so they're saying it has a quad-core CPU with 32 gigs of storage and two gigs of LPDDR4 RAM. Which is interesting, because if you look at what's happening with phones, Tensor chips are going into the Pixel phones so that the AI can happen locally.
And with the Rabbit, and with this, it's got to happen in the cloud. That was one of Marques's complaints: the turnaround time to say something, get it computed, get it answered, and get it back to you was considerable. So I don't know that these small, cheap devices are going to beat the phone just in terms of compute power. And they're not going to beat it at $1,650, or whatever it was. No, no, that's insanity.
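For a rough sense of why the compute has to happen in the cloud, here's a back-of-envelope sketch. This is purely illustrative arithmetic, not a spec: it assumes 16-bit (2-byte) weights and ignores activations, OS overhead, and quantization, any of which changes the numbers considerably.

```python
# Back-of-envelope sketch: just holding a model's weights in memory
# can dwarf the 2 GB of RAM quoted for the device above.
# Assumes 16-bit (2-byte) weights; quantization shrinks this,
# while activations and the OS add to it.

def model_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate memory needed just to hold the weights, in GB."""
    return num_params * bytes_per_param / 1e9

device_ram_gb = 2  # the LPDDR4 figure quoted above

for name, params in [("1B-parameter model", 1e9), ("8B-parameter model", 8e9)]:
    needed = model_memory_gb(params)
    verdict = "fits" if needed < device_ram_gb else "does not fit"
    print(f"{name}: ~{needed:.0f} GB of weights ({verdict} in {device_ram_gb} GB RAM)")
```

Even the 1-billion-parameter case lands right at the device's total RAM before anything else runs, which is why phones are getting dedicated silicon for this while cheap companion devices round-trip to a server.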
I'm sorry. This little earbud thing — you're not going to sell a lot of these at that price. Oh, no, I was thinking more like the $600 range or something like that.
Well, that was the pro. I'm trying to find the price of the One. Yeah, my understanding was the One was going to be $599 Wi-Fi, $699 cellular. Plus the — yes, sorry. Yeah, there you are. And then the pro is — oh, I see, there's a separate tag for the pro. Well, still good. And I didn't realize that.
Holy moly. Okay. Yeah, no, thank you. I mean, at $600, $700 — no, I could get a phone for that. A very good phone for that. If they want to send me a review unit, I'll review it.
I'll give my absolute honest thoughts the way Marques does every time — which is not to say that I will skewer it; I would give it a fair review. But I guarantee you: I don't know what it would have to do to justify that cost, but I can't think of anything off the top of my head. That's crazy in my mind.
Anyways. So in all the device world, it'll be really interesting to see. But who knows? There is definitely more on the horizon.
No doubt about that. All right. We're going to take just a quick pause here. And then when we get back, we're going to talk about things that aren't hardware related, maybe a little bit, but we've got more coming up.
All right. Nick Bostrom's Future of Humanity Institute is now officially kaput, shut down after 19 years. Bostrom resigned from Oxford shortly after its closure, saying it was, quote, death by bureaucracy. How do you feel about this? Well, so you've heard me go on on the show before about TESCREAL and the eugenics-related philosophies that drive some of the AI boys. Bostrom is kind of the philosopher king of all that. And I've been shocked for some time that he's been associated with Oxford. And this Future of Humanity Institute is the one that came out with the first pause letter. It's all full of doomsterism. Bostrom is the one who put forward the doom scenario where, if you tell the AI to make paper clips, it will do nothing but make paper clips, and it will destroy us because its only mission in life is to make paper clips. So I have to say I was amused that he complained about being killed by bureaucracy. Too many paper clips.
Yes. Being killed by too many paper clips. At least the department was killed by too many paper clips. There is some serious overlap right there, Jeff. That is a really good point. So I was glad to see some reaction — that Oxford at some point said, we don't really want to play with you. And part of this was that Bostrom had an old email with racist comments that he tried to walk back. It was found some time ago, and it was old, but it was there. And then his apologies didn't quite go far enough, one would say.
Maybe they never do. But in terms of his views toward eugenics, he still doesn't completely reject that, which is the worst of this. The most frightening. Émile Torres has written about that. Torres and Timnit Gebru came out with a new paper in First Monday about all of this. So if you want to get background, you can go there and see that, and follow Émile on Twitter and company.
And I think you'll see that. Oxford was smart. Mm hmm. Yeah. Got it.
Got out at a pretty good time. You also put in here the National Institute of Standards and Technology appointing Paul Christiano as head of the US AI Safety Institute. Paul is a former OpenAI researcher, and his work in the past has focused on safety slash predictions around potential doomsday scenarios with AI — so, you know, connections to effective altruism, connections to longtermism. So the guy who's going to work on safety at NIST, which is an important agency for all of this, is a doomster. And so my fear is that it's going to distract from the immediate, present-tense safety concerns about AI in favor of these doomsday, ridiculous future-of-humanity concerns. And again, I think the problem is that media have not been covering TESCREAL well, so that reporters would have looked into this.
And maybe the bureaucrats at NIST, I understand the appeal of it. Oh, he's a safety guy. We want safety guys. Safety AI. Good. OK, let's get the safety guy. But it's the definition of safety that's being used.
And if that definition is far out there in crazy-land, then that's going to skew things. Some people said, no, the guy's really smart, and he has some expertise, and he understands this. And I said, fine, have him advise. But to be in charge of this at NIST?
Not a good move. Yeah. Yeah, it's interesting. I think my feeling on this — not knowing him, but kind of having read through this article and gotten a sense of him — is, you know, I don't want someone who is a full-court-press doomsday scenarist in a position like this. But I also don't want someone who's a total AI stan, you know. Exactly. It's got to be someone who can be reasonable and kind of direct this from the inside, from both directions.
And by the way, not for nothing: the people who I think fit that definition, in terms of being able to judge both sides, tend to be women. I'd like to see more women than white men. My goodness.
And they need more women in these roles, more people of color in these roles, because the sense of risk is different. Yeah. Yeah. Amen. I totally agree with that.
That's a really good point. Let's see here. What next? Microsoft showing off a new AI model called VASA-1.
I may have just skipped ahead, but sorry about that. And this is basically — my understanding is it's about turning a single photo and an audio snippet of a person into a fully animated, almost interactive video. Because in some of the videos I've seen, you just place a mouse in and basically turn their gaze anywhere. And it looks very, very natural.
I'll have to pull up one of the videos here so that we can watch it. So this is the input. OK. "Sometimes everything happens all at once and you just got to deal with it." Wait a minute.
And it also just takes examples of audio input one minute long. OK. And there's something about the eyes that's still off. It's very starey. And I mean, hey, the uncanny valley thing — it's gotten a lot better. They've certainly come a lot further, but there's something very starey about it. Now, for those of you listening, it's the same person.
Of course, he was shot by four cameras, looking in different directions. And it's the same. Yes, it's really interesting. And then, of course, you've got these out-of-distribution generalization examples.
It's the same kind of stuff — they can even turn the Mona Lisa into a TikToker. "I'm a paparazzi. I don't play no..." I'm sorry, audio listeners. It's really annoying. It was the Mona Lisa. She's so irritated.
Just shut up Mona Lisa. My gosh. Yeah.
Anyways, very interesting stuff here. You know what it reminded me of? It reminded me of what was the name of it? HeyGen, do you remember? HeyGen, right? Yeah.
Back last year. Yeah. And people were calling it — HeyGen was — was that — go ahead, you explain. Well, I was just going to clarify, for those who don't know, what HeyGen was. What we're talking about was a demonstration of an AI-generated video that basically was shot of a person speaking English, essentially, and reading this thing.
And HeyGen could dynamically — I'm not sure if it's in real time or post-processing — but basically jump between languages. So it's doing live translation into another language. You're hearing the audio in the other language, and you're seeing the lips and facial expressions match the audio of that other language, even though the source is the person speaking, let's say, English. And it was really convincing, and it just got my mind and imagination going on how useful that could be in places like kiosks and airports, to make people feel more included. It's kind of a tool of inclusiveness, potentially. Because instead of feeling like the only way I understand this is to read the subtitle down at the bottom —
You know, my language wasn't important enough to translate into. Instead of that barrier being there, it works for everyone. And I think that's amazing. I think that's a really positive thing. Yeah. What's going to scare people, on the other hand, is that you're never going to know, by watching a video, whether it's the real person or a made-up version of the person. Which just says: OK, we've got to accept that now, and we need other mechanisms of identity and verification.
And those are going to be social verifications. Yes, I know Jason. I know the human being Jason. He told me, he said that, I heard him say that — that's Jason. Or, I don't know if that's Jason. So don't take the video's word for it until you ask Jason. And then the question is, how do you ask Jason?
Email, if you really know, you have Jason's email address. Yes, texting on the phone. Yes. But what happens if Jason turns around and has agents answering his email and his texts? Mm hmm. So it's going to be interesting to see how we how we establish these norms of verification.
Yeah, that's really true. I just heard my name many times there. Jeff, thank you for it. It's like, well, yes, yes, yes. Um, let's see here.
What else do we have? We have Google doing a little bit of an org shift. I'm sure you're going to be talking about this, and probably some of this other stuff, today on This Week in Google. But this is a big restructuring announced at Google: the Android and hardware teams combining under a new Platforms and Devices team, with a real focus on unifying their teams around the AI that they're creating and integrating. So it's still Rick Osterloh at the head, who has been doing this for years with Google's hardware efforts. Hiroshi Lockheimer, who has really been kind of the face of Android for a number of years — I love Hiroshi; he's been on a few of my shows from time to time — is moving on to other areas of Google and Alphabet. Sameer Samat, who is also part of the Android team, is going to move more into Hiroshi's role.
Sameer is great. And this is — I think ultimately, what is this about? This is about, you know, Google's often-complained-about problem of the right hand not knowing what the left hand is doing.
Oh, it's a good point. Everybody is having this problem, and all the departments are solving it in their own ways, differently, without some sort of unified voice or strategy. But then when I say that out loud, it also kind of reminds me of Google Plus, and how Google was like, well, we've got to have Google Plus in everything.
And that didn't go so well for Google Plus. Maybe it'll be different here. What do you think? Well, that's half of it. The other half of the story is that DeepMind was already brought more into Google on one hand. Now they're combining all their model-building research and DeepMind together as well.
And so I think you raise a really good point, Jason — that right-hand, left-hand issue at Google. One hopes this is going to bring more continuity and strategy into what they're doing. Yeah, we shall see. Well, it remains to be seen.
This is Google we're talking about. I mean, this has been an issue time and time again for them. But yeah, we'll see. And I think the other thing — and we did talk about this a little bit on All About Android last night — is kind of the firewall between Google's hardware and Android. And that has a little less to do with AI than it does with other things, like with partners in the Android ecosystem. Or maybe it does have something to do with AI, because so much of what Google is going to be leaning into and differentiating its hardware with is its unique approach to AI, its unique hardware in the Tensor chip.
And it'll be interesting to see how they balance those relationships, kind of focusing more on these unique qualities of what they're developing in-house. Right. Yeah.
Interesting stuff. Well, speaking of Google, YouTube is integrating a feature called Ask. This is for YouTube premium subscribers.
And you can actually find this right now, if you are a YouTube Premium subscriber — in the US, mind you — until the end of this month. Anyways, they're testing it out. But it's an Ask button that will appear in the YouTube app on your Android device.
I don't even think it's on iOS right now. So this is purely a test. But essentially, what it is, is a way to assign Gemini to analyze the contents of a video and allow you to interact with it. So, for example, you're watching a video on YouTube, and instead of watching a 10-minute video on how to do this thing — oh, I hate that — could you give me the numbered steps?
What do I need to do? And, you know, of course, Google has access to the transcripts of these videos. It's probably a very easy task for it to do very quickly. So I do think it's interesting. I'll be curious to see how it impacts kind of viewership and, you know, ads that are served.
Well, it wouldn't impact ads, because it's premium subscribers. But yeah, there are some weird things about this. What do you think? It's interesting — the need to kind of disintermediate video as a modality, to use that word again. Video can be engaging, but it's also inefficient. Right.
And, you know, I get frustrated with that a lot. The New York Times has podcasts — audio, the same thing as podcasts — and they put up the transcript. I'm not going to read the damn transcript. Give me a summary. Mm hmm.
Yeah, I think there's something to that. I've been trying to rewatch Jensen Huang's keynote, because there's one thing he said in there about how, when machines get this big, you've got to fill them up. And I want to use that quote in a paper I'm writing about a totally different topic. I can't find it. I can't find the exact words, and I have a transcript, but even then I can't find the words.
Maybe I could query it. Right. Right. So I can see being able to get into video differently. My hope is that when I ask the question, it also allows me to watch the video, so we don't lose people who make videos — like us. We don't lose entirely the ability to interact that way as well.
Oh, yeah, I doubt that's going anywhere. I think it's just an extra kind of interactive option for people who are on the page. And I can fully admit, I've been doing a lot of reading and research about YouTube because of what I'm doing for my career right now.
I've got I've had a lot to learn about, you know, the ins and outs of so many things. Right. And sometimes the information that I feel like I'm looking for is contained in a video.
Yet I don't have unlimited amounts of time. So I will fully admit: I have pulled an MP3 from a video on YouTube, transcribed it with Revoldiv — which is one of the services whose founder, Surafel, I chatted with last week on the show when you were out — to get the transcript, and then ran that transcript through Perplexity. I said, all right, tell me what I need to know about this. Summarize the steps.
You know — if I had to put this in a book of best practices, tell me what I need in that book based on this transcript. And it did a fantastic job. It gave me exactly what I was looking for. And so when I saw this feature, I was like, OK, I know that's useful to me.
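That workflow — transcript in, numbered summary out — can be sketched in a few lines. This is a hypothetical illustration, not Revoldiv's or Perplexity's actual API: the chunk size and prompt wording are invented, and the actual speech-to-text and LLM calls are left out. The one real problem the sketch addresses is that a long transcript may not fit in a model's context window, so you split it first.

```python
# Sketch: split a transcript into chunks that fit a context budget,
# then wrap each chunk in a summarization prompt. Chunk size and
# prompt text are illustrative assumptions, not any service's API.

def chunk_transcript(text: str, max_chars: int = 4000) -> list[str]:
    """Split a transcript into chunks of at most max_chars,
    preferring sentence boundaries. A single sentence longer than
    max_chars becomes its own (oversized) chunk."""
    chunks, current = [], ""
    for sentence in text.replace("\n", " ").split(". "):
        candidate = f"{current}. {sentence}" if current else sentence
        if len(candidate) <= max_chars:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = sentence
    if current:
        chunks.append(current)
    return chunks

def build_summary_prompt(chunk: str, question: str) -> str:
    """Wrap one transcript chunk in a question-answering prompt."""
    return (
        "Here is part of a video transcript:\n\n"
        f"{chunk}\n\n"
        f"Based only on this transcript: {question}"
    )

transcript = "First, open the settings. Then enable the feature. Finally, restart."
for chunk in chunk_transcript(transcript):
    prompt = build_summary_prompt(chunk, "summarize the steps as a numbered list")
    # each prompt would be sent to whatever LLM you use for summarization
```

Per-chunk answers would then be stitched together (or summarized once more), which is roughly what any "ask the video" feature has to do under the hood.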
Yeah, I'm curious to know how people creating the content are going to end up feeling about that. Does it mean they take a hit — nobody's watching their video anymore? It's like how newspapers felt. Right.
When the sports scores get quoted in Google. Yeah. Yeah. Yeah. Entirely.
So interesting. And again, this is not like a this is not like a full feature right now. It is being tested. I think at the end of this month, it goes away. But I bet you we see it again. I'm super curious to see the life of that feature in the near future.
Sure, we'll see it. And then finally, we saved Meta for the end. Meta's been busy this week. They had a bunch of announcements related to AI this past week. One of them we kind of mentioned, although we didn't really talk about the news — we talked a little bit earlier with Mark about the Meta Ray-Ban glasses being very interesting hardware for AI.
But Meta did have an announcement. They're rolling out multimodal AI support. There's been a very long early-access program for this, so now they're rolling it out to folks who have the Ray-Ban glasses. You say, "Hey Meta, look and..." — followed by what you want it to do.
Look and tell me what this plant is, or look and read this sign in English, whatever it happens to be. So, yeah, I think that's going to be a real interesting use case. I almost feel like — how much do they cost?
I need to pick up one of these. Yeah, that's a good question, actually. My problem is I have to get prescription lenses. Otherwise, I'll, you know, walk into walls. Oh, right. 299.
Two hundred ninety nine dollars for the smart glasses. And, you know, they've got the cameras on them. Of course, that's how they're doing the multimodal stuff.
But — and, you know what? These are pretty stylish glasses. They're OK. Yeah. Yes. I mean, based on where we were 10 years ago with Google Glass, which Google really leaned into: this looks like science fiction.
This looks like the future. And that bought them some time. But at a certain point, people were like, all right, but come on. I want my glasses to look like normal glasses. And it's taken a long time to get to the point where they kind of do.
You know, I might try this. Yeah, I think it's farther along than the Rabbit. The Rabbit is a fun device that I can see playing with for a week, and then it stays on the shelf. Yes. All right. But the glasses?
Maybe I can see using them. But then I've also got PTSD when it comes to having bought Google Glass. You were in. And what exactly gave you the PTSD?
Was it the public reaction? The money — because I also got prescription lenses in this, so it cost me something like two thousand bucks. Yeah, to be in the future. Yeah. And now — I've got mine. They sit on the shelf right behind me, back there, just out of the cameras.
Can I rest distance? Hey, no, we were cutting edge. We were indeed. We were cutting edge at the time. That was not the only news, though. Meta also had big integration announcements for its AI assistant.
So Instagram, WhatsApp, Facebook Messenger, also a standalone site at Meta.ai, where you can basically go on there and ask Meta AI anything — like play '90s music trivia, or make my email sound more professional. All the standard stuff, you know: paint New York City in watercolors. So it does imagery, it does text, it does all the things that we're getting really used to seeing from all these systems. Yeah.
So if you type in — for example, I just did this a minute ago — draw a watercolor of San Francisco in spring. OK, continue without logging in. So I do not have to log in, though I do have to tell it — I do have to lie to it and tell it — how old I am. Sure, 1960, whatever.
Finish. OK, that is not how old I am, by the way. OK, so it's thinking. It's drawing a watercolor. Oh, so I do have to log in if I want to generate images. Yes — and it'll be worth it, because when you do, what we're going to see is that it'll do it live as you type. OK, all right, so I've logged in and now it's generating. For the audio listeners: it says "draw a watercolor of San Francisco in springtime."
We've got this now. Now type in "draw a watercolor of New York in the springtime." Draw a watercolor of New York — see how it's changing? Oh, that's neat. So while I'm typing, it's kind of iterating on this as I'm typing. Right now, if you could not see that —
You can see the leaves change in the image. Or you pick a different city. Dave Winer did this with various cities.
A watercolor of Albany. In the summer time, that's a hard one. Dusk. Yeah, I don't know.
I don't know. We're going to come up with Albany. It's like Albany.
Why'd you choose Albany? I don't know. I wanted to get creative.
But it's a parlor trick — it just shows how fast this thing can go. So as Jason was typing and used the word "water," it had an image of a pool. Yeah.
And then the next word — that's really neat — changed it. So it's just a demonstration of how fast this thing is. Wow. Yeah.
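The generate-as-you-type behavior can be sketched as a toy loop — purely illustrative, not Meta's implementation. The `LiveGenerator` class and its method names are invented for the example; a real system would call an image model (and likely debounce on a timer as well) where this stub just records prompts.

```python
# Toy sketch of "generate as you type": fire a new generation request
# only when the prompt text actually changes, so each keystroke that
# alters the meaning (e.g. "water" -> "watercolor") triggers a redraw.

class LiveGenerator:
    def __init__(self):
        self.last_prompt = None
        self.calls = []  # prompts we "sent to the model" (stand-in for real requests)

    def on_keystroke(self, prompt: str) -> None:
        prompt = prompt.strip()
        if prompt and prompt != self.last_prompt:
            self.last_prompt = prompt
            self.calls.append(prompt)  # a real system would request an image here

gen = LiveGenerator()
for typed in [
    "draw a water",
    "draw a watercolor",
    "draw a watercolor",           # unchanged: no new request
    "draw a watercolor of Albany",
]:
    gen.on_keystroke(typed)
```

The interesting engineering is all in how fast each request round-trips; the client-side logic of "only regenerate on change" is this simple.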
I mean, you've got all the resources of the behemoth that is Meta behind this — doing what they did for VR and Oculus. They're just putting it all in there.
And that wasn't it. They also had news of Llama 3, saying it outperforms other models on benchmarks. So they launched Llama 3 — two smaller versions, open sourced immediately. And this is a big part of what Meta is doing: they're going the open route, kind of going the Android route with AI, essentially. Those two smaller versions are eight billion and 70 billion parameters; the larger version, at more than 400 billion parameters, isn't out yet.
That's coming soon. But they say the data set is seven times larger than Llama 2's. There's also much more code data in the system, which means that, according to Meta, Meta AI is particularly good at coding.
Coding tasks. Yeah. But it still — I hate this word — hallucinates.
It still makes mistakes. I asked who I am. It's the usual, you know, hello world: who am I? It said that I do a podcast called the Jeff Jarvis Show.
You know what? Maybe it's not wrong, Jeff. Maybe it actually knows the future.
It's looking at the drop. I can hear Jason. Oh, no, I think you're building up your podcast empire, Jeff. It's time for you to launch the Jeff Jarvis Network.
Damn it. So it's still got all the weaknesses of the LLMs, but it's open source. It's fast.
It's interesting to play with, and it's open for everybody. Yeah. Some people complained — though I haven't really seen this happen — if I go to Facebook and I ask a question.
Hold on here. So if you go into Facebook — the app, not Meta AI. Yeah. And I ask a question there, it does not. I thought that the search there was handed over to Meta AI, but it's not.
At least — it would be a big step, I think it would be a huge step. That's what's happening on the web app. Maybe it's different on the phone app.
Yeah. But if you just go to meta.ai, it's open. It's free. You can play with it.
You can draw images. It's pretty cool. Yeah.
And again, it keeps your history tied to your account. Yes. That's nice. Man, these things, soon enough — I mean, these things are already everywhere. But with that, we're so used to paying the monthly fee.
And then you've got a company like Meta basically saying: no, here it is, one of the best out there, and it's completely free. It's open source, whatever — go for it. What does that do for the competition, who are all fighting to get people to pay twenty, thirty dollars a month for this?
It undercuts that pressure early on. I mean, it's the Craigslist issue, right? Yeah, free is pretty hard to compete with.
And yeah, I think Meta is saying that. Now, Apple still succeeded when there was Android out there. If it's that good and that special and that amazing, fine. If you have your B2B strategy and you're selling it to companies, OK. But from a consumer end, I think AI is going to be free. I think it's going to be the gateway to other things. Oh, yeah, it certainly seems that way.
I mean, after working with this specifically, that's what it really — but it's expensive for the company, because of the electricity. You know, again, if I go back to Jensen Huang's keynote at Nvidia, the power consumption of these machines is still huge. Yeah, and that's environmental impact. So it really is. But it's fun for us, too — we get to sit on the side and watch all this and learn, and get to generate images of New York City in watercolor that we didn't paint.
That's what we get to do. Here's the question: can it do a picture of Petaluma? Oh — if it couldn't do Albany, it probably can't do Petaluma. It's got the majors, doesn't it? It has the major leagues, not the minor leagues. Yes — or the T-ball leagues.
It doesn't have the T-ball leagues either. Oh, well — it did it for me. I think it's Petaluma. It's OK. A lot of leaves. Yeah.
OK, yeah, sure. Maybe Petaluma just looks like a lot of other places. I think that's probably it. Living here,
I don't know that I totally agree with that. But anyway, Jeff, I'm happy to have you back. I'm happy to be back here today. Yeah, this is a lot of fun.
Tell people where you want them to go to check out your work. Just gutenbergparenthesis.com. You can get discount codes for my book Magazine, which I'm proud to say is now in the magazine store magCulture in London. I just emailed them out of nowhere and said, would you carry my magazine?
They're very nice. They did. So I tweeted a picture today of my magazine on the shelf — my book Magazine in a magazine store, looking like a magazine. And I hope it fools people. And then also The Gutenberg Parenthesis. So thank you, folks. And then also benefit Jason and go to the Patreon for Jason and all of his shows, because he's started all kinds of new things.
And we want to make Jason into a podcast empire here. Someone pointed this out to me, based on the news that I put out there last week — they were like, you're a network now, man. And I was like, well, I didn't really think about it, but I guess I am. I mean, I guess that was part of my idea all along. So first of all, before we get there: patreon.com/aiinsideshow. There you can find the Patreon for this show, which I highly encourage you to check out, because I actually went in there over the weekend and made some tweaks to everything — added more tiers, added more benefits. There is now merchandise. So if you get in at the five-dollar level or above, you will get an AI Inside sticker.
I wish it showed you when you clicked on it, but it doesn't, for the video viewers. And then there is a higher tier at twenty dollars a month — which is, you know, a good chunk of change, which we would really appreciate, of course — but you also get an AI Inside t-shirt. And I think the cool thing about that is that when you're wearing an AI Inside t-shirt — oh, I'm not even showing it.
Sorry about that. When you're wearing an AI inside t-shirt, people will wonder, like, are you AI? Is there AI inside of you? See, it really works. It works. Yeah.
So anyways, patreon.com/aiinsideshow. It'll be like having an English accent — people will think you're smarter than you are. Right. Don't ask me a question and expect me to know the answer.
I might hallucinate all over you. But anyways, patreon.com/aiinsideshow to support this show directly.
That support comes to both Jeff and me and enables us to do the show on a weekly basis. You can also, of course, go to aiinside.show. That is our website for the podcast. You can subscribe to the podcast and find everything you need there.
We keep it all nice and tidy and neat. Now, what Jeff was talking about: last weekend, I launched my new project called Techsploder. So if you go to youtube.com/Techsploder, that is the new YouTube page that this show actually streams live to.
It's streaming there live right now. And so it's, you know, the AI Inside podcast. It's also a new podcast that I'm launching next week, called the Techsploder podcast, which is going to be all about technology — the human element, the human aspect of technology. It's really conversations. It's not news so much as it is conversations with people I adore, like Jeff Jarvis — so many people in the technology industry that I've worked with over the past few decades — talking about our shared love of technology, where that comes from, where we're the same, how we're different, how technology disappoints us and doesn't deliver.
What do we wish it would do better? I am so excited for that podcast, because I get to talk to so many amazing people. And then on the YouTube channel at Techsploder: all of my reviews of different technology and hardware. I'm just thinking of all sorts of ways that I'm going to really lean into this human side of technology.
And I'm just really fired up about it. So search for @Techsploder. You can also go to Techsploder.com for the podcast specifically.
But the YouTube channel will give you basically everything. So thank you for mentioning that, Jeff — gave me an opportunity to talk about it. And I should also say, for this show, because I mentioned that we have new perks for the patrons: one of the new perks is, if you're at the twenty-dollar level, you are officially an executive producer. And I will call you out every episode during the month in which you are an executive producer — like I will right now for Dr. Dew and Jeffrey Marriccini. Both of you are executive producers of this episode of AI Inside, and we deeply appreciate your support. All right, I think I'm done plugging at this point.
That was a lot. Thank you for your patience. Thank you for watching and listening to this episode. And Jeff and I will see you next time on AI Inside. Bye, everybody.