Jason Howell and Jeff Jarvis discuss Apple's M4 chip, OpenAI's deepfake detector, Microsoft's MAI-1 model, and the impact of AI on stock photography and audiobook professions.
Support AI Inside on Patreon: http://www.patreon.com/aiinsideshow
NEWS
- Apple’s new M4 chip is focused on AI
- Apple Is Developing AI Chips for Data Centers, Seeking Edge in Arms Race
- OpenAI Is Readying a Search Product to Rival Google, Perplexity
- OpenAI Releases ‘Deepfake’ Detector to Disinformation Researchers
- Fights over disclosing AI content
- Those A.I. posters in True Detective: Night Country were supposed to be a knock on A.I., apparently
- Meet Microsoft MAI-1: Microsoft Readies New AI Model to Compete With Google, OpenAI
- Gary Marcus: Microsoft and OpenAI’s increasingly complicated relationship
- The teens making friends with AI chatbots
- The Last Stock Photographers Await Their Fate Under Generative AI
- Invite-Only KDP Beta for Audiobooks
- Amazon has 40,000+ "virtual voice" audiobooks since it opened the doors last November
- HarperCollins Publishers and ElevenLabs to Bring More Stories to Life Through Audio
- AI Dungeon
Hosted on Acast. See acast.com/privacy for more information.
This is AI Inside episode 16, recorded Wednesday, May 8th, 2024.
Apple ups the AI ante with the M4 chip. This episode of AI Inside could not happen without the support of our amazing patrons at Patreon.com/aiinsideshow. If you like what you hear, head on over and support us directly, and thank you for making independent podcasting possible. Hello everybody and welcome to AI Inside. I am one of the hosts, Jason Howell, here to talk about the AI that's apparently inside a whole lot of things.
We've got many news stories that tell us that today. Joining me as always, Jeff Jarvis. How you doing, Jeff? Hey there, boss, how are you? Great, I'm super excited. We've got some news. We've got Apple in the rundown, which is, you know, sometimes I feel like I have blinders on to Apple because I'm so used to not talking about Apple in my technology career. As I say in Manhattan, the Upper East Side is a different country. Apple is a different country.
Yeah. I mean, for anyone who doesn't know, I've been on Android and podcasting about Android since almost the beginning of Android. So I've never really owned an Apple device. Well, an Apple smartphone, that is. I'm running all of this show and all of my computing on Apple devices; it's just the smartphone aspect of things that is so lost on me. I just haven't spent much time with it. Have you ever actually owned an iPhone, or have you always been Android? No, I did back in the day, but I was Android pretty early.
Once I got on This Week in Google. Yeah, I know. I saw a story today. It's not an AI story, but an argument that Steve Jobs thought the iPad was going to win over the iPhone. Well, he still won both ways.
And there was a moment on TWiG in the early days when I re-boxed my iPad. So I feel vindicated. There you go.
Good. Well, we are actually going to talk about the iPad, kind of, sort of, today. That's going to be coming up here in a moment. But real quick, just a reminder: if you are subscribed to this podcast, thank you. If wherever you are subscribed allows you to leave a review.
Please leave us a review. We want to try and continue to raise awareness about the podcast, spread the word. If you know someone who loves AI, loves tech podcasts, that sort of stuff.
Tell them about AI Inside and get them to subscribe. That would be a huge favor to us. And then, yes, we do have a way for you to support us directly.
Patreon.com. Oops, actually, that was the wrong lower third there. So, Patreon.com/aiinsideshow. Make sure you go to that one. And you can support us directly like Jeff Brady, one of our amazing patrons who has supported us from practically the very beginning of the launch of this podcast. And seriously, when we say we could not do this without you: without your support, we could not. This is how we run the show. So we really appreciate you all being on board.
Appreciate you reaching out and telling us how much you support what Jeff and I are doing. And on that end, why don't we start delivering on some promises here? We got a lot of news to talk about. Let's start off with Apple, who held an iPad event yesterday and showed off some AI announcements along with that, specifically the new next-gen M4 chip coming to the iPad Pro.
With a focus on AI-related tasks. Tim Millet, Apple's vice president of platform architecture, called the M4 "an outrageously powerful chip for AI." And Apple's making more mention of AI because, I think, the running thing around the AI race in the last couple of years has been, whether accurate or not, that Apple has seemed, of all the big tech companies, to be the laggard when it comes to really embracing and going fully vocal about the presence of AI in its devices and its strategy. Yeah, I responded to Benedict Evans on one of the socials a few minutes ago. He was talking about how Apple's now going to say AI in every presentation. And I said that AI is the new 5G. Remember, we used to make fun of Apple, for whom every presentation was 5G, 5G, 5G, 5G. Now we're saying, oh yeah, big time. And it's going to be the same next week with I/O, though they've been AI-obsessed for a long time at Google. Yeah, that's interesting in all kinds of ways.
You know, the reason that I did re-box my iPad was because I said it wasn't a very good creation machine at the time. And this was when? When was this?
Early, early iPad days? Oh, first one. Yeah. It was the very first one.
Okay. It's changed a lot. It has.
It has. And I ended up with Google tablets. I miss my Nexus 7. I loved my Nexus 7. It was pocket-sized. Yeah.
And then phones became that size. So the tablet piece hasn't interested me. But this creation piece with AI is interesting. So I wonder whether, you know, the problem with the tablet is you didn't have as many tools as you could muster on a desktop to do all this creation stuff. Now the tools have gotten way different, way better. Plus, the help from AI. I wonder whether it can make the tablet into a powerful creation machine now.
Oh, no question it will. I mean, we've seen on our smartphones kind of what is, to a certain degree, table stakes as far as AI functionality in the last year. All the smartphone manufacturers are really doing a lot of similar things, right? They're doing, yes, your generative AI around text and composing messages, helping you change your tone. They're also leaning into the generative AI around image manipulation, and Google's doing really cool things with the perfect shot or the best shot.
I can't remember what it's called off the top of my head. But anyways, we're seeing those kinds of creative elements, and that is part of the realm in which Apple hasn't been playing in the sandbox so much. And I think part of what this announcement really seems to set the pace for is: all right, the M4 can do a lot of this stuff on device now. We know that Apple is, at least from a publicity and marketing standpoint, really sensitive to the privacy of its users and to keeping things out of the cloud as much as possible and on device. And this processor is debuting on the iPad with WWDC coming up next month, so we're probably going to hear a whole lot more as far as how all these things come together.
And we're just getting a more clear picture of Apple's AI intentions, their strategy there. Yeah, I'd just be interested to see what the differences are with your phone. You take the picture with the phone, then you want to manipulate it on the phone. You want to go somewhere, you want to do something, you want to translate something, you want to transcribe something. All of those functions can be aided by local AI on the phone.
Plus, if Google finally gets its way with the sandbox, ad targeting as well. On the iPad, again, it's a different modality. And I think Adobe-like things, creation in that sense, I can see working really well on Apple. And then, you know, it's kind of like an iPad. On a laptop, you're doing writing; you're not really doing that on the iPad.
So maybe it's more about music and art and design and things like that, where it becomes a helper. I mean, the iPad Pro, with the Apple Pencil and everything for the last handful of years, has been such a standard-bearer when it comes to art and creativity in the tablet form factor. Samsung also plays pretty strongly in that space, but I think Apple probably gets the most recognition as far as that's concerned, at least here in the US. And Adobe has played pretty closely to Apple with its mobile apps on those devices as well. So, yeah, and on top of that, we've also seen companies like Adobe really leaning heavily into integrating generative AI into the tool set.
So I think that's exactly what we're gonna see in the next year or two, especially coming to iPads. To transition into the next story on the rundown: I think the other reason that Apple has been less vocal about AI (I won't say they're behind, to be nice to them; let's say they've been less visible and vocal on AI) is because AI has been concentrated in servers. And Apple is not a server company, really, not the same as Google. But now that this compute power is coming down into the device, the irony is that Apple has to catch up on the server. Yeah, totally, it's true.
So it's like, we don't want to have to, but I guess we have to. And this is called Project ACDC; it's dynamite. So Apple is developing AI chips for data centers. And I think everybody's in a stranglehold with Nvidia, where there's a supply problem. Totally.
And I imagine there's things they wanna do. You know, to think about it, Jeff, it's so amazing to me. I'm old enough to remember when Intel made the vast majority of chips, and it was a really complicated task that took such high specialization and such time. And now look how far this whole industry has come. It's something we used to have Stacey on for; she actually knew about chip making, and I still don't know about it. But it's fascinating how this has spread around, where on the one hand you have extremely high-end efforts like Nvidia's chips, but you also have Apple and Google making their own as well. And I think the whole separated fab world makes a difference. I imagine compute helps exponentially on the design process.
So it enables a company like Apple to say, okay, we're gonna have our own damn data centers. Take that. Yeah. Well, and in this case, the ACDC (I didn't just throw that out there to get a rock music reference in) stands for Apple Chips in Data Center. And they're actually working with TSMC on design and production of these chips. And the focus is more on running AI models rather than training them, which essentially leaves that to players like Nvidia, primarily.
So kind of picking their lane and sticking to it, it seems like. Right, and Apple's been talking to, I think, both Google and OpenAI, we said last week, right? Microsoft. So I don't think that Apple's gonna necessarily create their own models so much as, as you say, use them, yeah.
Yeah, indeed. So maybe this is the turning point for Apple. Maybe they can pull themselves out of being a bit of an AI punching bag for a small group of people who, you know, love to know that they've got something on Apple and can jab at them and say, oh, you were behind here. But, you know, it's Apple.
Like, I don't think it takes them very long to catch up if that's needed. You know, as you talk about this, it's interesting: because I use Chromebooks and because I use Android, I don't have much relationship at all with Apple. Or Microsoft, by the way.
And only recently, I finally started using Apple TV Plus. That's good. It's really good.
They have all kinds of good shows on it and I'm enjoying it. And it just made it clear to me how little of a relationship with Apple I have because I'm not in their hardware ecosystem. And I think in the long, long run, that's a disadvantage for them. And in an AI world, they've gotta figure out how to serve people no matter what platform you're on.
And I don't know how much that conflicts with their, we own you in our fence strategy. Yeah, right. Yeah. Yeah, that's a really good point.
I mean, it does, it is a total conflict. I mean, by and large, the last decade and a half of Apple's mobile strategy has been about creating an ecosystem that is self-contained, walled off some might say, but incredibly powerful when you're within it. It's kind of like, if you're willing to hand over the control to Apple as far as making certain decisions for you, then you stand to benefit, because all these things are interconnected and operate so well together. And I would be surprised to see Apple choose a different strategy when it comes to AI, outside of: no, we've got our users, we've got the people who will follow us and will religiously love and use the things that we create for them.
So let's just make this the cushiest, nicest, fluffiest cloud for them that we possibly can. And certainly, as with search, Google is happy to sell them services, OpenAI and them. Yeah, happy to sell them services. Yeah, that's true. Yes. Yep, having the right partners is important. Search and AI: sounds like OpenAI is getting into this game.
Could possibly happen next week. And there's a few different bits and bobs on this story, potentially competing directly with, like you said, Google Gemini, Microsoft Copilot, Perplexity, which is the one that I tend to use the most often. A user could, according to sources, ask ChatGPT a question and get answers with details pulled from the web, citations included, also possibly some images when appropriate, along with the written responses. The Information actually first reported on this back in February. And in the last week, folks have noticed that search.chatgpt.com had shown a "not found" message before this weekend.
But now it points to chatgpt.com. And that lines up with the fact that OpenAI had scheduled an event for tomorrow, Thursday, I believe, to share some product updates. They've now reportedly moved that to Monday, which happens to be right before Google I/O. So we're at that time of year where everybody's playing strategic chess with all of their major moves, and it all seems to be AI-related. So search possibly coming to ChatGPT: search capabilities and integration. What do you think about that? You've mentioned before that you remain unconvinced about the integration of search and AI.
Yeah, I really don't see a solution on the horizon. I can be wrong and God knows what they're working on, laboratories of the future. But generative AI has no sense of meaning or fact or truth and let alone being sentient and all that. And so I just don't trust it when it comes to search.
I don't trust it with facts at all. It's the egotistical thing: when the search engines came out, we used to search for ourselves. Now we ask. At least I do, I'll admit it. I ask the chatbots, who am I? And they always get something wrong. Always.
Oh yeah, for sure, for sure. And it's not hard, because there's bios, there's Wikipedia, and there's stuff out there. And there's errors that you can see. Like, one of them just said I'm an associate professor; I'm a professor now. And I looked at the source, and it didn't have a citation. It wasn't in there. It's just so used to saying "professor" and "associate" that it just did that. Oh, interesting. So it cited the location that it pulled that information from, but that location actually didn't have that information there. No, it's my university. It has my rank as professor. Interesting.
And not associate, it hasn't been for quite a number of years. So to associate generative AI with search, I just think is risky and dangerous. And those who are saying this is gonna kill Google, I don't know, because I think enough people might see that it just doesn't work very well, unless they solve problems in ways that there's a lot of discussion about RAG. I always forget what RAG stands for. We were talking about that last week, right?
Yeah, limiting it to a set corpus. Retrieval-augmented generation, thank you. But still, I was just thinking about this, Jeff. In a sense, the primary skill that generative AI lacks is the ability to say, I don't know. When it doesn't know something, it doesn't know what it doesn't know. Right, so it can't say, oh, I better go fill that in.
I better go find that. And of course, I'm speaking anthropomorphically in all of this, it doesn't think it just works. But it has no sense, because it has no sense of meaning, and it is designed to please us, to give us what we want, no matter what.
I will give you the next word, no matter what. And unless a specific guardrail has been built to say, well, I can't tell you about events now because my data only goes up to blank; that's a built-in feature, in essence. So it's hard for me to see how search is gonna work. Well, clearly, Microsoft is working on this for Bing, and OpenAI is trying to. Clearly, they wanna attack Google, and clearly Google's nervous, so I could be wrong about all of this, but color me dubious. Yeah, I mean, I think my interaction with Google's kind of integrated AI search experience, what was it, the SGE, or I can't remember what they called it way back when.
My experience with that is, I've continued to keep that activated since I activated it however many months ago, and I don't rely upon it. It's just another kind of a nugget of information that I can use and integrate along with everything else that I get inside of a search query. I rarely, if ever, look at that and I'm done. You know what I mean? And like, okay, I trust what I read right here, but that's kind of my own personal approach to using these AI systems in general around research and understanding different topics is, I rarely, if ever, get an answer and go, okay, I got what I'm looking for, I'm done.
I will almost always click on the citation and do exactly what you did, which, I think, is there for a reason, to be sure. And I would say my experience with, like, Perplexity, and then, like I said, Google Gemini's kind of integrated search AI capabilities, is that they've gotten a lot better. Are they perfect? No, and I don't know that there is such a thing as perfection when we're talking about AI and information, but I have noticed that they've gotten better, that I find them less wrong the more that I use them.
Yeah, and I can see how it can be fine tuned to that effect. And you know, as you're talking, it occurs to me that Google didn't have to be perfect. All Google had to do is to give you results.
And those were elsewhere. Then of course, it tried to give you answers, but it tried to do that in ways that were fairly reliable: Wikipedia. But we're old enough, each of us, to remember that when Wikipedia first arrived, you were told never to stop there, always go to the next level, only use it as a beginning. There's times when it'll come up for me on the side and I wanna remember something or I wanna explain something, and I'll say, oh yeah, that's right, that's what I was looking for, that's fine. But a fact? Yeah. I just don't think it's gonna ever be reliable in its present form, which again goes to the issue that the next generation of AI is gonna be agentic.
And if it doesn't get things right for you, if it doesn't get you to the right city when you ask it for a plane reservation, you're not gonna use it. Yeah. So I think there's a- Absolutely. My guess is that the generative AI versions we see right now, the large language models we see right now are an evolutionary step, and it's gonna take some next big step to a new kind of model of how these systems work before we're gonna see them working in areas like agents and search and writing.
A new- Yeah, yeah, working to the point to where people, by and large, can just outright trust that when you want that ticket to that place, you're gonna get to the place you wanna go, you're gonna actually save some money in the process, it's going to be the shortest distance, shortest route, and all these variables that right now, man, I wouldn't trust an AI to do that for me, no way in heck. No, absolutely not. But I do, but I can look forward and see that happening at some point. I do kind of believe that at some point, we will get to a point to where these things are more trustworthy than they are right now, and I think it will always be a matter of individual kind of perception or kind of gut feeling or experience that tells each one of us what our specific level of trust is as far as integrating or interacting with these services.
We'll all have a different sense of that in other words. That's true, and I think the other interesting thing, which leads right into, what is the genius of Jeff when he does the rundown for both of us, it flows from one idea to the next. I think this flows into the next idea, which is the web being filled with not only disinformation, but AI generated crap. And so if we expect, even if open AI got really good at search, more and more it like Google is searching through crap. And so one of the challenges here is to determine what is wrong, not only wrong, but artificially generated and thus less reliable. And that's a challenge on both ends. In other words, the LLM itself needs to have better functionality to understand whether there's truth to something. And then the sources that it goes to are gonna get worse and worse and worse, the web is gonna degenerate, and that's gonna only amplify this challenge, which leads to the next story.
Yeah. Well, OpenAI is working on a tool to detect deepfake imagery. They're sharing this tool with a small group of disinformation researchers for testing out the results. And this specific tool only works on OpenAI's image generation service, DALL-E 3. So this isn't a tool that you could apply to anything and everything that's generated online. It is specific to their service. And that is one question that I have: is it truly possible to really create something that can, by and large, make the determination on anything and everything, practically?
But I mean, even in this case, they're working on their own service. They say that it correctly spots 98.8% of DALL-E 3 images and 99.5% of real images. So we're talking about like a half percent to a little more than 1% fluctuation on either of these stats, which sounds small, but it's all about scale, right?
And 1% of a ginormous number is a ginormous number. That's billions of images that make it through, even at that rate, and are misidentified or misdetected. And, you know, that question always comes back: is that reliable enough?
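(A quick aside for readers: the scale argument here can be made concrete. The 98.8% and 99.5% figures are the ones reported for the detector; the billion-image corpus below is purely hypothetical, chosen only to illustrate the arithmetic.)

```python
# Back-of-the-envelope: what the reported detection rates mean at scale.
# The rates come from OpenAI's announcement; the billion-image corpus is
# an illustrative assumption, not a real figure.

detect_rate_fake = 0.988   # DALL-E 3 images correctly flagged as AI-made
detect_rate_real = 0.995   # real images correctly recognized as real

corpus = 1_000_000_000     # hypothetical: one billion images screened

missed_fakes = corpus * (1 - detect_rate_fake)   # AI images that slip through
false_alarms = corpus * (1 - detect_rate_real)   # real images wrongly flagged

print(f"missed fakes:  {missed_fakes:,.0f}")   # 12,000,000
print(f"false alarms:  {false_alarms:,.0f}")   # 5,000,000
```

So even a detector that is right almost 99% of the time lets twelve million fakes per billion images through, which is the "reliable enough?" question in numbers.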
And I mean, 1% sounds great, but billions of images, like, I don't know. I don't think it is. Well, what struck me about this is that it's OpenAI trying to detect fakes made by OpenAI.
And by fakes, I just mean images made by it, let's call it that way. That would strike me as something that should be easy. They should be able to give themselves a signal that they can detect in their own images. They are working on that: OpenAI is working on a watermarking technology that they say will be difficult to remove. But again, you know, difficult slash impossible.
And that kind of ties in with my question right at the top of this story, which was, well, maybe I phrased it the wrong way: are these companies going to have an easier time identifying only the misinformation or the deepfakes or whatever that are created through their own systems, versus wider than that, you know? And I mean, I don't know if they can answer that question. I'm sure there are patterns and things that probably pop up regardless of the system you're using. But at the end of the day, there is no certainty on this, you know?
It's not a 100% thing. Right. Even if OpenAI can guarantee that OpenAI can recognize the images that OpenAI makes, then when you get past that, well, I wish that the researchers who had access to this could put it up against others' images. Because I imagine it just falls completely apart; there's something OpenAI put in there. Watermarking is generally rejected as a solution in the long run because, even though they always say you can't get around it, you can get around it. If it's really well hidden, somebody will figure it out and figure out how to get rid of it.
Once you know where it is, you know how to get rid of it. There's a certain pattern, yeah, that eventually is perceptible by people who look into this long enough. Yeah, these things never last forever, that's for sure. And in a way, the funny thing is, these companies are creating... I mean, I don't wanna say they're creating a problem, because most everything everybody's making is interesting, for fun, for good use, not for the purposes of deepfake disinformation and foolery in the world. So I don't wanna say that everything made by these image generators is bad; it's not. In fact, the image generator itself is neither good nor bad.
It's just making an image, doing what it's told. How someone asks for it and how they use it is the issue. It's the user's behavior that matters, not the content itself.
And whenever you try to concentrate on the content itself, you're gonna get fooled. Yeah, and this is why, to go to the next story, there's an interesting thing happening here where people are falling over themselves to try to get people to label their own AI. So this is from Axios, "Hollywood's AI disclosure dilemma": they're being told, or expected, to disclose when they use AI. Well, part of me says, they've had CGI for ages, and they've used it on the same basis. Yeah, they've had Photoshop for ages.
They're on the same page. They've had, what do you call it, Autotune for ages. Yeah.
Right? There's tons and tons of tools, but AI has become this scare word: oh, if you use AI, you have to disclose it. But if you use CGI, you don't. It's kind of ridiculous. Yeah, and I think also complicating the situation is all of the writers' strikes and everything that happened in Hollywood last year, which really got everybody up in arms against this AI threat that's closing in fast on Hollywood jobs. And so this kind of plays into that argument. It's like, see, Hollywood's already integrating.
But I mean, I'm telling you, Megan Morrone (hello, Megan) wrote this article for Axios, and she links to a couple of examples, one of them being True Detective. Back in January, people who were watching True Detective on HBO, Night Country, I guess, is the subtitle, noticed that in the background are these posters. And when you look closely at the posters... oh, let me pull this up.
I realize I'm not showing it for video viewers. These posters in the background, when you start to really look at them a little closer, you start to see tell-tale signs of AI generation: a metal US tour poster where some of the faces, I suppose, look a little weird. It's often in the details, but anyways, some of this has a generative AI look to it, sure. But when I see that, I'm kind of like, well, if they didn't make it with generative AI, they would have made it with Photoshop, or they would have made it with After Effects. Like, I really don't care how they made that fake poster that's hanging up on a wall. I don't care.
Make it with whatever you want. It's like, it's a prop. And are we really trying to fool people by doing that?
Hollywood is all made up. It's the whole point of it. Totally.
It absolutely is. But then the next story Megan has is that, recognizing the heat that's going to come upon them, big tech companies are requiring users to label AI. So: well, we can't really detect it, so instead we're going to require that it be labeled by you if you put it up. And if you don't, then we're going to blame you. So there's going to be this tail-chasing going on. Is it real, or is it Memorex, or is it OpenAI?
And I think we're going to have to admit very soon that the generalized answer is always going to have to be, I don't know. And it's like a deep fake of a Rembrandt done with brushes. If you're going to counterfeit something, then provenance is the only answer.
And you go back to a human being you trust. Right. Say that this, yes, this was made in this way. It was acquired in this way. I verify that it is. And I think that's going to require a lot of expense and effort, but it's also going to enable, I think, new business models, authentication.
And trustworthy sources of things and trustworthy creators of things. And it happened in print. Sorry, go ahead, I'll do my Gutenberg moment.
Those of you who know me, drink: The Gutenberg Parenthesis, on sale now at GutenbergParenthesis.com. The same thing happened with print, where print was unreliable at first because you didn't know where it was made. Anybody could make it. Then the institutions of editing and publishing came along to create trusted sources. Oh yes, this came from Cambridge University Press.
Then I know it's for real. That mattered more than the medium or the form or any clue within it. And I think we're going to find ourselves in the same place across the entire web and information ecosystem, which is going to be an expensive and weird transition. Because now there's just tons of stuff out there.
And we can get there. I put another story up on the rundown one minute before we got on, so you didn't have a chance to read it. And Jeff actually likes to read these things that nobody's talking about beforehand.
So I'm going to spring this on him. Futurism has a really good story, which I didn't finish reading either: "Meet AdVon, the AI-powered content monster infecting the media industry." So this is the company that did the Sports Illustrated fake writers. And the lead is somebody who used to be hired to write this crap to put on the web, the content factories. Then the next job the person had was to edit the crap that was made by AI, to train the AI.
And then he lost his job. So now the web is going to be filled and filled and overfilled with this crap made by AI. Going to make the web less useful, going to make search less useful.
It's going to make search more difficult. And so the answer, I think, is going to have to be going back to creating institutions of authenticity and authority. And we don't have them now. We don't have anybody that can deal with the scale of this challenge. And it's not going to be one-size-fits-all.
No, it's definitely not a one-size challenge. Yeah, it's fascinating. Not one-size-fits-all, because the question that comes up for me, with the conversations we've had about noting provenance when something is generated, is the importance of that when it's related to something like journalism, where facts do matter, compared to a fake poster hanging on the wall in the back of an episode of an HBO show. Those are two very different things in my mind.
Like, one, yes, I want disclosure because it's critical to so many things. And the other, it's like, I really don't care how that was created. Yeah, I like how that actor performed in that scene.
That was pretty captivating. It's just a different thing altogether in my mind. When it comes to just this idea of content, which I argue in The Gutenberg Parenthesis, on sale now, is commodified by AI, right? And we thought we were filling the world with something, and now that's over. So I think for AI to just make content, oh, it made me a song and I like the song, I think we're going to say okay to that. But anything requiring decisions, right?
Medicine, education, business decisions, even personal commerce decisions, financial decisions: that's the case where you're going to have to know, where did this come from? How did they do it? Do I trust them? What's their track record?
And we're very early days and we don't have the structures. Is that regulation? I think there'll be a reflex to try to do that, but it's going to go too fast and be too big for regulators. And so it's going to be about, I guess, branding and reputation. Yeah. Well, speaking of reputation, after a quick pause here, we're going to talk about Microsoft's reputation in AI because they've got some new projects that they're working on.
Microsoft is prepping a new LLM named MAI-1. "My one." Sure. Why not?
Let's just call it that. Led by Mustafa Suleyman, who you may remember: DeepMind co-founder, also Inflection co-founder. Big news, I think it was last month or the month before, that he and much of the Inflection staff were hired off by Microsoft.
This model in particular is not carried over from the Inflection acquisition, but could possibly use some of the training data from that company's work. And we've already talked about Google I/O, we've talked about Apple's WWDC. Yes, now we've got Microsoft Build also happening next month. So you can guess that some of this is going to be shown in some capacity next month at the event. And this kind of ties into a story that I think we talked about either last week or the week before: Microsoft had a smaller model, the Phi-3 Mini model, that was unveiled last month at some point. This one in particular is much larger. So Phi-3 Mini is 3.8 billion parameters; this one is 500 billion parameters. Not quite the trillion-something parameters that OpenAI touts for the latest ChatGPT, but there you go. They're filling in the spaces with their larger and smaller models. And this is definitely the larger one.
Yeah. And it's interesting that Microsoft is hedging its bets. I think the truth of it is, Microsoft was seen as jumping ahead in AI because of the relationship with OpenAI. The truth is Google was way ahead in AI. Google was doing it for years, implementing it in all of its products, using it heavily. And then Microsoft managed to do a leapfrog thanks to OpenAI. But that made them dependent upon OpenAI.
And as we learned with the whole Sam Altman mishegoss, that's a vulnerable position to be in. So it makes sense that they got Suleyman, it makes sense that they're trying to build their own models and find their own way here. Yeah.
Yeah. As a lot of the companies are trying to kind of lay their stake in the ground and have that for themselves too. And you put in an article that ties in so perfectly with this, from Gary Marcus, who we hope to bring onto the show sometime this summer; we're kind of in that conversation right now.
So I'm really hoping that happens. It's all about the complicated relationship between Microsoft and OpenAI. It does a really great job of spelling out how at one point it seemed like they needed each other. And that may still be the case, but also there are so many competing efforts happening within both of these companies that it really creates a very bumpy road for the long-term partnership. Or at least for understanding what the real intentions are here.
Who is your competition and who isn't based on these relationships? What did you think about this? You were the one to put this in here.
Yeah. Gary always has a good skeptical view of AI and the players. He asks the questions. He's also a smartass, which makes it fun to read.
And he pushes them, all the companies, I think fairly equally. He's an AI person himself. He knows AI, but he's tough. One thing that struck me here too, Jeff, was not just who has the technology and who's building the technology and who we depend upon, but also the chart that he has up on the screen here for the corporate structure of OpenAI. And for those of you listening, it's hard to describe: it starts at the top, where the OpenAI board of directors controls OpenAI Inc., the nonprofit, which fully owns OpenAI GP LLC, which fully controls OpenAI Holdings LLC, which is partially owned by investors and employees and is itself the majority owner of OpenAI Global LLC, the capped-profit company, which is the one we hear about, in which Microsoft is a minority owner. Right.
And if you try to figure out, well, who owns a percentage of that? Impossible. Absolutely impossible. Who controls it? Impossible, especially when you look at the weird governance of OpenAI Inc., the nonprofit. So that's so complicated too. I've got to believe the Microsoft board of directors is saying, put aside the Sam Altman drama and all of that stuff, put aside even my beloved topic of TESCREAL and the insanity that brings to the discussion.
Just look at this corporate structure, and they're going to say: you're putting our AI strategy on the basis of this outside entity? No. No. So I've got to believe that in the long run, OpenAI just becomes a supplier, but it in turn can't depend upon just one customer, Microsoft. So it's got to decide. It's interesting, Jeff, I saw this in media over the years: the inevitable conflict of whether you are B2B or B2C. If you are B2B, then you serve your customers, you do not get in the way of them, you let them succeed, you do not compete with them. You do not have a B2C brand.
Well, OpenAI and all of these companies are B2B, because they want to sell to each other; that's where the real money is. But they're also trying to build B2C brands. Yes, indeed. And it's going to be, I think, really interesting and really difficult to see where all this lands. Not that OpenAI is making a fortune from 20 bucks a month from people; it's not. It's making it elsewhere. Nonetheless, it's basically competing with Microsoft in that sense. And if it goes into search, then it loses the potential of Google as a customer on the B2B level.
Yeah, it's very fragile and delicate, how they're developing out these things and the relationships that already exist, coupled with the products that they're also building, because they all want to be there, and it gets messy really fast. It really does, which brings me back to what I think is the dark horse in all of this: Meta. Because they're open source, or open source-ish; they're not open source in everything, but open source-ish, doing small models and doing interesting things, and they put their best model out for free at Meta.ai. So they're undercutting everybody who's trying to charge 20 bucks a month. They're putting their models out so people can use them as open source.
That undercuts everybody. And so I think you could find yourself in a weird position of commodification, or it's just another plain old bubble like we had in 2000, where huge amounts of money went into it. The revenue and the use case are simply not there yet.
And it will be, but not in time for that payoff to happen. So this is why it's really fascinating to look at this one relationship as one little piece in a very complicated family. Yeah, indeed. Indeed.
Really great article, and a really interesting way to tie all the pieces together by Gary Marcus. So I'm really happy you put that in there; I'd missed that one. I had not missed the next story though, because I had read this and I was like, oh, we've got to talk about this on AI Inside. The Verge has an article that focuses on kids who seek companionship with AI chatbots. It checks in on how teenagers are using AI chatbots in their lives for things like friendship, emotional support, using them as a safe place to vent their frustrations, explore fantasies, practice social skills. And the teens who are using these also say there's no judgment tied to these conversations.
These are conversations that I can have and not be judged along the way, the way I might be if I were to bring any of this to the real world. They're using platforms like Character.AI; one bot that comes up continuously through the article is the Psychologist bot on Character.AI. But Character.AI is a whole platform where people can create bots of all different types. And of course, something like this, having to do with kids and with an emerging technology like artificial intelligence, gets experts worried about what's happening to our children: addiction, how you transition from the cues and the social clues that you learn in conversing with a bot into real-world, real-life interactions. And it's just an interesting article that looks at something which I imagine, the interaction and the comfort in communicating with chatbots, doesn't lessen over time. I think that only increases over time, because these bots are going to get better at serving the people who use them like this.
And so I just, what I found really interesting about it, and I'm curious to hear your take, is that, you know, always the younger generations are open to more things than the older generations that are current to now. And this is one of those examples, right? This could be, you know, the rock and roll or whatever is like, oh, well, AI is just, AI is just a thing. And I'm growing up and I don't have a lifetime of, you know, of experience and ethical, whatever, to look at this thing and be wary or concerned or anything. I just see it as this thing that is now that is cool and I'm going to use it, dang it.
And how does that serve them as they grow older, with this technology becoming more and more pervasive over time? What are your thoughts? That was a lot. First off, the company that Mustafa Suleyman left was Inflection. And Inflection's model, backed by Reid Hoffman, was to create Pi, which was supposed to be a personal AI, an AI that got to know you, an AI that would be used for all of these kinds of functions. I don't know what happens to Pi in the long run.
It's not gone. How much development will go into it? I don't know. And there's other competitors like this. That's one thought, is I think there is a business here.
The other thought is, and I put this in parentheses as my own smartass remark, I'm counting down to the moral panic over this. Oh my God, teens are talking to a machine. Oh my God, they can be manipulated because they're all stupid. We were smart, but they're stupid. Exactly. We were okay with our horrible decisions, with our dumb decisions, but they will not be.
Oh, no, we know better. And oh, they're not talking to real people. You know, I bet Jonathan Haidt, the moral panicker of moral panickers, is already writing about the perils of AI, having moved on, like Tristan Harris, from the perils of social media. It's the next peril. Now, I said this on social media, and a professor I respect greatly asked, well, where's the moral panic here? And I said, no, I'm not saying it's there yet. I'm predicting that it's going to be. It's the next video games. It's the next comic books. It's the next Nickelodeon.
It's rock and roll. Because I think as you say, Jeff, it's really important, is that young people will experiment with things. That's what youth is about.
Yeah. And they experiment with the world that is before them. And this wasn't before us in our youth, but it is before them now. And so, of course, they're going to experiment with it. They're going to try these things.
And it's cool. And, you know, yes, if the Chinese government ends up doing this and they can brainwash young people, are there perils potentially? Yes. What it will mean is that if this is known as a behavior, I think there could be two reactions in a regulatory view. One, as is happening right now in the UK with Ofcom: they're trying to say to social media, around algorithms, its own form of AI, that they have to do age verification and that kind of stuff. So I think there'll probably be a reflexive effort to say you can't use AI under a certain age, which of course won't work. The other thing that people are going to try to do is create guardrails.
And certainly, if people are going to look at self-harm of any definition and get answers for that through AI, that's a case where it's a behavior you know is going to happen. And so, you can build a guardrail against it. That guardrail will not be foolproof because people will figure out how to jailbreak it.
But even though I think guardrails are generally futile, I would agree the companies should morally, ethically, and legally be required to try their best to not have it feed self-harm of any form here. But once young people come in, it's going to raise all kinds of complications, all kinds of media panic. And so I think this is one we'll come back to. I'll bet we'll come back in about three months and we'll say, remember three months ago when we said people were going to panic? They're already panicking.
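To make the guardrail idea concrete, here is a minimal sketch of the kind of input screen being described: before a message ever reaches the model, it's checked against patterns for self-harm phrasing, and a match gets a canned support response instead. Everything here, the pattern list, the `guarded_reply` helper, the stub model function, is hypothetical, and as noted above, a simple filter like this is easy to jailbreak with rephrasing, which is why real systems layer trained classifiers on top.

```python
import re

# Toy guardrail: screen a user's message for self-harm phrasing before the
# model ever sees it, and answer flagged messages with a canned support
# reply. The patterns and helper names here are illustrative only.
BLOCK_PATTERNS = [
    re.compile(r"\bhurt (myself|themselves)\b", re.IGNORECASE),
    re.compile(r"\bself[- ]harm\b", re.IGNORECASE),
]

SUPPORT_REPLY = ("I can't help with that, but please talk to someone "
                 "you trust or contact a crisis line.")

def guarded_reply(user_text, model_fn):
    """Return a canned support reply for flagged input; else call the model."""
    if any(p.search(user_text) for p in BLOCK_PATTERNS):
        return SUPPORT_REPLY
    return model_fn(user_text)
```

The obvious weakness, as discussed, is that any phrasing not in the list sails through, so this is a floor, not a fix.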
Yeah, almost certainly. And when I was reading through this too, like something that struck me is it never sounded to me like the teens that were referenced in the article ever felt like they were actually talking to someone real. Like they knew. They understood the realm in which they are playing here.
Now, there was one teen who said, you know, I will plainly admit it. Like I'm addicted to these things. I'm using them all the time.
They reference someone using it for like 10 hours or something like that. But my point being, they aren't stupid. They know the dimension in which these things exist.
And they found a use for them that is helpful in some way, shape, or form to their teenage years. In a certain sense, it really reminded me of ELIZA, right? Way back when, playing ELIZA. And granted, it was not nearly as capable as these chatbots are now, but it's really a role-playing game at the end of the day. That's what we're talking about here. But you're right, it does have the potential to serve up very damaging information. And I think these companies do need to do what they can to address that and lessen it.
Like I said, not eliminate it, because they won't; not because they don't want to, but because it's next to impossible to do. But they do have a responsibility there, I think. So let me ask you a question, because I'm not a gamer at all. Yeah. Just don't like games. Because I don't like losing. I guess that's what it is.
Have you seen any good games built on top of generative AI yet? Or is the fact that it's not, because it would seem to be, I'm going off what you just said. It's like the old, you're in the room with the wizards. Yeah.
What do you do? Yeah, Zork. And yeah, the word games. Yeah, absolutely.
No, that's another correlation that I heard. The problem, as we talked about last week, is that it doesn't remember you from one session to the next. It doesn't hold state. But if it did hold state, and ChatGPT did as well, it's really fascinating to imagine how a kid could invent their own game. Oh, 100%.
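As a rough illustration of what "holding state" would mean for a kid's invented game, here's a toy sketch: a chat session that writes its transcript to disk so a later session can pick up where the player left off. The `StatefulChat` class and its stubbed `respond` method are made-up names for illustration; a real implementation would pass the saved history back to a model as context on every call.

```python
import json
from pathlib import Path

class StatefulChat:
    """Toy chat session that remembers prior turns across runs
    by persisting its transcript to a JSON file."""

    def __init__(self, save_path="session.json"):
        self.save_path = Path(save_path)
        # Reload earlier turns if this player has been here before.
        if self.save_path.exists():
            self.history = json.loads(self.save_path.read_text())
        else:
            self.history = []

    def respond(self, user_text):
        # Stand-in for a real model call; a real game would send
        # self.history along as context so the bot "remembers" the player.
        reply = f"(turn {len(self.history) // 2 + 1}) You said: {user_text}"
        self.history.append({"role": "user", "content": user_text})
        self.history.append({"role": "assistant", "content": reply})
        self.save_path.write_text(json.dumps(self.history))
        return reply
```

Start a session, quit, start again with the same file, and the second session already knows every earlier turn; that persistence is the missing piece the conversation is pointing at.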
Yeah. I was a gamer way back when, I mean at least a decade and a half to two decades ago now; I would not consider myself a gamer outside of just kind of playing here and there, random games that come along. I'm sure there are games that exist around generative AI. I'm not aware of what they are, but I would love to know. And if anyone watching or listening does know, contact@aiinside.show, send us an email and let us know. And if there is something, I'd love to check it out and check back in on the show to see what that looks like. And if that doesn't exist, right?
I'd be really surprised if it doesn't at this point; it's going to. And I'm curious to know, what does that look like? Yeah. Yeah.
This is why the show is so fun. You know, just trying to think of what's possible, what's coming out there. Yes, totally, totally. Two more things here, two more stories, both of them very interesting, kind of along the "AI taking our jobs" beat, I suppose. The Wall Street Journal, in the first story, has an article that focuses on the changing landscape for stock photographers in light of generative AI. And something that I hadn't really connected the dots on before: they mention that when digital cameras came along, the stock photography industry was in shambles, because now suddenly that bar had been lowered and everybody had access to high-quality imagery, in a digital sense, so they could just shoot their own content instead of hiring somebody else.
But the stock photographers that survived that and are around today really had to figure their way through that challenge and change what they did to form around this new reality. And now they're faced with a new challenge, which is that we can very easily just go onto a site and create the image ourselves through the use of words, as opposed to licensing images that are already shot. And I think also, on top of that, stock library companies, as we've talked about in previous episodes, are taking their vast proprietary licensed imagery in the database and training their own generative AI models so they can offer this as a service too. And so it's really putting a lot of pressure on stock imagery photographers, which is unsurprising, to be honest, but it is nonetheless interesting to read about. The interesting thing is the timing here. What happened as well was that the web came along. I say this whenever I'm at a journalism conference, I did it a week ago: if you go back 40 years in newspapers, maybe 10% of articles were illustrated. Now absolutely everything you put online has to have an illustration with it. So on the one hand, it created a boom market for stock photography that compensated, in part, for the loss of business from going digital and new competition. New competition, but a new market.
Now we come to AI coming into this. Now I think it is going to be disastrous for stock photographers, though it's hard to imagine AI, as it stands today, getting the nuance of everybody's favorite meme: the girlfriend is appalled when the guy is looking enviously at the other woman, right? For those of you on audio, Jeff just played the guy. Sorry, I did my best impression, but for video viewers, I could just show the picture. There's the imitation of the pissed-off girlfriend. Anyway, there's nuance to facial expression there. There's an understanding of emotion there.
There's humor there. There's all kinds of things built into that photo, and it's hard for me to imagine the best prompt being able to create that photo. However, if all you need is a new way to say "person working at desk," no problem, you're going to be able to do it. This reminds me of one of my favorite stories, and I tell it in, sorry for the third plug already, I tell it in The Gutenberg Parenthesis, page 124 for those of you following along.
Wow, so specific. Well, thank goodness for search. I looked at the index and they didn't index this, so I had to search for it. In 1901, the New York Times fretted about the fate of illustrators with improvements in camera work.
"The average illustrator is looking on improvements in camera work with consternation," said the Times. They compared two news events. When Lincoln was shot in 1865, Harper's Weekly sent all of its artists to Washington by train to draw the locations and people. And by the time they arrived, he was dead. They rushed back on the train to New York, drawing on the way, and then redrawing everything, cutting the engravings onto the blocks so it could get out with illustrations. A full page would take more than a day of work just to engrave. They put out an extra edition of Harper's four days after the event, which was phenomenal.
Think about that in our terms today. So years later, the Times said, a magazine editor on deadline would send artists to the scene of news and have them sketch their illustrations. Then, get this: they would telegraph back a description of their illustration to an illustrator in New York, who would redraw the illustration.
That's prompt engineering. Bingo. Bingo. And then what happened was, when President McKinley was shot in 1901, Collier's telegraphed orders to photographers on the scene, who managed to physically rush their images back to New York within 14 hours. Two hours later, the pictures were ready to print, because you didn't have to engrave the old way.
And the Times estimated Collier's extra edition cost a tenth of Harper's and was so far ahead of the old-time extra that there was simply no comparison. What happened to all the unemployed artists, asked the Times? Most of them had "gone higher up," as in to heaven. Others took to illustrating advertisements or fiction. So we've been here plenty before. And even if you go back, I'm sorry, to Gutenberg: it was not just a revolution of print, of text. It was a revolution of images. Now you could replicate images. You could show how something looked. You could have something that was accurate. Scribes didn't redraw everything.
They just rewrote. Now with printing and engraving, images became important. So that alone is a fascinating history to me. And so now you come in with AI able to make images on its own.
Of course, it's going to cause another revolution. There are tons of uses, the simple stock photo, but also I can't draw, but now I can illustrate what I want to say. And I think that puts power in people's hands. So yes, I feel bad for photographers. I do.
We still teach photography at journalism school and we still should, but a lot of this is going to go away. Yeah. Yeah. Yeah.
It's going to be interesting. Well, they point out in the article also, and I think this is true with generative AI music and other creative output, generated images and stuff, that yes, these systems can do so much. And it's interesting through the perspective of: two years ago, this seemed like science fiction, to the quality it's producing now.
That's really fascinating. But there is also on the other side of that still the desire for the human touch, still the desire for the nuance that a human can bring to this thing. And whether we can look at an image and immediately tell that was created by, you know, a computer versus a human or not, once we know, I think there's something to the fact that when we know that that was created by a human that connects us to our own humanity. And I think there's a certain layer of respect or will continue to be a certain layer of respect that we pay to something along those lines that maybe we don't pay to something that's created with a generative AI.
And that just kind of tells me that there will always be the need, the desire, the want for the human-created artistic thing, even in a world where we can just write some words on a screen and come up with something to compete with it. Yep. Yep. Fascinating times. Yeah. And then finally, I put this in here even though it's not current news; I just hadn't realized this news story. And you have read audiobooks in your time, because you have written some books and then been the narrator for the audiobooks.
So this is a little bit of old news, but it ties into this. Back in November 2023, Amazon announced the Kindle Direct Publishing for Audio Books. And it was essentially an opportunity or an option for authors to sample an AI voice and then have that voice read their book into audio book form if it didn't exist already and then put that for sale on Amazon.
It was invite-only at the time. Turns out now there are more than 40,000 "virtual voice" audiobooks on Audible right now. So I imagine many of those, if not all of those, well, it says "narrated by virtual voice," so I think these are just all the books that say that. Some of them are narrated by Adam Loughbaum, not virtual voice, so this isn't a perfect search, but anyway, there's a heck of a lot of audiobooks that are narrated by the virtual voice. And then last month, HarperCollins Publishers struck a deal with ElevenLabs to go through its back catalog and make audiobooks using ElevenLabs' AI voice models, some of the things that hadn't been put into audiobook format so far. So we now have audiobook narration threatened by AI voices. And I'm just kind of curious, from your perspective: you've gone through the process of doing this narration. I can't imagine it's easy, but I can imagine that for a lot of people, this is their livelihood. And this is a seemingly capable replacement for that.
What's your perspective? You may be amazed, considering what a mush-mouthed fast talker I am, that I can even do it. And it's the poor producer who has to say, Jeff, could you take that again, a little slower? You mushed that word. Do that again. And it's hard, hard work. This took three full days.
And you're exhausted. The poor producer can't just look around the room. The producer has to check: oh, no, sorry, you said "the" when you should have said "a," and you go back.
And so it is expensive to produce. I didn't get paid extra for it, because it's my book and I wanted to do it. And I had to prove to them that I could do it, having done, I think, four books by now. So I understand on the cost side why the publishers will be happy with this. And I also understand, on the reader side, why on the one hand I think this could be awful, because I want to hear authors in their own voice. I prefer an author-read book, generally.
Some authors read. Yeah, I do too. I do.
Generally, I like that. Yeah, sometimes you get that narration and it's like, okay, you probably should have had someone else read it for you. And then there's some that are just phenomenal performances. I just listened to James by Percival Everett, and the performance of it... Because everything about that is the accents and the, what am I trying to say, the dialogue, the emotion, but also the dialect.
That's what I want. Because that's the whole point of the book: that the dialect of an enslaved person was put on by Jim in Huckleberry Finn to make the white slave owners comfortable. And that's the conceit of the book. It's a brilliant conceit, brilliantly done. No AI could ever read that book. Ever, ever, ever.
However, in a way, I think this is, as my West Virginia father would say, bass-ackwards. Because what I would want in this case as a reader: there are so many books that I would... my rule is that if I need to read it for research and underline it, I read it in print. But if it's for pleasure or just interest, then I'd like to listen to it, because that's how I fill time. Alongside podcasts, of course. But I'm frustrated that there are all these books that aren't in audio. Yes. Now, of course, on Kindle, you could get it to read to you.
So this has been around for quite a while. But you wouldn't buy that separately; it was just the Kindle. So to me, the question becomes, Jeff, whether this is a supply or demand technology. Right. So before, in a sense, it was demand-side, in the sense that your Kindle could read to you. It wasn't very good.
Now they're turning it into supply-side: oh, we could make money, because we can make it sound like a real audiobook rather than that Kindle crap. But what if the book that I want, an academic book, isn't in audio?
Could I get this quality of audio on it? Or, as an author, would that author say, well, no, I did a really crappy job of reading that book, and I don't want that out there with my book? So there's all kinds of interesting plays in this and how this works. Yeah, it's really interesting to me too. Because I have my preferences with books. I just, you know, I'm not a huge fan of sitting down with a book and staring at it.
I get tired really easily when I start reading a book. And I don't know what that means. There's probably a name for it who knows, but I would much prefer to listen to it.
I stay far more engaged and everything. But there are just certain books that just don't exist in the audio book form. And so I'll pick them up and I never finish them. So the user in me would love this because something's better than nothing, right? Exactly.
Existing in audio form is like this, as imperfect as it may be, would be better than it not existing in audio form at all. And yeah, another thing that kind of ties into this that I just had this morning, I was like, I had to do a lot of driving around with my x and everything. And I was like, well, you know, why don't I put this time to use like, let me just go on YouTube and find like an AI news channel. Like tell me some AI news because maybe I missed some stories for today's show, you know, that I can when I get home, I can slot them in.
And it turned out the one that I found, I wish I could remember the name of the channel. I started playing it and it was about two minutes in where I was like, this is not a real voice. This is an AI generated. Like it was a faceless YouTube channel. It was talking, you know, about an AI story.
I think it was the Microsoft story actually. And it took about two minutes, two full minutes, for me to go, oh, wait a minute, this isn't actually someone narrating this. I mean, it's really well done, because it took me two minutes and I feel like my radar is pretty spot-on. But what it made me realize is, there are a lot of people that would come across that content and be immediately revolted, or reject it outright, because, well, that's not a real person reading the news. Like, why would I choose to do that? But to a certain degree, I was kind of like, well, yeah, but would I prefer there to be a human reading this news and sharing commentary?
Yes, I would. But this, I mean, this was convincing enough and it was interesting enough. And to a certain degree, it was filling a need that I had in the time, which was, I want to know a little bit about AI news right now and I can't read it.
I can only listen to it. Yes. Yes. And so it was fine. You know, I was okay with it in that moment.
So I think this kind of plays into that. Watch out. One of the stories I don't think I put in our rundown is that Elon Musk is talking about how he's going to create an AI-based news service out of what's on X and what's in his fevered brain. Yeah. And I just dread that. I think that's going to be... I had something else in front of the comments, so I didn't see them. So my apologies.
So first, I want to thank Tay for two things. One, thank you very much for your tip. And second, another kind of tip here, where he tells us that an AI game does exist: AI Dungeon. aidungeon.com. I'm going there now, where you can play for free and enter your character's name, Jeff. I don't have your screen up.
So just for anyone watching, if you go to aidungeon.com. So I just asked for a fantasy game and put in the name of the character. You are Jeff, a fairy in the kingdom of Laryon. You live in a film hidden under a grassy hill under the castle. Your skin is a light tinge of blue. That only happens to me when my camera goes off.
Your wings sparkle in the sunlight, and you are very small and good at hiding, and on and on and on. So you can well imagine. Thanks, Tay, this is great. Yes, it's great. You can see exactly how I could remake this in all kinds of ways, and it's infinite games. Indeed. Yeah, kind of create.
Wow. In the future, we'll have game studios creating games, and we'll have more of this, I imagine. My mind is kind of blown right now, because I remember when I was younger, being so into video games and having ideas for games, but having no idea whatsoever how to create them. And, you know, this is very similar to the paradigm that we're talking about so often right now, which is: I'm not an artist, but now I can create images. I'm not a musician, but now I can create music.
I'm not a game designer, but I can have an idea for a game, plug it in there, and come up with something, and maybe over time that something becomes something really cool and I can cultivate it. And that's a really eye-opening moment. And it makes no matter, because it's fantasy. Yeah, it's fantasy.
It's game. Yeah. Exactly.
Yeah. So thank you, Tay. And thank you, everybody, for the comments. Really glad for your input on the show.
Indeed. Thank you so much for being here, everybody. Thank you, Jeff, for a fantastic show. This is a lot of fun.
I knew we were going to have fun with the stories this week. Yeah. So much fun.
It went long, but there was a lot to it. That's okay. GutenbergParenthesis.com for folks who want to kind of check in on your books and that magazine, with discount codes. Thank you very much.
There you go. Thank you, Jeff. This show, AI Inside, records live every Wednesday at 11 a.m. Pacific, 2 p.m. Eastern. We do this on the Techsploder YouTube channel, which, you know, I renamed a couple of weeks ago.
So go to youtube.com/@Techsploder. When we've got a live show coming up and we're doing a live recording, you'll see a little kind of placeholder for that live event. And you can actually have it remind you if you want to be here live while we do the show. But if you don't want to watch live, or even use YouTube, that doesn't matter.
It's totally fine. We publish the show to the podcast feed later in the day, and you can find links to all of the various audio podcast subscription options at aiinside.show. Please be sure to like, rate, review, and subscribe wherever you happen to listen or watch the podcast. And of course, support us directly on Patreon at patreon.com/aiinsideshow. We offer ad-free shows, early access to videos, a Discord community, regular hangouts with me and Jeff and the rest of the community, and a whole lot more. And speaking of the community, we also give you the opportunity to give at a certain level that makes you an executive producer of this show.
So you can have your name read out at the end with a whole lot of fireworks and flair, like DrDew and Jeffrey Mariccini. Thank you so much for your extended support of AI Inside. We literally could not do the show without you all. Thank you again for watching and listening each and every week. We'll see you next time on another episode of AI Inside. Bye, everybody.