Jason Howell and Jeff Jarvis discuss OpenAI's record-breaking $40 billion funding round, ChatGPT's controversy-sparking image generation update, Amazon's first agentic AI model, and Elon Musk's xAI taking a bold step toward the 'Everything App'.
Support the show on Patreon! http://patreon.com/aiinsideshow
Subscribe to the YouTube channel! http://www.youtube.com/@aiinsideshow
Note: Time codes subject to change depending on dynamic ad insertion by the distributor.
NEWS
0:02:07 - BREAKING: Google is shaking up Gemini's AI leadership ranks
0:06:21 - OpenAI closes $40 billion funding round, largest private tech deal on record
0:13:51 - ChatGPT’s new image generator is really good at faking receipts
0:23:03 - OpenAI plans to release a new ‘open’ AI language model in the coming months
0:28:37 - Amazon unveils Nova Act, an AI agent that can control a web browser
0:33:18 - How Meta’s Upcoming $1,000+ Smart Glasses With a Screen Will Work
0:39:41 - Musk's social media firm X bought by his AI company, valued at $33 billion
0:42:20 - MCP: The new “USB-C for AI” that’s bringing fierce rivals together
0:45:36 - Sinofsky: MCP - It's Hot, But Will It Win?
0:48:28 - AI-made works can’t be copyrightable, says US court… for now, at least.
0:56:24 - The Bluetooth Lady Speaks! ‘Voice-Over Actors Will Be Artisans in the AI Age’
Learn more about your ad choices. Visit megaphone.fm/adchoices
[00:00:01] This is AI Inside, episode 62, recorded Wednesday, April 2nd, 2025. Nothing is Forever. This episode of AI Inside is made possible by our wonderful patrons at patreon.com slash AI Inside Show. If you like what you hear, head on over and support us directly, and thank you for making independent podcasting possible.
[00:00:28] What's going on, everybody? Welcome to another episode of AI Inside, the show where we take a look at the AI that is layered throughout the world of technology. Like a delicious lasagna. I actually happen to have a non-AI infused lasagna in the refrigerator for today's lunch, so I'm excited about that. Maybe it'll be a little better with little AI layered in there, I don't know. I'm Jason Howell, joined by Jeff Jarvis. Good to see you, sir. Artificial general lasagna.
[00:00:56] A-G-L. A-G lasagna. Yeah, good to see you, man. I'm excited. We've got some great news to talk about today. Lots of OpenAI today, as I scroll through here. I kind of put all that up at the top. Like, let's get the OpenAI out of the way, then we can get to everything else. And then we have, actually right before showtime, some breaking news that we're going to get to in a moment.
[00:01:21] But of course, I'd like to first throw a thank you to everyone who supports us for doing this show on Patreon. We're on Patreon.com slash AI Inside Show. And in fact, our patron of the week, Steve Linthicum. Thank you so much, Steve, for being there since almost the beginning. And yeah, everyone who supports us, we really can't thank you enough. So Patreon.com slash AI Inside Show.
[00:01:46] And then, yes, of course, if you're watching live, of which we seem to have quite a turnout week after week to the live recording, you should subscribe to the podcast just in case you missed the live stream. Then you got it in downloadable podcast form and you won't miss it entirely. With housekeeping out of the way, why don't we jump right into the news? And I mentioned that there was some late-breaking news, right?
[00:02:12] As I was kind of getting everything set up last minute, I noticed on Techmeme that there's a bit of a shakeup happening at Google when it comes to Gemini. And let's see here. I'm going to go ahead and throw this on the screen for video viewers. A leadership change, essentially, happening at Google in the AI division. Sissie Hsiao is stepping down from her role as the head of Gemini's AI chatbot team.
[00:02:40] And in place of Sissie is Josh Woodward. He's been the lead of Google Labs. And we've talked a lot on this show about Notebook LM being such a big deal and kind of one of Google's really big examples of success in AI and a product that's just really useful. And he was one of the main drivers of Notebook LM.
[00:03:05] Also Project Mariner, so the autonomous agent inside of Chrome that we've heard about. I don't know that we've actually seen that necessarily yet. But yeah, so a little bit of a last-minute switch-up. I don't know exactly the reason or the rationale or any specific cause. Like, did something happen with Sissie? Maybe she signaled that she wants to move on to another project. Or maybe Google made the decision. Whatever the case may be, she's out. And Josh Woodward is in.
[00:03:35] I think it was Josh Woodward who brought in my friend Stephen Johnson to be the editorial director of Notebook LM, and clearly who shepherded that into reality and fought for it, because we had a story a few weeks ago about how there was some effort to stop Notebook LM from the Drive team, or whatever they call themselves over there. But interestingly, two things.
[00:04:03] One, he's going to stay in charge of Google Labs while taking this on. So he's just taking on more responsibility all across the board. But it also tells me that with Notebook LM, he had a real good consumer sense of a useful tool, taking this AI and getting it into a form that makes sense to people. There's a whole genre of, there's a new model and it does the so-and-so test 1.3 times faster, and all the stuff that means nothing to people out there.
[00:04:31] Notebook LM became very tangible in its utility and its developments. And so I think this sounds like a good move to make Gemini more tangible in our lives. Yeah. Yeah. Super curious. As we hear more about this, I'm sure maybe at some point we'll get a better understanding of kind of the strategic kind of rationale.
[00:04:57] Because I've liked watching Sissie's star kind of rise throughout Google over the years. And she was leading the pre-Gemini team as well, Bard's development. There was some controversy there with hallucinations, image generation snafus and stuff like that. I doubt that's behind this. But I am curious what that means. And actually. Sorry? She's not leaving Google. Yeah. Right. She's going to go for a new gig soon. Yeah.
[00:05:26] Yeah. And I was just going to say last night on Android Faithful podcast, another podcast that I do, we were just talking about the general tenor of Google right now, how there's so much change happening. So much of the leadership in the Android team that we've seen for years and years, Dave Burke and others, out and replaced by new energy and new blood in that team.
[00:05:51] And then I think in AI as well, and a lot of these efforts are kind of converging and merging. And in that, maybe there's redundancies that Google's actually addressing. And so it's interesting. It's just kind of a moment of change from a leadership perspective inside of Google, and this just seems kind of part of that. Well, I think you put your finger on it. The company needs new energy and new blood. Yes. It needs kind of a kick here. And I think, kind of a new era, right? Yeah. I think it's good. Yeah. Yeah. Cool.
[00:06:21] Well, that's the late-breaking news on Google that we had. Like, literally, I put that in there ten minutes before show time. Yeah, Jason surprised me. I thought, uh-oh, what don't I know? Boom! Yeah. It's not that complicated. It's just some leadership changes. But I had planned on the top of the show being a focus on OpenAI just because there were a number of things that were happening at OpenAI. And I thought, well, why don't we get it out before the first break? And then the rest of the show could be everything else. So there's kind of three things that we can focus on here.
[00:06:50] The first is the news that broke yesterday, or that was announced yesterday, that OpenAI closed a record-breaking $40 billion funding round. And, I mean, it's just pretty incredible. $300 billion valuation. Like, that's a single funding round, $40 billion. It just blows my mind. The next one behind it, I can't remember for sure what company it was. I want to say maybe it was Juul.
[00:07:17] But anyways, it was like a quarter of this amount or a third of this amount. So this is first by a lot. SoftBank leading the investment at $30 billion, though there is a stipulation in there that if OpenAI can't restructure to a for-profit entity by the end of 2025, that that reduces down to $20 billion. So a little bit on the line there for OpenAI to deliver on its desire and promise to restructure.
[00:07:45] And, yeah, Sam Altman kind of punctuated this a little bit by saying on X that in the first five days more than two years ago, ChatGPT gained 1 million users. It took five days. And just the hour before he sent that tweet, the platform gained another million new users. So a lot of new people coming through the door, a lot of eyes and attention on OpenAI.
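As a back-of-the-envelope check (my arithmetic, not a figure from the show), the stat Altman cited works out to roughly a 120x jump in signup rate:

```python
# Altman's two data points: 1M users in the first 5 days after launch,
# vs. 1M users in the single hour before his post.
launch_users, launch_hours = 1_000_000, 5 * 24   # first five days
recent_users, recent_hours = 1_000_000, 1        # the hour before the tweet

launch_rate = launch_users / launch_hours        # signups per hour at launch
recent_rate = recent_users / recent_hours        # signups per hour now

speedup = recent_rate / launch_rate
print(f"{launch_rate:,.0f}/hr then vs {recent_rate:,.0f}/hr now: ~{speedup:.0f}x faster")
```

That is about 8,300 signups an hour at launch against a million an hour now.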
[00:08:12] Yeah, funding rounds normally aren't that interesting because they happen all the time. It's what companies do to keep going. But in this case, the size of it is pretty amazing. And OpenAI started out joined at the hip to Microsoft. And I think this is also somewhat freeing for them, that they get more tied, I think, now to their funder. Yeah, yeah, that's true.
[00:08:36] And, you know, SoftBank, like I said, put in the majority of this funding. Microsoft's still in the mix on this funding round, but at a much smaller amount when compared to SoftBank. And I think a large part of this, my understanding, is investment going to its Stargate project.
[00:08:55] So increasing the amount of compute heavily at a time where, as we've talked about, is that the saving grace for AI right now? You know what I mean? It's a large company really flexing its funding muscle and putting it all into compute. And we might find out that all that isn't as necessary as it was once believed to be. And I think that's still an open question right now.
[00:09:25] Yeah, when we last watched the NVIDIA keynote, every new efficiency, it's not to save you money. It's not to save energy. It's not to reduce the size. Instead, that's the way it scales, kind of internally. We're going to get more and more and more compute. And I still debate whether that's going to be necessary. I agree. I think that it may be an overinvestment. We'll see.
[00:09:46] I haven't read it yet, but Gambling Man, which is a book by Lionel Barber, former editor of the FT, is about Masayoshi Son of SoftBank. And, you know, I forgot that he's lost his fortune a couple times. He puts in just gigantic gambles. And this is a gamble. This is a huge gamble.
[00:10:09] It could pay off immensely or, you know, we've talked about various strategies, the open source or the lack of a moat and other issues, legal issues, Musk. Who knows? This could be risky, but it's a heck of a bet. It's a heck of a bet. I guess when you're – like I don't understand that world. You know what I mean? Like that world is so foreign to me.
[00:10:37] You know, where the money gets put in. How long do they expect to go before they start to see the returns? And, you know, it seems from my outsider's perspective that they are really willing to go a very long time in the hopes that something eventually pays off. And I guess when you're in the business of funding companies like this, looking at the progress, the attention of a company like OpenAI.
[00:11:03] I mean, I know that there's plenty of other elements that factor into it. But I would guess you look at that and you go, well, that's worth betting on, you know? Yeah. But I don't understand the business fundamentals that drive that decision. Well, that's the thing, Jason. There are no business fundamentals here, really. Yeah. That's a good point. I don't fully believe the revenue numbers here and the user numbers here because, you know, I was thinking about it today. It's as if the typewriter gets invented. I have to have, oh, there's a new typewriter. There's a new Royal.
[00:11:33] And I sit down. I'm going to use the typewriter. Well, you don't just sit down and say you're going to use a typewriter. You have something to type. You have something to do with it, right? And that's the problem I have with AI constantly is I think, okay, it's a new model and, you know, I got to play with Amazon. And now what? I think it's constantly a solution in search of a problem in people's lives. That's, again, why the Google news is interesting to me because Notebook LM, I think, was an exception there.
[00:11:58] It was an application-layer rendition of the AI that proved its utility. And obviously, OpenAI has that with its new drawing tools and all that kind of stuff. It's amazing. I don't take anything away from it. But I just don't know what the business fundamentals are in five years. Yeah, and that's a big question. I mean, it's only been two. It's only been two years. You know, that fact blows me away.
[00:12:24] It feels like it's been forever the last couple of years and what we've seen in the tech industry really doing an about face and just completely going down the direction of full-time AI as the pinnacle. But it's only been a couple of years. We've seen a lot. And I couldn't even imagine where we're going to be in five years. And like you say, does OpenAI continue to be? You know, there was a time when we felt companies like Google were, oh, my goodness.
[00:12:52] Could you even imagine there being a company that would out-Google Google? And, you know, that seems less assured nowadays. Nothing is forever. Nothing is forever. Exactly. The average age of Fortune 500 companies, I don't know what the current figure is, but it's young. It's generally young. You know, I go back. I was talking to somebody the other day. I remember back in the day when I was at Time Inc.
[00:13:18] Or I'm sorry, when I was at Advance Publications, we went in with Time Inc. and panicked about this whole web thing and search. We were going to buy AltaVista together. And I was part of that whole task force to do that. You know, thank goodness we didn't. Thank goodness we lost. But there was a time when that seemed like a good idea. That was the winner. That was a smart thing to do. Let's get in on it. Right. Right. Right. Yeah. The winds have changed. They turn pretty quickly, don't they?
[00:13:47] So OpenAI story number two. Probably don't need to spend a whole lot of time on this because we've already talked about the image generation update to GPT-4o. But I think it's worth mentioning that there's been a lot. I mean, have you seen the meme content just blanketing the web? So much of it down the road of the Studio Ghibli remixes.
[00:14:12] And you put in an article, and I definitely want to show this off for video viewers, that the new model is really good at faking receipts. Yes. I mean, what we're seeing here is an advancement in the model, but we're also seeing a loosening of the reins. And it kind of feels very now for where the ethos across the technology industry is,
[00:14:42] where they're like, you know, we've been playing nice and playing into the woke world for the last handful of years. And now we don't have to. So let's just let it rain. And so now the model is not restricted from reproducing receipts. Maybe it was before. But, I mean, this is something that could be used for fraud, basically. But OpenAI is saying their goal is to give users as much creative freedom as possible.
[00:15:10] It could also be used for teaching people about financial literacy, which, I mean, I suppose so. That's a stretch, I think, maybe a little bit. But you know what somebody using OpenAI to generate receipts is doing with them. You know why they're using it. They're using it to fake a receipt and get, you know, reimbursed or whatever the case may be. So a real loosening of the reins. And I think this image generation system is an example of that.
[00:15:37] Yeah, it reminds me of, I think, Zuck of late. It's Zuck, no more apologies. Yes, totally. Right. That's where I am now. We got people in the White House. The feds probably aren't going to come after us. We want to grow, grow, grow. Accelerationism is in. I think that's part of it. Yeah, that's it. Accelerationism. A couple of things here. On the Studio Ghibli stuff, is it Ghibli with a hard G or a soft G? I don't know. I never know. We're back into another GIF-versus-JIF. Yeah.
[00:16:07] It's the other GIF-versus-JIF. It's Ghibli. Ghibli. So on one level, I certainly understand their discomfort with seeing things that are in their style out there. On the other hand, that's what so much of art is. I can sit down today and I can choose to write in the style of Hemingway. Yeah. And nobody's going to stop me doing that because that's not wrong. And I can read Hemingway and understand Hemingway and choose to mimic it and probably not be nearly as good, but that's up to the reader to decide.
[00:16:37] You're a musician. You're famous. You're rich. You have the Jason Howell style. Oh, boy. If somebody comes along, just a musician, and mimics that style, is your reflex to take that as theft or as praise? Yeah. Now, mind you, I make basically zero money off my music. Yeah.
[00:17:06] But if someone were to come along and create music that sounds very much like me, I would actually be kind of intrigued and impressed personally. But I also don't make my living off of it. Would I feel threatened if my entire living was based on a very unique sound that I curated and crafted for the last 20 years and everything? I probably would feel a little threatened by it. But I don't know that I'd necessarily automatically say, so then this technology can't exist.
[00:17:32] I also totally recognize that technology evolves and things are possible now that didn't used to be possible, and that doesn't automatically mean that they're bad. It's a different perspective, different mentality. And the whole copyright, we go over training is different from quotation, and there's still an issue of acquisition. And then there's this fourth issue that comes up regularly, which is in the style of. And can you own your style? I don't think you can. I don't think you can. No.
[00:18:02] I totally agree. Because that's a real slippery slope. Yes. It's one of those things that, you know, how do you define a style? Well, then you can control everybody else's creativity, right? You can't do this. Now, of course, Disney did that for years. You couldn't recreate Mickey Mouse. Okay. That's a character. That's a character. It's a character. Rather than a style. That's like a licensable character. I think that's different.
[00:18:26] If these systems were producing, and I'm not saying that people aren't doing this, but if they're producing Princess Mononoke, or whatever that Studio Ghibli film is, specific characters from there doing things that the studio doesn't agree with, maybe there's an argument to be made there. But if it's a style, I just don't see it.
[00:18:50] Now, I think the flip side of that, though, is still hinges on how did the AI companies that created a system to replicate that style, how did they get the source material? Did they get that legally or unethically or whatever? And copyright law around that content that was fed into it? That's obviously still an open question. But yeah, I completely agree. I don't think you can copyright style.
[00:19:19] That's my I'm-not-a-lawyer take. No. No. And the other issue here is the receipts. It's just issue number 576 in the problems of reality. Yeah. Right? And how are we going to know what's real? How are we going to know the provenance of things? How are we going to know what's what? It's not like people haven't had tools to fake receipts before. Oh, God, yes. To your point. So back when I worked at Time Inc., I had Pro Tools. Sorry, not Pro Tools. I had Photoshop, you know?
[00:19:49] Right. Right. Well, you had Xerox machines in the day. There's all kinds of things you could do. That's true. Way back in the day when I worked at Time Inc., there was one writer at People who would go into Bloomingdale's, look at a couch and say, I think that's about 35 lunches. That is to say, 35 receipts for lunches he didn't take. Yeah. And that's fraud, right? No, absolutely. Okay, that's straight-up fraud. But it was known, right?
[00:20:17] And way back in the day, you could buy baggies. Remember when you used to go to a diner and the receipt was torn off the pad that the waitress had, right? Yep. And you could buy bags of those, because they were just thrown away. It's like, well, give them to me, I'll use them for receipts. Yeah, it was lunch. Interesting. It was $14.95. Right. Taking Jason to lunch, right? I just hope they never asked Jason whether you took him to lunch, because he's hungry. Yeah.
[00:20:46] Jason's just a really hungry guy. Yeah. So it's not new. There's just new methods. Just new methods. And quite honestly, if I pull up these receipts, and again, maybe it's because I know that these are generated by AI. It's hard for me to tell. But when I look at it, I'm like, that looks really good. And it kind of still has that little too perfect AI sheen to it. It does. You know? It does. It looks like photorealism art.
[00:21:15] It's not quite real, but it's almost. You know what I mean? Sometimes you look at photorealism art, and you look at it at a certain perspective. You're like, oh my God, that's real. But the closer you get to it, you're like, okay, I see how this is actually art and not an actual photo. Yeah, the first one that we have there just looks like it's photoshopped. It looks like you imposed a layer on top of a picture of wrinkled paper. Right. And so the text doesn't look very real. You know, there's some iterations and stuff.
[00:21:42] But yeah, but you know, am I looking at it through the, well, I know this was done by AI lens. And that's why it doesn't, you know. The amusing one, if you go down to the Applebee's one, which is below that, it's also the human mistakes that get you. Because the Europeans use the comma instead of the period for decimals. Yes. So the receipt is period decimals until the total, and then it's a comma decimal. Busted! Yeah.
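That comma-versus-period tell is mechanical enough to sketch in code. This is purely illustrative (the helper names are mine, and real document forensics would look at far more than decimal separators):

```python
import re

def decimal_separators(amounts):
    """Return the set of decimal separators (',' or '.') used in
    receipt amount strings like '12.95' or '12,95'."""
    seps = set()
    for a in amounts:
        # digit, separator, exactly two digits at the end of the string
        m = re.search(r"\d([.,])\d{2}$", a.strip())
        if m:
            seps.add(m.group(1))
    return seps

def looks_inconsistent(line_amounts, total):
    """Flag the tell from the episode: line items using one decimal
    convention and the total using the other."""
    return decimal_separators(line_amounts) != decimal_separators([total])

# Like the Applebee's receipt: period decimals on the items, comma on the total.
print(looks_inconsistent(["11.50", "8.99", "4.25"], "24,74"))  # True
```

A consistent receipt, period decimals throughout, would come back False.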
[00:22:10] I mean, even this part of the receipt looks like an overlay. It looks like someone put a text layer on there. We know this is going to get better. Yeah, it will. They will do this well. Yeah, it's going to get indecipherable. Yeah. It will be. Yeah, interesting. Well, there you go. Faking receipts. And then, oh, by the way, that tool is now opened up for free use. You only get, I think, like three images per day if you're a free user. But nonetheless, that's three receipts a day that you can fake. So.
[00:22:40] Two weeks, you got a couch. Dang. I mean, seriously. Yeah. You're making bank with that. And then finally, I thought this was really interesting because this is a topic that comes up on the show a lot. We talk about open models or open source, open weights. And I saw this, personally, I saw this as a good opportunity for me to get more knowledgeable on the different types of AI, be it closed models, be it open weight versus open source.
[00:23:08] The news is that OpenAI has announced plans to work on and create an open weights model, taking feedback from the community; there's a form on their site where they're basically saying, help us build this open weights model. They plan to hold an event in San Francisco sometime within the next few weeks, actually, as well as in the Europe and Asia-Pacific regions, to discuss the feedback, to showcase prototypes
[00:23:35] of the model eventually, and possibly release it sometime this summer. And yeah, I mean, this is definitely a response, a response to the emergence of DeepSeek, and a pushback, to a certain degree, on the question of: do we need to make our models the way we have the last two years?
[00:24:03] Is there actual value in, you know, not going whole hog on compute power, or not completely controlling the experience the way it is with, like, GPT-4 and, you know, Claude 3 and Gemini? Those are all very controlled, closed models. What is the benefit to OpenAI of going with an open weight model? Which, by the way, is different than an open source model. This is not open source, it's open weight. They're very different things, but yeah. Let me stay on that for a second.
[00:24:31] So I think the sequence is this, on our openness scale. Right. Right. There's open use: an LLM you can use without any cost. Then there's open with weights, right? So you can now dig in a little more. And then there's open source, which means you can change it. Is that kind of right? Yeah. Like, you know, think of closed as the systems that are in the cloud that
[00:25:00] exist out there, and you can use them the way they decided all the things. You know, they decided the weights, or the parameters. Weights essentially are the parameters that control the model. The code, the underlying code, the training data, all of that is decided by OpenAI when you go onto the web and use GPT-4. An open weight model, which, you know, Meta's Llama is one example of. Meta has many open weight models that we've talked about. Mistral 7B is another one.
[00:25:30] This is, excuse me, where those weights, those parameters, are publicly available for download and kind of malleable. So now we're talking about systems that you could just run on your local machine instead of entirely in the cloud. Still, though, the code and the training data are closed. They're proprietary. So there's still a little external involvement, a little bit of control.
[00:25:56] Apparently, open weight models are really good for things like industries like health care and for banking because it gives them access to a model that they can run locally, that they can have some control over and not feel like they're sending HIPAA protected data over the internet into somebody else's API and losing control over it. Open source models. This is, you know, where you get to some of the models that DeepSeek is doing. Mistral Nemo is another one. Fully transparent, kind of like open source code, right?
[00:26:26] Fully transparent, fully accessible. You can replicate it. You can modify it. You can inspect it. It's very, very community driven. And, you know, it often lags behind the capabilities of closed models, but it's got a lot more eyes and attention on it. So you have a lot more vision and view into how it's created and what you can do with it.
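The taxonomy Jason walks through can be boiled down to a little cheat sheet. This sketch just restates the episode's distinctions as data; it is not a claim about any particular model's actual license terms:

```python
# For each tier: are the weights, the training/inference code,
# and the training data publicly available?
OPENNESS = {
    # tier:        (weights, code,  data)
    "closed":      (False,   False, False),  # e.g. GPT-4, Claude 3, Gemini: API only
    "open-weight": (True,    False, False),  # e.g. Llama, Mistral 7B: download and run
    "open-source": (True,    True,  True),   # fully inspectable and modifiable
}

def can_run_locally(tier):
    """You can self-host a model iff its weights are published."""
    weights_public, _, _ = OPENNESS[tier]
    return weights_public

print(can_run_locally("open-weight"))  # True: sensitive data never leaves your machine
print(can_run_locally("closed"))       # False: you're stuck with the vendor's API
```

That `weights_public` column is exactly why health care and banking like open-weight models: local hosting keeps HIPAA-protected data off someone else's API.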
[00:26:55] And I wonder, at a business model level, whether this is working as kind of a freemium thing. Yeah. If your competitors are out there, if Llama's out there, a whole bunch of free open source stuff, and Mistral is too, is OpenAI in a position where they have to have more open and free things out there to tie people to their platform? Yeah. Yeah. I'm super curious about that. I think one of my bigger questions is, what does OpenAI get out of this?
[00:27:24] You know, they feel the pressure obviously from others who are going down this approach, and, you know, they probably want to be in all places in some way, shape, or form. What do they get out of that? Are they looking for speed in development? Are they looking to broaden access and broaden the reach, kind of like what you're talking about? Keep pace with Meta and DeepSeek on, you know, the success that they're seeing by being open, which we talk about a lot.
[00:27:54] Um, yeah. And to the point of being able to run on your own hardware: before this, has there been anything from OpenAI that is like, just put this on your machine and go? I think this is the first. And so OpenAI wants to be there too, and we'll see what that yields. Yep. But interesting. It ties into a topic we talk about a lot, and through this story, I feel like I understand it a little bit more now. So yay. Yay. I always like that. All right. We're going to take a quick break.
[00:28:23] Then we're going to get to things that have nothing to do with, or at least aren't directly related to, OpenAI. Everything is OpenAI. Exactly. That's coming up in a second. Trust isn't just earned. It's demanded. And whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical or more complex. That's where Vanta comes in.
[00:28:51] Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks, like SOC 2 and ISO 27001, centralized security workflows, complete questionnaires up to five times faster, and proactively manage vendor risk. Vanta not only saves you time, it can also save you money. A new IDC white paper found that Vanta customers achieve $535,000 per year in benefits, and the platform pays for itself in just three months.
[00:29:19] Join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and prove security in real time. For a limited time, our audience gets $1,000 off Vanta at vanta.com slash AI inside. That's V-A-N-T-A dot com slash AI inside for $1,000 off. All right. Amazon's AI strategy kind of coming more into focus now.
[00:29:47] Amazon's one of those companies where we're like, all right, you were kind of there early, with Madam A, as we'll call her, on the smart speakers. But kind of like Google, it was like, you were there early, and now what are you doing? It took Google a little while to kind of catch up and refine. Amazon is taking longer to do that. But AGI SF Lab is one of their labs working on these products.
[00:30:12] They unveiled Amazon Nova Act, their first general purpose agentic AI model. And this was unveiled on Monday. And right now, you know, and I don't even know that we have access to this. Oh, yeah, it's just a research preview. So if you want to get in on this, you know, as a developer, you're curious about this or whatever, you can go to nova.amazon.com.
[00:30:36] And you can get started with the research preview, control of the web browser, simple tasks, navigating the web, form fill out, calendar date picking, that sort of stuff. But they, you know, have more plans in mind for the future and for upcoming kind of integration with Alexa Plus, which is kind of the overall picture. So I went to it. You can go to nova.amazon.com. And the story says something about controlling your browser. I couldn't get to do that.
[00:31:06] It's just, to me, it looks like another model where you can ask it questions and get pictures and so on and so forth. I tried to ask my standard, as I said last week, my standard is trying to get it to do Johannes Gutenberg at a laptop. And it wouldn't do it. It violated the rules. I think because there's a name in there. They might as well tell you what the rules are so you know, so you don't violate them. But it wouldn't do it. See if it does it for you. Give me an image. And I'm sure I'm misspelling it, but that's okay.
[00:31:35] AI is good at correcting that, knowing what I mean. Give me an image of, I'm sorry, I can't recreate that. Yeah. It's like, nope. Not going to do it. Yeah. As a stark contrast to OpenAI, the guardrails here are definitely on. So you can't do that. But okay. So you can play around with this a little bit. I haven't really messed around with it yet to know kind of the, you know. Again, it kind of ties back into what you said earlier. Oh, I didn't see ACT. Like, okay, cool.
[00:32:05] We got a new one. What the heck do I do with it? So if you go down to the left column there, ACT, that's the take charge of your browser. Oh. Amazon Nova ACT. I'm going to sign up for a, you see, preview. A success. You're all signed up. Okay. That was easy. Okay. Now what? Okay. Now let me do it. Yeah. You can leave Python. Okay. Okay. Yeah. Now what? That's a really good question. There's no indication.
[00:32:34] It's not like it installed a, like an extension on my Chrome browser or something. It's more building with the Nova ACT SDK. So, all right. Well, I'm doing this kind of on the fly. So there's probably some information or some answer in here as far as what I could do. You know, is it a preview of the SDK versus here's a preview of it actually controlling your browser? That I don't know. Yep. We'll play with it. Apparently I'm signed up.
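[Editor's note: since the preview is an SDK rather than a browser extension, the pattern is that you write Python that breaks a task into short natural-language steps. Amazon's published examples suggest a `NovaAct` class with an `act()` method, but the stub below is a self-contained illustration of that calling pattern only, not Amazon's actual API.]

```python
# Hypothetical sketch of the agentic "act" calling pattern Nova Act uses.
# BrowserAgent is a stand-in stub; the real research-preview SDK would
# translate each natural-language step into low-level browser actions.

class BrowserAgent:
    """Stand-in for an agentic browser controller."""

    def __init__(self, starting_page: str):
        self.page = starting_page
        self.log = []

    def act(self, instruction: str) -> str:
        # The real model would drive a browser here; we just record the step.
        self.log.append(instruction)
        return f"done: {instruction}"

agent = BrowserAgent(starting_page="https://example.com")
agent.act("search for a medium dog bed under $50")
agent.act("add the first result to the cart")
print(len(agent.log))  # 2 steps recorded
```

The key design idea is decomposition: rather than one giant prompt ("buy me a dog bed"), the developer scripts a sequence of small, checkable steps, which is reportedly how Amazon gets reliability out of the agent.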
[00:33:04] Now this is not related to the show, but just breaking news right before we got on too, is that Amazon is bidding to buy TikTok. Oh, that's right. So I. That's right. I saw that flash by. And I was thinking about this. Everybody's putting their hat in the ring. One of the other later stories we have is a video platform and the problem of reality. And it strikes me that TikTok is valuable not just for the words and the sounds and the faces and the images.
[00:33:32] It's also for that piece of reality because people are doing things. And you start to train the machine what happens to the egg when it goes off the table. Yeah. Oh, that's true. Good point. There's a lot of cooking videos, right? So could Amazon become really good at envisioning cooking here and doing demonstrations of fake recipes and that kind of stuff, right? So the TikTok piece, there is a little tiny potential AI angle here.
[00:33:59] Or teaching you how to dance because if you're pulling in all your stuff, yep. All your videos from TikTok, there's a whole lot of dancing going on. Or doing a video of me as if I knew how to dance. Yes. Right. Yes, exactly. Cool. Oh, that's interesting. Okay. We'll follow that. Let's see here. Meta. I'm trying.
[00:34:25] I see you put in a Verge article, which is easier to actually show on the video feed than this Bloomberg article that I don't have a subscription to the Bloomberg. Anyways, Meta is working on their smart glasses. So, you know, the Ray-Bans have been a big success for Meta. Now they're thinking, you know, they're going to continue iterating on that. And there's some information in the Bloomberg article that talks a little bit about that.
[00:34:52] But a big part of the article is really about the next phase, you know, the next extension. And honestly, from my perspective, having checked out Project Astra, both the monocular and the binocular versions at Google, I read this. I'm like, okay, so this is, you know, playing in the same sport as that eyeglass solution.
[00:35:15] Codenamed Hypernova, projected to cost between $1,000 and $1,500. There we go. That's probably the better way to say that. With a screen located in the lower right quadrant of the lenses. These lenses in the Verge article are not the actual lenses, just a hero image they included. But a little screen down in the lower right-hand quadrant of the right lens. So you'd have to look down for your information.
[00:35:45] That's a little different from Project Astra, which kind of puts it in your line of sight. It puts it in your line of sight, but it doesn't appear unless you, you know, move your head or, you know, something like a notification comes through. Anyways, that's the comparison point. But upgraded cameras from the Meta Ray-Bans. There would be a second-generation version already in the works as well, Hypernova 2. That would be a binocular lens system.
[00:36:10] So you'd get more of that stereoscopic kind of information through that approach. And, you know, why are we talking about this on an AI show? Because my belief is that form factors like this, artificial intelligence is integral to what makes them useful. Without AI, it's just a camera that you wear on your face. With AI, it's a contextualization layer of the world. Right, exactly. And it knows your context. Yep.
[00:36:40] And it can listen to you, and it can listen to what's around it, and it can see what's around it and all that. I'm getting Google Glass PTSD. Yeah, yeah. It comes up for me too, Jeff. I feel you. I feel you on that. Yeah. But, you know, like I said, I tried the Astra, and they did not look bad. And, you know, like on.
[00:37:03] You know, the thing about the Google Glass PTSD is, at a certain point, I realized, like, I kind of look like a goofball wearing this thing. I don't want to wear it anymore. And I want glasses that just look like normal glasses, but do the neat things. Like the Astra glasses do. And Meta's been in this game for a little while with their Quest headset devices. They've been making eyewear. You know, and then, of course, the Ray-Bans are really close to things that look normal.
[00:37:33] Like, I've seen people wearing the Meta Ray-Bans and not realized that they were actually the Meta versions. I thought they were just Ray-Bans. So, you know, they're getting closer. Yeah. What is it about it that gives you the PTSD? Is it that they look funny? No, it's $1,500. Ah, it's the cost. It's the cost. And so, you know, am I going to be enough of a schmuck the second time around? Well, good. I'm going to have it. Honey, I got to get one. You know, I'm in the AI business. You know, this is what it is.
[00:38:00] So I just went to eBay and the top listing for a Google Glass, $99. Oh, yeah. Worthless. Yeah. Even as an oddity for a museum. Nope. But, you know, it might be the kind of thing that like 20 years from now, it's worth a lot. All right. Let me see what my Apple Newton is worth. I have one. Yeah. Maybe not.
[00:38:30] Yeah. A hundred dollars. A hundred dollars. Okay. He's like Rain Man. It's about a hundred dollars. Yeah. Right. Everything old that's not new again is about a hundred dollars. You know, it's like back in high school, I had a whole bunch of heavy metal t-shirts. And I got rid of them for some reason. And if I had held on to them, there is like a big market now with high value for old school heavy metal t-shirts. You just don't know what's going to pop and what's not. What's that?
[00:38:59] Do you blame your mother for making you get rid of them? Jason, you have to do something. You're not ever wearing these anymore. This is ridiculous. I have one of them. Honestly, when I think about it, I can't remember exactly how that went down. She probably was the one to get rid of them. No, I wouldn't have let her get rid of them, though. Okay. That's the thing. I probably chose to. This fit of maturity. Yeah. It's probably like, no, I don't listen to heavy metal anymore. Or if I listen to heavy metal, I don't feel like I need to, you know, I want to wear it on my t-shirt. Advertise them.
[00:39:28] Yeah, advertise them. Anyways, you don't know what's going to pop. And apparently technology just doesn't pop unless you have the real, real original stuff, you know, the super old-school stuff. So I'll be tempted. I mean, I was tempted by Google Glass. I might be tempted, but I think this time I'm going to wait. Yeah. Yeah. I will definitely be tempted. I'm very, very interested in this. Given it looks good, it's got to look good. If it doesn't, if it looks like technology on the face more than it looks like a pair of eyeglasses. It makes you feel like you're Robert Scoble.
[00:39:58] Yeah. I don't need it. I don't want it. But if it actually looks like the Ray-Bans, I continue to consider, and I know that they're working on new ones, I continue to consider getting them because I'm like, you know what? Like, I've seen plenty of people wear them now, and they look good. They actually look like glasses, like sunglasses that I would want to wear. And it would be really convenient to have a camera like that for some of the video content that I create. Well, but maybe think about that. So if you get them with the sunglasses, then the only time it's usable is when you're outside in the sun. Yeah.
[00:40:27] Or you're just a cool-looking indoor dude. Oh, yeah. One of those guys. I wear my sunglasses inside. Yeah. No, you're right. You're right. I would probably get the regular ones. But I don't know. I don't know what I'd do. I still haven't bought them. Apparently, I don't care that much. Yeah. Elon Musk's xAI acquired X last week, late last week, I think.
[00:40:55] Valued at $33 billion. Air quotes. Yeah. Buys. Yeah. You know, I just saw this story as kind of an F you, essentially. Like, see, we bought it. My other company bought X for $1 billion more than I spent on it. So see, I didn't lose anything. So go away, all you haters. Right. It was a way to pay off, I presume, the debt.
[00:41:22] But it also, I think a lot of Tesla stock was involved in this. And X stock, there was all kinds of interweaving things. And I don't know how that works. But I think it was also a way to try to clean that up a little bit. But it was right-hand paying the left-hand. It's ridiculous. Yeah. It's actually, you know. Yeah. Well, but what it does also, and I think this is what's really, really pertinent to this show, is when we look at data sets that are incredibly valuable, right?
[00:41:50] Like data, AI is made of humans. It's kind of like Soylent Green is made of people. AI is made of humans. And that's what makes it valuable and beneficial. And so much of the really great human contributed content is on platforms like Reddit. It's on platforms like Twitter.
[00:42:10] I mean, it's just the historical data that the vast amount of information that's there of human communication of the last 20 years-ish, something like that, is there. And so if Musk owns both companies and they're separate, then there's still a little bit of finagling that needs to be done, I think, and potential for intervention when it comes to sharing this data with this effort. And now that they're owned together, it's kind of like all bets are off.
[00:42:38] It's like, okay, I can put that entire data source to work and be unencumbered and unhindered in the process. Yep. Yeah. So anyways. Also brings Elon one step closer to his vision of the Everything app, right? Kind of puts everything, you know, that's something he's been working towards. I thought we were supposed to have it by now. I thought we were supposed to get rid of all of our money and just use X. Yeah. Yeah. Someday. Chinese dreams.
[00:43:07] Someday, Jeff, you'll have your dream of Musk's Everything app. Let's see here. And what am I missing here? Oh, yeah, that's right. OpenAI. Okay. Oh, dang. We're back to OpenAI. But briefly. And Anthropic. They're getting along, apparently. At least when it comes to something called MCP. MCP is all about how – and that stands for Model Context Protocol, by the way.
[00:43:36] This is about how AI models connect to data sources. This protocol, I believe, was created by Anthropic, right? I think so. And now looks to be some sort of, you know, potentially widely adopted – I don't know if standard is the right word for it, but approach? Could be. Yeah, potentially. So it's middleware. It's trying to create a structure between the AI company and data sources.
[00:44:04] But what's – it's kind of – the way I look at it, and I'm going to get this completely wrong, and I hope that our listeners will correct me if I am, is that it's kind of an API in reverse. Rather than the data source saying, this is how you get to my data. Hello. This is the AI company saying, this is how you can be gotten. Oh, I see. I think that's how it operates. I could be wrong. Okay.
[00:44:29] But what's interesting in that case is whether the data sources want to be gotten. Mm-hmm. And that's what I think is uncertain here. I think it might have been Benedict Evans who raised this question. I can't remember who did.
[00:44:49] But this presumes that everybody, the AI companies want to connect to, and not just as data sources but also in agentic models, finds it advantageous to be connected. Mm-hmm. And so that by trying to create a standard mechanism of doing that, okay, that makes sense for the AI companies, but does it make sense for everybody else if there's no business model attached? Hmm.
[00:45:17] If there's no value accreting to me as the source. Right. Right. Or that's the data source view. If you look at the agentic side of this, okay, so I'm a travel agent and maybe I sell through the AI to a customer or maybe I get disintermediated and you take all my information and you do everything but the transaction and I don't get credit for it. Mm-hmm. And the AI company gets credit for it.
[00:45:46] These are all kinds of business issues that have to get dealt with. So I don't think it's as simple as saying, here, here's our standard. You're so lucky to work with us. Mm-hmm. Um, I don't know. Yeah. And then the other question is, so fine, OpenAI and Anthropic do this. Will Meta, will Google, will Microsoft itself, even though OpenAI is involved? Um, that's not going to be certain yet either. No, certainly not.
[00:46:13] Yeah, they call this the, uh, a USB-C, uh, what is it? A USB-C port for AI applications. Okay. Um, yeah, wait, like I saw this as like an interconnectivity piece. It's kind of like how do you connect these two things together in a seamless way? But what you mentioned as far as like who it puts the, kind of who it puts in the driver's seat is an aspect that I missed in reading through this. I think so.
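[Editor's note: to make the "USB-C port" metaphor a bit more concrete, MCP sits on top of JSON-RPC 2.0, and the core exchange is a client asking a server what tools it exposes, then calling one by name. A rough sketch of those message shapes follows; the method names track the published spec, but the `get_forecast` weather tool is invented purely for illustration.]

```python
import json

# Client asks an MCP server what tools it exposes:
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Server describes its tools, including a JSON Schema for the arguments,
# so any model-side client knows how to call them:
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "get_forecast",  # invented example tool
            "description": "Weather forecast for a city",
            "inputSchema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }]
    },
}

# The client then invokes a tool by name with structured arguments:
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_forecast", "arguments": {"city": "Oakland"}},
}

print(json.dumps(call_request, indent=2))
```

Notably, in this design it is the server, the data source or tool provider, that advertises what it offers and how to call it, which bears on the question of who is really in the driver's seat.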
[00:46:41] I mean, I could be wrong about this, but I think that's an issue here. Mm-hmm. And then I put up, but I don't fully understand it, um, Steven Sinofsky. I think his first name is Steven, yes? Yes. Steven Sinofsky. Mm-hmm. Thank you. Uh, wrote about this at a more historic level of how middleware has worked in the past. Right. And that in the Microsoft Department of Justice antitrust case, that was all about middleware, he says.
[00:47:11] At the time, the browser itself was viewed as middleware, as was Java. And so, in the consent decree with Microsoft in 2006, middleware was mentioned 38 times. So how does this model work? Is it a standard that everyone accedes to and uses, and then it makes everybody's life easier? Mm-hmm. Or do the big guys in the field refuse to use it?
[00:47:41] Uh, which means that it can't become a standard. Right. Yeah. Well, I think one thing that he pointed out in here is, you know, it runs the risk of being a fragmented solution if even one big player chooses not to use it. Right. Exactly. Yeah. Yeah. So it's a shot by Anthropic and OpenAI to say, play with us. Uh, but I think everybody else should say, tell me why. Yeah.
[00:48:10] Well, and that's another thing that Sinofsky points out as well. Vendors can be concerned about losing control over their experience in the process. And I mean, you know, is Anthropic creating this for the good of humanity and for everyone else?
[00:48:28] There's certainly a, uh, you know, by creating their MCP solution, they certainly have their own reasons and their own rationales for why it's good for their business and for what they're doing to have something like this. And is that, you know, that they are the ones, like you said, to kind of, um, to be in control of, of that experience instead of leaving that control in the hands of the people on the other side. Right. Yeah.
[00:48:58] Yeah. Interesting. It's a, it's such a complex ecosystem that's building here. Mm-hmm. It is. It's hard to keep one's head around, um, who's on first. For sure. Sure it is. Very cool. Happy you put that in there. Cause that was another aspect of AI that, you know, I feel like I learned a lot this week prepping for this show. So that's a good one. Um, we're going to take a super quick break. Then we have a few more stories to round things out. That's coming up here in a moment.
[00:49:31] All right. You know, checking in on court cases around AI and copyright. Right. There's a ruling by the U.S. Court of Appeals for the D.C. Circuit that confirms here in the U.S. that work created solely by AI can't be copyrighted under U.S. law. I find this story potentially a very interesting conversation because of what it gets into.
[00:49:58] So basically the case says human authorship is fundamentally required for that kind of protection. The case itself is Thaler versus Perlmutter. Dr. Stephen Thaler attempted to secure copyright for artwork that was generated autonomously by his AI system, which he called the Creativity Machine. So he created a system, it created art. He tried to secure copyright for that art.
[00:50:24] And the court has now upheld the denial of his copyright application, basically saying, no, you as a human didn't create the art. Therefore you can't secure a copyright. Right. And this got me really curious. And I was like, all right, so are there counter-argument precedents to this in the U.S., and also outside? And of course I'm not a lawyer, and yes, I used Perplexity to kind of help me do a little bit of research around this, but a few things came up.
[00:50:53] So in the U.S., there's the work-for-hire doctrine, which says that employers can claim authorship of work created by their employees. So that's interesting, but that does presume human authorship of some sort. So there you go, but it's kind of straddling the line there. Also in the U.S., there's AI assistance: protection that's granted when a human contributes creatively using AI tools. AI is a tool in that regard, not an independent creator on its own. So therefore that doesn't really support this at all.
[00:51:22] And then in the UK, this is where things are a little different. Their law apparently allows copyright for computer-generated works. The authorship is attributed to the person who arranged the creation. So a broader definition compared to U.S. law. But my question around this is: should creating an AI system that creates art imply human involvement?
[00:51:51] And it seems like the judge here is saying, no, that's not good enough. And I'm not sure that I agree. I agree with you, not the judge. Um, and what's also interesting to me here is that this is a court saying this; meanwhile, the U.S. Copyright Office has been releasing things. In fact, I think they just released part two of their latest report. And your standard is the one that they go on.
[00:52:19] It requires human authorship, and what they concede is that's not an on/off. That's not a binary. Mm-hmm. So what's the extent of human involvement in it? Yeah. Uh, is key. Yeah. These AI systems didn't create themselves. There's always a human involved in it at this point anyways, until the singularity. Dun, dun, dun. Um, but so, at what point do you draw the line?
[00:52:46] You know, if, like, I want to, I want to compare, I feel like I want to compare it to something. Like if I, if I built a Rube Goldberg system that was all these levers and, and marbles and stuff. And part of the system is the marble drags past a marker that swipes across a piece of paper. And then I take that paper and I try and copyright it. Like, like, is that a comparison point? Because I feel like I should be able to do that.
[00:53:16] I created the system that created the artwork. Therefore I should be able to copyright it. I wouldn't think so, but it's really confused. Now the other thing about copyright, and this is where I get to plug The Gutenberg Parenthesis, my book, is that copyright was not created for the creator. Yeah. Right. Right. It was created for the industry. It was created for the booksellers and the stationers, the publishers who wanted a tradable asset.
[00:53:45] And in fact, it alienates the creator from their work. It says that here's a structure where you can, you can sell your rights. You no longer have those rights.
[00:53:54] Um, and then the other thing that interests me about this is that when the Statute of Anne was passed in 1710 in the UK and publishers started to buy the rights of authors, the publishers argued that authors had rights in perpetuity, because it was a right of authorship. It was a right of creativity.
[00:54:17] And so when I bought the author's rights, I got perpetuity. And this went to the House of Lords in a case called Donaldson v. Becket, and that side lost. And so the way I interpret this is that copyright is not a granting of rights, but a limitation of rights. Ah, okay. You start with the idea that when I create something, until such time as I make it public, it is fully mine in perpetuity. The deal here is, when I make it public, what are the circumstances under which I make it public? Hmm.
[00:54:47] And, uh, and so now it's no longer forever. Now it is limited for all the purposes of society that we create for copyright. So it's really not about protecting the author. It's about protecting society. Now, of course, Disney screws that all up in American copyright. Now it's 5,000 years and it's, uh, you know, it's awful. Um, but if you try to take that rationale and put it here on AI, I guess the question is,
[00:55:15] is there a need for the output of AI to be a tradable asset to economically support that creation? Hmm. So if you created a music machine, which you could do once you take one of these open source models and it just turns out, because the way you, you tune it, it turns out songs people just love.
[00:55:38] And, but it's made by the AI without you doing specific instruction from that point on, then you don't own it and you don't earn money for what you put the effort in to create. Mm-hmm. Interesting. That's fascinating. Um, so then at the end of this whole process, as we, I, I feel like I'm repeating something that we've, we talked about like six or eight months ago with this, another story along these lines.
[00:56:05] So then this gentleman should take the output of the machine, throw it into Photoshop and change the hue by one centimeter or whatever. It's like, cool. I did my work. Um, actually, yes.
[00:56:21] So a couple of ways. When Montaigne, the inventor of the essay, published various versions of his work, one of the presumptions about why he did new renditions was so that he could extend his rights over it. The other thing, in the book I'm writing now about the Linotype that I had to get my head back into, is that typefaces are not copyrightable. Hmm. Hmm. But programs, software, are.
[00:56:51] So when PostScript came along and you had a font that was rendered for PostScript, to in turn be rendered by raster image processors and printers and screens, that can be copyrighted. So if you have Helvetica and it's just Helvetica, you can copy it. You can do whatever you want with it.
[00:57:13] But once it's Adobe's Helvetica, not because they made it, not because they designed it, not because they own it, but only because it's a program, can they then copyright it. I see. Oh, that's so interesting. The way around it, I suppose. Yeah. Fascinating. Well, okay. There we go. Sorry. Wonk moments there, but. I love it. No, I love it. When I read this, I was like, I bet you we're going to get to some interesting places with this one.
[00:57:42] Because it's interesting to see how this is all shaping up, and, you know, this could be disregarded at some point too with a new case. And suddenly we're talking about it in a completely different perspective. I guess we'll find out. Um, and then finally, just to round things out, I just came across a story on Wired. It's kind of a cool article about the Bluetooth lady, as it refers to her, Kristen DiMercurio.
[00:58:12] Her voice is the Bluetooth speaker voice, call center voice, you know, a number of other voices that you've probably heard in the last handful of years. And it's kind of a, you know, a little bit of a puff piece sort of thing. It's kind of like, here she is and she's awesome and everything. But what she talks about throughout the article is just kind of a perspective on some of the topics that we talk about as far as the impact of artificial intelligence on the voiceover industry.
[00:58:40] You know, she mentions that many corporate VO jobs are being replaced with synthetic voices. I've certainly heard a lot of synthetic voices out there in the last year or so, but she still believes, as I do and as we've discussed, that human voices, human creation, will always be relevant. It might just become artisan. It might just become kind of a luxury thing.
[00:59:09] And I know that's probably cold comfort for certain people in the creative industry, but I don't think it goes away entirely. And that's kind of her point: she believes that there will always be a desire for the truly human. AI-driven output might exist, it might continue to improve, but knowing something is truly human has its own inherent value, is kind of what I took from this piece. So I think it's wishful thinking, but I think it's worth wishing for.
[00:59:39] Um, I put the link to her TikTok feed in the rundown because I've been following her for months and months. Yeah. And she does have the most remarkable voice. Her timbre is just phenomenal. She is the voice artist here. So if you go down and just pick anything at random, you can see how to create character voices. That's there. If you go down, do, do, do, do. That's four or six down. It's one of these nice little how-tos. TikTok is forcing me to authenticate.
[01:00:08] So let me do that first. It's like, you've got to drag this slider thing. And then I had to, okay. Yes. I'm human machine. Yes. Scroll on. Where am I going? Scroll down about five or six rows here. Or there. Stop how to create a character voice. I haven't, I haven't listened to it, but that's the kind of thing she does. Well, if you play that, can you do a sound character voices and how you make them? So if I want to build a character voice from the ground up, there are six levers.
[01:00:35] I like to call them that you can kind of push and pull until you come up with something unique. The first one is pitch. This is pretty basic. Does your character speak very low or does your character speak very high? The next I like to use is speed. How fast does your character talk? It kind of seems like her voice is AI. I know. Because that's, she is. But also kind of slow. Or you can have somebody who speaks low and fast. The third one is volume.
[01:01:01] Is your character very quiet and timid and just barely using their voice? Or are they loud and crass? This next one's a bit trickier. She's good. She's really good. She's just amazing. Yeah. Yeah. So her handle here is KDImerc. KDImerc. I followed. And the voice is just absolutely mesmerizing. So yeah, she was doing, before she, I think, outed herself as a Bluetooth voice. Uh-huh.
[01:01:30] But she would do those things and you'd say, oh my God, that sounds exactly like my device. Yeah. And there's a way to do it. There are stories in New York. There's the lady who was the voice of every airport in the country, right? Has a slightly southern twang. There's the person who was the voice of the New York subways, who was a radio guy and was trans; I think she became a woman.
[01:01:57] Um, and, um, nobody kind of knew that until she outed herself, uh, in terms of being that voice. Mm-hmm. I think it's really messy. I can, you know, I get mumble mouth. I'm not very good at this. I really admire people who have that control over their instrument. Yeah. So she's so good. And, uh, that yes, I think she'll always be hired. I think that's true.
[01:02:24] But I don't think it's the case, uh, that every audio book is going to be produced by human beings still. I think we're going to, and we've talked about this many times in the show, that I see a markup language for emotion and jokes and things like that. I think ways to edit a computer generated voice. I think we're going to get there where a lot of this is going to get taken over by that. Yeah. Interesting. Yeah. I realize it's wishful thinking. I mean, I, and, and I, I, yeah, I think, I think we're both kind of on the same page on this.
[01:02:55] The, the, the human aspect, like the, the kind of the drive to be drawn towards the human output simply because it's human might, yeah, might, might prove to be kind of unnecessary. Right. Like the AI systems might get to a point to where it is truly and completely impossible to decipher the difference.
[01:03:20] And then you're just kind of banking on the assurance of the person who you're following. Like if you know that she exists and she does the thing, then you're supporting her to support her. You know what I mean? Versus. Yeah. Versus any real kind of obvious, like recognition that there's a difference between the two things. Cause we might get to the point to where there is no difference. And yeah, that.
[01:03:43] The way I've talked about it when it comes to AI generated stories of things like financial reports and baseball scores is the way I've said it is that text is just another form of data visualization. It's a way of imparting information and visualization. And visualization is the wrong word now, but audio is also just another mechanism for information communication. Yeah. Right. Right.
[01:04:08] So if it's good enough, if it doesn't lose my attention, if it gets across all the layers, it needs to get across, not just the words themselves, uh, the computer could well take over much of this. And that's okay. Right. We see this happening in newspapers. Now there's stories they're putting up with AI voices. And if you'd rather listen to the news story for five minutes, there it is. I don't think it's become huge, but enough papers are doing it at low cost that I think it has ongoing value. Yeah. Yeah.
[01:04:38] I think you're right. Well, we have reached the end of this episode of AI Inside. Always so much to learn and to talk about, and I've just really enjoyed it. So thank you, Jeff. Always. Thank you, boss. Hopping on each and every week to talk about AI for an hour. Jeffjarvis.com is the website that you all should know by heart at this point because you go there every day. About every day. Every week. Every day you go to that website and you order a new copy of Jeff's book. Right?
[01:05:08] Right. The Web We Weave, The Gutenberg Parenthesis, Magazine. We will certainly let you know when Jeff's new book on the Linotype is out. I've got to finish it first. As of this weekend, I have a first, very rough draft. Excellent. So I'm working away now. Now I'm going through thousands of printouts to see what I left out. Then I'll stick that in. And then I'll say, oh, what a mess this is. And then I'll take a rough edit. Then I'll edit and edit and edit and edit. So it's a little while yet.
[01:05:38] That's an accomplishment, though. First draft. That's kind of like, okay. I really enjoyed writing this one. This has been a whole bunch of fun. But it is, as I think I said last week, the horror hallway that keeps going down. The horror hallway. I like that. I love doing this show with you, man. Thank you so much for hopping on each and every week. Thank you, everybody, for checking out the website, which is AIinside.show. That's where you can go to subscribe to the podcast. Find everything that you need to know about past episodes.
[01:06:08] They're all there. RSS feeds, socials, everything. And then finally, patreon.com slash AIinsideshow. If you want to support us on Patreon, you can do that. We have some amazing supporters on Patreon, including some executive producers, whose names I'm vamping as I try and pull this thing up. There we go. So we've got the little scrolling thing. If you support us on Patreon, you can get ad-free shows. You can get Discord community access.
[01:06:37] At the executive producer level, which includes all the names that you see at the bottom of the screen, you get a t-shirt, an AI Inside t-shirt that I think is just a really neat t-shirt. And you get the pride in supporting the show and having your name read out. Dr. Do, Jeffrey Maricini, WPVM 103.7 in Asheville, North Carolina. I hope that radio station, by the way, has gotten so many new listeners from us reading this each and every week. I think that's so smart.
[01:07:05] Dante St. James, Bono Deiric, Jason Neffer, and Jason Brady. Y'all are awesome. Thank you endlessly for your support and for enabling us to do this. We will be back next week. We have a very special guest lined up for next week. I've already announced it, so I don't need to play coy with it. Yann LeCun from Meta. Jeff and I will be doing a recording.
[01:07:30] It won't be a live broadcast recording, but we'll be doing an interview with Yann later this week. So next week, we're going to have a pretty special episode with Yann. And yeah, I'm really looking forward to it. It's an amazing opportunity to talk to someone who is pivotal to this moment. So next week's show will be live, but we will have a prerecorded insert in it. So please be here next week. Yes, indeed. We'll have the normal live show next week.
[01:07:58] We're just going to play the interview and then probably bat around a couple of stories, whatever seems right. But we will be here doing a live show next week. So thank you, everybody. AIinside.show. We will see you next time on another episode of AI Inside. Bye, y'all. Take care, everybody. Bye.