White House: Open Source AI is A-O-K
August 06, 2024 (1:03:27)

[00:00:02] At Vertex, we know that the pace of global commerce is increasing, which makes tax administration more complex.

[00:00:08] And your in-house systems are not designed to handle this tax complexity.

[00:00:14] That's where we come in with our platform, which enables Continuous Compliance.

[00:00:19] So you get more transparency, more accuracy, and more confidence in your tax data.

[00:00:25] Want to learn more about Continuous Compliance?

[00:00:27] Then visit vertexinc.com.de

[00:00:35] This is AI Inside Episode 29, recorded Tuesday, August 6th, 2024. White House says Open Source AI is A-OK.

[00:00:47] This episode of AI Inside is made possible by our wonderful patrons at patreon.com slash AI Inside Show.

[00:00:53] If you like what you hear, head on over and support us directly. And thank you for making independent

[00:00:58] podcasting possible. Well, what's going on everybody? Welcome to another episode of AI Inside, the show

[00:01:10] where we take a look at the AI that is sprinkled throughout so many things in this world. Not even just

[00:01:16] the big technology companies, but the little guys. We're going to talk about some of the little guys and how

[00:01:21] they're being swallowed up by the big guys in today's show among many other things.

[00:01:27] I am Jason Howell, of course, joined as always by Jeff Jarvis. How you doing, Jeff?

[00:01:31] I think AI is more like, pardon me, the beast inside Sigourney Weaver in Alien.

[00:01:37] Yeah.

[00:01:39] Trying to get out.

[00:01:40] And it's just trying to get out and take over the world. Well, at least we know, at least I think so. I think I lost track with the Alien franchise, but at least in the early on ones, spoiler alert, but I think the humans end up, you know, coming out on top.

[00:01:56] I think so, otherwise there wouldn't be sequels, right?

[00:01:59] There wouldn't be sequels, exactly.

[00:02:01] Right.

[00:02:01] But I don't know. Maybe that changes later on in the game. That makes AI sound so sinister, like so grotesque.

[00:02:08] As some do.

[00:02:11] Many people believe. I was at the dog park today talking with some people, and those conversations often go all over the map. And I was definitely talking to a few people who did not like AI.

[00:02:25] Really?

[00:02:26] Yeah. So they see it as that grotesque alien, for sure. And they're not alone.

[00:02:33] So before we get started, huge thank you to the people who support us in doing this show each and every week, patreon.com slash AI inside show.

[00:02:41] And you can support us directly like Roland Eichel. I think I'm pronouncing your last name correctly. Or Eichel.

[00:02:48] Anyways, Roland, you are awesome. So is everyone else who has hopped over to patreon.com slash AI inside show and thrown us some support.

[00:02:58] We truly could not do this show without you. So we appreciate you being here with us.

[00:03:03] And, you know, if we aren't – I mean, you can still do the survey if you like, but I'm not going to promote it anymore.

[00:03:09] It's AIinside.show slash survey.

[00:03:11] But I think if there's homework today, if I can give you an assignment – I was realizing, you know, we're now nearly six months into AI Inside.

[00:03:21] We have some wonderful ratings and reviews. In fact, if I can say so myself on the podcast, Apple Podcasts, I think they're all five-star ratings.

[00:03:32] I mean, if you go there, it says 5.0 out of 5.0. So that's amazing.

[00:03:36] Thank you, folks. Thank you. Thank you.

[00:03:37] Keep it up. If you are not one of those numbers, please go over there. Take the two minutes that it takes to jump over there.

[00:03:44] Leave us a review. Those things – these all are really helpful for us as we search out, you know, sponsor opportunities and really just try and grow the show in general and, you know, raise the awareness of AI Inside.

[00:03:58] So that's your homework. I hate to put it like that, but my kids are starting back to school, so it's top of mind for me.

[00:04:06] Okay. And then we had some homework leading up to today.

[00:04:09] It was this week's news, and we've got some really great stuff to talk about, starting with Character.ai.

[00:04:17] The CEO, Noam Shazeer, was announced just this last week to be returning to Google, where he worked up until 2021, when he left to launch his startup, Character.ai.

[00:04:36] And when he was at Google, he was really instrumental in the foundation of LaMDA, their conversational AI project.

[00:04:45] And so it's kind of a big deal that he's returning. Most of the staff in this deal is staying at Character.ai.

[00:04:53] I wonder how they feel about this. Probably not very good.

[00:04:55] But I don't know. Does this – it seems like that maybe – and I think the Wall Street Journal article that we have in the rundown as well kind of alludes to this,

[00:05:06] that maybe this is showing how big tech is really capitalizing on the technology that is created by the smaller startup AI companies,

[00:05:18] but doing it in a careful enough way that it won't raise that kind of magnifying glass of regulation of –

[00:05:29] Antitrust.

[00:05:30] What is it? Anticompetitive, antitrust acquisitions.

[00:05:34] Instead of acquiring Character.ai, I think it's notable that Google is hiring away the CEO,

[00:05:40] and the rest of the staff mostly stays behind. What do you think about that?

[00:05:45] Basically, the same thing happened with Inflection and Microsoft and Mustafa Suleyman.

[00:05:49] Yes.

[00:05:50] Who got acquired over to Microsoft. Inflection stayed. Inflection got a whole bunch of revenue.

[00:05:57] The investors were made whole. And that was engineered, as reports have it, by Reid Hoffman.

[00:06:04] This one feels a lot like that.

[00:06:07] It does.

[00:06:07] That the CEO is going back to Google, but Google also licensed Character's technology.

[00:06:18] And once again, the investors supposedly will be made whole.

[00:06:22] And Character continues as a company.

[00:06:25] And it got revenue.

[00:06:27] And it got kind of an endorsement from Google.

[00:06:29] And it's not an acquisition.

[00:06:33] And you're right, maybe, maybe has less antitrust smell to it.

[00:06:39] However, the FTC, I think it was the FTC, or some part of government, is looking at the Microsoft-Inflection deal.

[00:06:47] So they might be on to this in some way.

[00:06:50] I'm not sure.

[00:06:51] You think, forgetting the predictions and the politics of it, do you think this is legitimately not antitrust?

[00:07:01] Or do you think this is a flexing of muscles by the big guys, Microsoft and Google?

[00:07:05] Oh.

[00:07:06] I mean, I think my gut tells me that it is definitely flexing muscles of the big guys at Google.

[00:07:12] Cool.

[00:07:13] I mean, what it really does is – and I don't think this is a good thing for the industry, for the development and innovation around AI –

[00:07:27] It does further strengthen and embolden the biggest of the big, right?

[00:07:34] Because they have the capital with which to buy off the talent.

[00:07:41] And character.ai is left with the remaining employees.

[00:07:45] And, you know, sure.

[00:07:46] And I didn't research the name of the new CEO.

[00:07:51] I probably should have written that down.

[00:07:52] But, yeah, I think all it does is it helps the big tech companies strengthen their fortress around AI.

[00:08:02] And that's probably not very good for the competitive nature of that particular part of the industry.

[00:08:09] And so I guess in putting it like that, that does sound a little like they're creating an environment that's less competitive for them.

[00:08:17] Right.

[00:08:17] On the other hand, so there was a kerfuffle that's not really directly AI-related where Reid Hoffman, to mention him, has been raising a lot of money for the Harris campaign.

[00:08:28] And he also has been criticizing Lina Khan of the FTC.

[00:08:32] And the far-left progressives have been going after Reid Hoffman on social media this last week or so.

[00:08:37] How dare you?

[00:08:38] You're trying to use your money.

[00:08:40] And Reid appeared on CNN.

[00:08:41] He got questioned this way.

[00:08:43] And he wrote a post about this saying, I have my opinions.

[00:08:46] I have my expertise is the way he puts it.

[00:08:48] And I separate the two.

[00:08:50] I've not talked to Kamala Harris about firing Lina Khan.

[00:08:53] But I think Lina Khan's bad.

[00:08:55] And he could kind of have that opinion.

[00:08:57] But the point he was making was that if acquisitions get shut down as antitrust, then the ability to start companies and get investment gets severely hurt.

[00:09:12] And it's interesting.

[00:09:13] If you can be somebody at Google and you – I mean this guy whose name I'm suddenly forgetting, Shazeer.

[00:09:21] I think that he launched LaMDA, which was their first big model.

[00:09:26] He's a big deal.

[00:09:27] But then he can go out and he can scratch his entrepreneurial itch and make something.

[00:09:34] And a company can be born.

[00:09:36] And he can come back to Google.

[00:09:38] And so it's really interesting.

[00:09:40] You could look at it as a revolving door.

[00:09:41] But I also could argue – I asked you the question about antitrust.

[00:09:44] I kind of agree, but I could also argue to the contrary.

[00:09:48] This is a way to spark more innovation and entrepreneurship and startups and growth without acquisition.

[00:09:57] And it uses the power of Google and Microsoft to positive ends.

[00:10:02] I can argue with both sides, and I just did.

[00:10:05] Yeah.

[00:10:05] No, I appreciate that.

[00:10:06] That's true.

[00:10:07] I mean I'm really torn on it because to a large degree, this is how Silicon Valley has operated for so long.

[00:10:17] As an entrepreneur, like if you are Shazeer and you've made this really wonderful, interesting, different thing at Google, you might be a really amazing engineer.

[00:10:34] And then you do something like that, it's like, well, how am I going to be as a business person?

[00:10:39] If I want to start my own business and I – what does that road look like?

[00:10:43] I think there's a lot of people in Silicon Valley who have followed that path and maybe recognized on the other end of it that they're not as good at business as they are with the technology itself.

[00:10:55] And that's just their pathway, and that's just part of their story.

[00:10:59] And yeah, so I guess it's kind of hard to knock it as far as that's concerned.

[00:11:05] Hey, good for Noam because I'm sure he gets a tremendous payday out of this.

[00:11:11] And who even knows what that leads to five, ten years down the line, the experience that he has with this company creating something else.

[00:11:17] Yeah.

[00:11:18] That could indeed happen.

[00:11:20] Dan Primack at Axios said this deal is about fundraising fatigue, too.

[00:11:26] From the perspective of the startup, you're always out there raising money.

[00:11:31] You just want to build stuff.

[00:11:32] And in this case, it says that Google signed a non-exclusive license for Character AI's models and bought out the venture investors at around $2.1 billion valuation.

[00:11:46] So it's a way that the company gets capitalized and keeps going.

[00:11:50] And so I think that the laws as they stood are not adequate to where we are today.

[00:11:55] We'll see.

[00:11:57] One question I have about this, and Gary Marcus has been writing about this and shouting it from the rooftops for quite a while, is his opinion and his view.

[00:12:07] And by the way, I would love to get him on in a future episode to talk about this.

[00:12:11] And what I think about this is that the bubble is popping, that 2023 was the year of energy and excitement and possibility around AI companies and AI technology.

[00:12:25] And 2024 is quickly turning into the year of the reckoning of AI companies.

[00:12:31] And through that lens, is this one example of that, right?

[00:12:36] Like Character.ai, it's not like it was an unknown brand.

[00:12:39] It's not like it was horrible technology.

[00:12:41] But did it kind of reach a point where the CEO, creator, founder, any CEO in that position, is going to hope that the product they create goes so wild and so big that they maximize that opportunity?

[00:12:58] Being bought back by Google, does that say something about the potential slowdown of AI the way Gary Marcus might argue?

[00:13:07] Yeah, but Gary's been arguing.

[00:13:09] He's been getting more and more contrarian about AI.

[00:13:12] He already has been, but he's really – he's got a pin at a birthday party going after the balloons.

[00:13:21] And so this is one more.

[00:13:23] Is this – we talked about this last week on the show.

[00:13:25] I do think that much of AI valuation is overhyped.

[00:13:31] The clear revenue path isn't there.

[00:13:33] Yet, on the other hand, it's pretty amazing stuff.

[00:13:36] Right.

[00:13:37] And it deserves – that deserves to continue to be developed and pursued.

[00:13:41] It's not like there's no there there.

[00:13:43] Right.

[00:13:43] But I don't think the there there is AGI.

[00:13:47] That – I think a mistake is being made that this is being oversold in that sense.

[00:13:51] And Gary Marcus, I think, is completely right there.

[00:13:55] However, it would be dangerous, I think, equally to undersell this.

[00:13:59] I think it's being used in the wrong ways.

[00:14:01] I think it's being promoted in the wrong ways.

[00:14:02] I think it's being invested in the wrong ways.

[00:14:04] However, there's neat stuff here that has capabilities that we didn't have before.

[00:14:09] And it's not just language.

[00:14:10] So we'll see.

[00:14:11] Yeah.

[00:14:13] Yeah.

[00:14:13] We will see.

[00:14:14] Well, speaking of some of the biggest leaders in AI, NVIDIA is likely to delay its pretty major AI chip, the Blackwell B200, by possibly three months, according to The Information.

[00:14:33] This is due to a design flaw that was reportedly discovered late in production.

[00:14:40] NVIDIA, of course, is not commenting on the rumor.

[00:14:42] They did not give a comment to The Verge when reached out for that.

[00:14:46] But a big deal nonetheless.

[00:14:47] This is a chip that is meant to be the successor to their current H100 chips that you're going to find driving a whole slew of major AI cloud companies and their technology and everything.

[00:15:03] So very anticipated and a three-month delay.

[00:15:10] At least they're doing what – I'm assuming they're doing what they need to do to make sure it goes well, and that's respectable.

[00:15:17] Better it gets caught now than later.

[00:15:19] Yes, indeed.

[00:15:20] I'm thinking back.

[00:15:21] Was it a Pentium that had a floating-point bug?

[00:15:25] Yeah, the big Intel.

[00:15:27] Right, an Intel problem.

[00:15:28] And so the chip ended up out there in the wild, and people were trying to figure out how to compensate for that.

[00:15:35] You don't want to be in that position in a chip as complex as this.

[00:15:39] God knows what the error was.

[00:15:41] Three-month delay in any sane world is not a huge deal.

[00:15:44] In this world, which is moving fast as can be, when people are waiting on these chips, that's a bit of a problem for some.

[00:15:52] Probably doesn't threaten NVIDIA's placement in their part of the industry, I'm guessing.

[00:16:00] I don't think so.

[00:16:00] Three-month delay.

[00:16:01] I don't think that suddenly gives some up-and-comer the chance to overcome or overtake, but it is notable.

[00:16:08] And it fits into the Gary Marcus argument, that mood that those who are looking for pins to pop bubbles with.

[00:16:15] Sure.

[00:16:16] Sure.

[00:16:17] Well, NVIDIA is not without its errors, and we'll see where this goes.

[00:16:23] Yeah, yeah.

[00:16:25] Somewhat related to this, I came across a New York Times article about NVIDIA also facing regulatory pressure in the EU, Britain, the US, and China,

[00:16:37] all with inquiries into the company's business practices, their dominance in the industry.

[00:16:46] And yeah, yeah, just kind of, you know, they're not immune to that heat either.

[00:16:52] Right.

[00:16:52] And they do have a huge part of the market for these chips because they were way ahead.

[00:16:58] Yeah.

[00:16:58] They were in graphics chips.

[00:17:00] I will never fully understand how – was it sheer luck that NVIDIA was in the graphics chip business and graphics chips turned out to be the right things to run transformers and neural nets?

[00:17:16] And I've never kind of gotten an answer to that.

[00:17:18] I've asked Leo, and we should probably go back to Stacey Higginbotham, who knows her chips.

[00:17:24] Yeah.

[00:17:25] Because was it dumb luck by NVIDIA's part, or were they strategically from the beginning aware that they were going to be at the heart of this?

[00:17:33] I've never understood that.

[00:17:34] Yeah.

[00:17:35] Yeah.

[00:17:36] Yeah, that's a really great question.

[00:17:38] And if it is dumb luck, wow.

[00:17:40] What dumb luck?

[00:17:41] You just hope, you just hope if you're running a business like this, that something like that comes along because it really has been very definitive for the industry.

[00:17:51] Yeah.

[00:17:52] Yeah, pretty crazy.

[00:17:54] I guess this first block is filled with regulation stories.

[00:17:59] As our world is these days.

[00:18:02] Speaking of U.S. regulation, the White House released a report focused on open source AI and specifically just taking a look at what the White House and what government feels like it needs to do at this stage as related to open source AI technology.

[00:18:21] Essentially stating, in the case of this report, that there is no current need for restriction on companies making models open and wide for usage.

[00:18:33] So think Meta and its Llama open kind of strategy.

[00:18:38] This originated from Biden's executive order on AI last year.

[00:18:45] So this is kind of a part of the result of that, not comprehensive entirely, but it's a part of it.

[00:18:51] Yeah, and I'm glad here that they're standing by open source.

[00:18:55] As we discussed on the show, there was some discussion in the EU, though it didn't turn out this way at this point, about whether open source should be restricted. The argument is that if guardrails are built into models, then open sourcing them means people can get around those guardrails, which they can do anyway.

[00:19:11] And they're less safe.

[00:19:41] Which is a huge contribution to the market on its own.

[00:19:47] But open source also matters greatly, I think, especially in discussions around safety and trying to get more eyes upon it.

[00:19:52] And to think that anybody should get rid of open source at this stage would be just wrong and foolish and short-sighted, I think.

[00:19:59] So I'm glad to see this.

[00:20:01] I'm trying to understand.

[00:20:03] So then what is the difference between – so what is Leo saying, that it's not open source in the way that we're used to seeing open source code?

[00:20:14] And because it's AI, it's done differently?

[00:20:18] I think – well, tell me if I'm wrong here, Dr. Android.

[00:20:20] But I think if you see it in Android terms, Android is made available for free, but it's not open source as Linux is.

[00:20:29] I see what you mean.

[00:20:30] But you can't take it and fork it and contribute to it and do all of that.

[00:20:34] You can, yeah.

[00:20:34] But you do get it free with some conditions.

[00:20:37] Hello, FTC.

[00:20:40] And that itself has a huge impact on the market.

[00:20:46] So Meta's AI is – I think Leo's right.

[00:20:49] I think that's similar.

[00:20:51] Huge impact.

[00:20:52] Don't take anything away from it, but it's not fully open source.

[00:20:55] Got it.

[00:20:55] It's not Linux.

[00:20:57] Right, right.

[00:20:58] Okay.

[00:20:58] Very interesting.

[00:20:59] Because open source purists are purists for a reason, right?

[00:21:02] Yes.

[00:21:02] This is the deal.

[00:21:03] This is what it means.

[00:21:04] If you're going to be open source, this is what it means to be open source.

[00:21:06] It's all or nothing here.

[00:21:07] You can't just be sort of open source.

[00:21:09] You've got to be fully open.

[00:21:11] Yeah, yeah, yeah.

[00:21:13] No, that makes a lot of sense.

[00:21:14] Interesting.

[00:21:15] Yeah, I think it's a great thing.

[00:21:16] I think it was refreshing, I guess, when I read this.

[00:21:22] And in a time when I think there's so much FUD, so much fear and reactiveness around AI that I guess I was a little surprised to see them come back and say,

[00:21:33] no, hey, right now there's nothing to be concerned about.

[00:21:36] Let it be.

[00:21:38] And I would agree with that.

[00:21:40] And, you know, having talked about regulation in other ways, we will, when we get up to Google as well.

[00:21:45] I saw, I didn't put it in the rundown, but I saw an interesting thread today about the European AI law.

[00:21:52] And there's, among those who would not favor heavy regulation, there's already argument that it went overboard and it didn't really do it right.

[00:22:02] It's got to figure it out.

[00:22:03] I want to look more carefully at that and see what we think about that.

[00:22:07] But it's important when you do have regulation. Generally, the EU AI Act and the White House AI policy were pretty well received.

[00:22:15] But it's important to look at what impact they have and whether there are unintended consequences, whether there's regulatory capture and so on.

[00:22:25] Let's see here.

[00:22:26] I did not put this in the rundown, but we were talking about it a little bit in pre-show.

[00:22:33] And I brought up the Google, U.S. versus Google.

[00:22:36] Since we're on regulation.

[00:22:38] Yep.

[00:22:38] Yeah.

[00:22:38] Since we're on regulation, we might as well throw this in here real quick.

[00:22:41] And I mean, essentially a federal judge ruled that Google has in fact illegally monopolized the search market through the deals Google made with Apple to pay for its search to be on the iPhone, which in itself sounds more like a search slash mobile and smartphone story.

[00:23:07] But you were saying in pre-show that this has some interesting kind of correlations and questions that it brings up around kind of this moment in AI where we're seeing AI kind of move more into the search direction, similar to what we were talking about last week with search GPT and, of course, Google's generative search product.

[00:23:31] So, yeah, talk a little bit about that.

[00:23:33] So I always think – I know sometimes I sound like I'm libertarian.

[00:23:36] I'm not – I'm a – not to get political here, but I'm a Kamala Harris Democrat, right?

[00:23:40] So I'm – and I'm happy about the VP's pick.

[00:23:43] So that's where I am on the spectrum.

[00:23:47] But when it comes to regulation of technology, I think it's inevitably stuck in a problem: it's either going to be too far behind or too far ahead, but usually too far behind.

[00:23:59] And too far ahead, if we think we've got to regulate AI and stop open source, that would be far too far ahead.

[00:24:04] But too far behind is Microsoft and the EU – what is it now, 15 years ago – when there was an arc for Microsoft.

[00:24:13] They seemed all powerful.

[00:24:14] Nothing was going to stop, and they're going to keep going up.

[00:24:16] The market was trying to affect Microsoft.

[00:24:18] But at that peak is when the EU said, we've got to go after them.

[00:24:22] And it became, to my mind, fairly meaningless.

[00:24:25] You could argue about – we're proving the negative here about whether that mattered or not.

[00:24:30] Same with Google and Search.

[00:24:33] Search has not been – you know, Google some years ago was 98% advertising on Search.

[00:24:38] That was what the company was entirely.

[00:24:39] Now it's AI, it's Android, it's all kinds of other things.

[00:24:44] And on Search alone, it's getting challenged now by AI search, I think next agentic search, and also other consumer habits.

[00:24:55] So yes, there's Perplexity or there's Anthropic or things like that people could use for search, or Microsoft saying they're putting search on.

[00:25:02] But there's also user habits like young people using Search on TikTok.

[00:25:08] And that that challenges the hegemony of Google.

[00:25:12] So the regulation comes to my mind a little late, and the market – maybe the market wouldn't do it adequately.

[00:25:18] Maybe they wouldn't do it adequately.

[00:25:21] But there's issues.

[00:25:22] Now the other issue, which is not really an AI issue, but it's a concern about this decision, is that I'm really worried about Mozilla.

[00:25:30] Because Mozilla got a big bucket of money from Google to be the preferred search there.

[00:25:35] So did Apple, and so did Samsung.

[00:25:40] And to my mind, that wasn't antitrust.

[00:25:42] If it were antitrust, it would have said Google.

[00:25:44] Google would have said, we're not giving you a damn thing, but you've got to use us because we're so damn powerful.

[00:25:48] Instead, they negotiated considerable payments.

[00:25:51] And those payments, I'm going to guess, are probably going to go nowhere now.

[00:25:55] And that's going to affect the bottom lines of especially Apple, also Samsung, but really Mozilla.

[00:26:02] And I don't see any consumer harm here.

[00:26:05] This is what I sound like a libertarian.

[00:26:09] But I think it's important to kind of look at the rationale here.

[00:26:13] Now, European antitrust regulation is different.

[00:26:16] It can just go after big for being too big.

[00:26:18] The U.S., it's about consumer harm.

[00:26:19] And I don't think there's consumer harm here.

[00:26:22] I think that I've always had a choice.

[00:26:24] I can use Bing any day.

[00:26:26] It's not as good.

[00:26:26] For sure.

[00:26:28] I can use DuckDuckGo.

[00:26:29] I'm tired of their ads, and I wish they'd stop.

[00:26:31] I'm not going to use them.

[00:26:32] I like Google.

[00:26:33] I use Google out of choice, not out of force.

[00:26:37] And I think that the companies that included Google as their search default were making a good consumer choice.

[00:26:44] If the consumers hated it, they wouldn't have done it.

[00:26:48] But now…

[00:26:49] Because they still have the ability.

[00:26:50] They still have the ability even when presented with…

[00:26:53] Absolutely.

[00:26:54] They're still a chance.

[00:26:56] But now that…

[00:26:57] And I have problems whether a search…

[00:27:01] Chat search is going to really take over for regular search.

[00:27:07] But who knows?

[00:27:08] And it's a question out there.

[00:27:09] And so here comes the court while there's still a big question mark on the horizon that the market needs to grapple with, I think, first.

[00:27:17] So we'll see what happens on appeal.

[00:27:19] At a point where the search product itself might be not imperiled but certainly threatened to a much larger degree than it has before.

[00:27:34] And now you've got these AI search companies coming in.

[00:27:39] Already they were coming in to challenge.

[00:27:40] Right.

[00:27:41] Does this then strengthen that product and, yeah, possibly flip things around?

[00:27:48] Maybe it's just the natural flow.

[00:27:51] Maybe the AI search product, once we all use it and everything, maybe it is better.

[00:27:58] Yeah.

[00:27:58] Maybe it is.

[00:27:59] And then the other part of it too is that, again, this is not part of this decision or part of this show.

[00:28:04] But the advertising market…

[00:28:05] Google is under attack in the FTC, but the advertising market…

[00:28:09] Yeah.

[00:28:09] …I was approached by a law firm about whether or not I would testify as an expert on the topic.

[00:28:15] And then I said, well, this is what I've said publicly.

[00:28:17] And so they said, never mind.

[00:28:18] Because what I've said publicly is I think this is where Google is most vulnerable, is they do have tremendous power over the ad market.

[00:28:25] They do own both sides of it.

[00:28:26] There are issues there.

[00:28:28] If you want to go after Google for antitrust purposes, that's more the place than search.

[00:28:31] Search is yesterday's battlefield.

[00:28:34] Mm-hmm.

[00:28:35] It was like going after Microsoft for the browser.

[00:28:37] Who cares?

[00:28:40] Right.

[00:28:41] Yeah.

[00:28:41] Too little, too late.

[00:28:43] Yeah.

[00:28:43] Not quite as important or relevant at that point.

[00:28:47] Interesting stuff.

[00:28:48] I'm sure you and Leo and crew are going to talk all about this tomorrow.

[00:28:53] That's Wednesday, August 7th on This Week in Google.

[00:28:58] But you should definitely look out for that because I imagine that'll take a chunk of the show.

[00:29:02] Being a former producer, I'm sure that's going to be right at the top.

[00:29:06] And then finally, before we take a quick break here, the Atlantic staffers, around 60 journalists in all, signed a letter pushing back on how the company heads into its AI licensing deal with OpenAI,

[00:29:27] and potentially others, urging it not to prioritize the bottom line by pushing the human efforts out.

[00:29:34] But really, in essence, this reminds me a lot of the Hollywood strike, writers' strike.

[00:29:40] And I'm sure The Atlantic is not the first news publisher to face this sort of reaction from their writers.

[00:29:51] But they're essentially saying, hey, don't write us out of this entirely.

[00:29:55] It's really a big pushback on behalf of the work that they're doing, saying that they don't believe AI should deemphasize the outlet's use of human writers.

[00:30:09] So I thought this announcement was going to be out until later.

[00:30:12] So this is a surprise to you, too.

[00:30:13] I'm minorly involved in an announcement about a new company called ProRata.ai, which is an effort to find fair compensation for content owners in the age of AI.

[00:30:25] I'm quoting them right now.

[00:30:26] And so the idea is, rather than doing the big bucket of money, as OpenAI has done with Rupert Murdoch and Axel Springer and Barry Diller and so on and so forth – what they say is, this is a company from Bill Gross, who at Idealab has started more than 150 companies.

[00:30:42] So Bill and company think that they can attribute the value that someone brought to a chat result.

[00:30:52] That if I asked a question and CNET answered it, that CNET could get a proportional payment for that.

[00:31:01] That's their model.
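[Editor's note: to make the pro rata idea concrete, here is a toy sketch of proportional attribution payouts. This is purely our illustration, not ProRata's actual method, which has not been published; the function names, attribution weights, and normalization scheme are all assumptions.]

```python
from collections import defaultdict


def pro_rata_payouts(answers: list[dict[str, float]], revenue_pool: float) -> dict[str, float]:
    """Split a revenue pool across content sources in proportion to how much
    each source contributed to the answers served.

    `answers` holds one dict per chat answer, mapping a source name to a
    hypothetical attribution weight (the share of that answer grounded in
    the source's content)."""
    totals: dict[str, float] = defaultdict(float)
    for weights in answers:
        norm = sum(weights.values())
        if norm == 0:
            continue
        # Normalize so each answer contributes exactly one unit of credit.
        for source, w in weights.items():
            totals[source] += w / norm
    credit = sum(totals.values())
    return {source: revenue_pool * t / credit for source, t in totals.items()}


# Example: CNET supplies 80% of one answer, The Atlantic the rest plus
# all of a second answer; a $100 pool then splits 40/60.
payouts = pro_rata_payouts(
    [{"CNET": 0.8, "Atlantic": 0.2}, {"Atlantic": 1.0}], revenue_pool=100.0
)
```

The key design question any such scheme faces is the attribution weights themselves: deciding how much of a generated answer "came from" a given source is the hard, unsolved part.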

[00:31:02] So they've gone to some media companies this way, and they also have asked authors to kind of sign on.

[00:31:08] Now, I don't know if I can sign on all my books because my publisher does, but I said, sure, use my name and that.

[00:31:12] So I'm in there, too, along with Tony Robbins, Scott Galloway, Seth Godin, and others.

[00:31:20] So it's just interesting to see where this goes.

[00:31:22] You have Tollbit, which is trying to find a way that can charge AI companies based on the use that is made.

[00:31:33] I think more on the acquisition part.

[00:31:36] ProRata is trying to do it more on the output side.

[00:31:39] There's another company called Human Nature or something like that that's trying to do this similarly.

[00:31:45] So a whole bunch of these companies are coming up.

[00:31:47] What happened at the Atlantic and Vox is the company got a bucket of money.

[00:31:53] And the authors in the company are saying, whoa, what about us?

[00:31:57] And so what's interesting to me is these other models might be able to say to places like WordPress or Medium or Atlantic or wherever, if your publication and if that author in your publication is used, we can give you attribution, credit, and payment.

[00:32:13] Now, that presumes there's revenue to be had.

[00:32:16] Who knows?

[00:32:17] There are a lot of questions here, many huge questions.

[00:32:19] But I think that this is a roiling topic where we don't really know what's going to happen with this whole idea of who deserves money in this world.

[00:32:28] You know I've argued that it shouldn't be on the training side.

[00:32:33] It should be on the output side.

[00:32:35] But we'll see.

[00:32:36] So sorry I popped that on you.

[00:32:38] The press release went up a minute ago.

[00:32:41] No, not at all.

[00:32:42] I'm happy you got that in there.

[00:32:43] But that's ProRata.ai.

[00:32:46] And I will include that in the show notes as well.

[00:32:49] Yeah, thank you for that.

[00:32:51] All right, we got more coming up.

[00:32:52] We just have to take a super quick break.

[00:32:54] And then, yeah, we'll talk about a tool that OpenAI has created that is incredibly accurate at detecting text generated by ChatGPT.

[00:33:05] But you can't have it.

[00:33:06] That's coming up.

[00:33:09] All right, so as I said, OpenAI has a tool that is apparently 99.9% accurate, according to them, at detecting text that has been generated by its own LLM.

[00:33:26] That would be ChatGPT.

[00:33:27] It's a watermarking technique.

[00:33:30] And it's been ready to use for a year, which sounds like a great thing, right?

[00:33:35] And actually, I should also mention this isn't the first time that OpenAI has had a similar tool.

[00:33:41] They had a detection tool, I think, released in January 2023.

[00:33:46] So last year.

[00:33:48] That was pulled after seven months because it wasn't performing as expected.

[00:33:53] Turns out, you know, watermarking and detection probably isn't as straightforward as one might believe.
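For a sense of how text watermarking can work at all, here is a minimal sketch of the statistical idea behind published "green list" schemes: the generator, keyed on the previous token, biases sampling toward a pseudorandom half of the vocabulary, and a detector simply measures how often that bias shows up. This illustrates the general technique only; it is not OpenAI's unreleased method, and the hashing scheme is invented for the example.

```python
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    # Pseudorandomly assign roughly half of all tokens to a "green list"
    # keyed on the preceding token (a stand-in for a secret key).
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    # Unwatermarked text lands near 0.5 by chance; text from a generator
    # that favored green tokens scores well above that.
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(p, t) for p, t in pairs) / len(pairs)
```

A real detector turns that fraction into a z-score over the token count, which is one reason very short texts, and text that has been paraphrased, are hard to call either way.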

[00:34:00] But in this case, 99.9% accurate, which seems amazing.

[00:34:05] They're holding it back because they're really concerned about the potential impact that such a tool might actually have.

[00:34:14] One of those being, you know, negative impacts on non-native English speakers, which is, you know, a really good point.

[00:34:23] Possibly turning off loyal ChatGPT users.

[00:34:27] Also deciding who should actually have access to this.

[00:34:32] Is it a tool that everyone has access to?

[00:34:34] Is it teachers?

[00:34:35] You know, I know teachers would love to have access to something like this.

[00:34:39] Because you come from that area.

[00:34:43] What are your thoughts here, Jeff, on a tool like this?

[00:34:46] To start off with, I'm quite dubious that it's 99.9% accurate.

[00:34:49] Yeah, that's BS.

[00:34:52] It's a high number.

[00:34:54] Yeah, it's the usual balloon pumping from OpenAI.

[00:34:57] The second I am very concerned, as you already mentioned, that this is going to affect people who have English as a second language.

[00:35:03] Or, you know, weren't trained in high English.

[00:35:09] Let's put it that way.

[00:35:11] And is that because, so I want to pull that apart a little bit.

[00:35:14] Is that because the output of ChatGPT, let's say, resembles language that someone who is less adept in the English language might produce?

[00:35:27] You know what I mean?

[00:35:28] Like, why is that?

[00:35:30] There's a paradoxical question there.

[00:35:32] You're right to raise that.

[00:35:33] Because on the one hand, if I sound like the standard, the cliched, then I'm most likely to sound like the output of ChatGPT.

[00:35:45] If I don't, because my English is not as standard, then you would think that it would say, okay, you're human.

[00:35:55] So in that sense, not being the same as the machine may be a problem.

[00:36:04] Now, the other argument I guess I can make there is that if I'm struggling with English, I'm going to go with safer word and grammatical choices, which is what the machine does too.

[00:36:18] And if I feel fully confident in English, I might do more things that are out of that box.

[00:36:28] Yes.

[00:36:29] Okay.

[00:36:29] That makes a lot of sense to me.

[00:36:30] Yes.

[00:36:31] That's where I'm done.

[00:36:31] So I'm working on a syllabus for a course for a university that I'm going to be helping, but I can't say who it is yet, on AI and creativity.

[00:36:40] And Lev Manovich, who we've talked about from CUNY, sent me a paper the other day that's very interesting about AI and creativity.

[00:36:46] And it's complicated, and I haven't fully dug into it yet.

[00:36:49] But the argument is that it has an impact on people's creativity in that similar kind of way.

[00:36:56] And I'll hold that until I go to the paper and go to that again.

[00:36:59] Now, the other thing about this is the cynical view, which you mentioned here, is that it's in OpenAI's interest for people to use its tool.

[00:37:11] If people are using it primarily to cheat, and if they find out they're going to get caught, and thus they're not going to use it anymore to cheat, then it's not in OpenAI's interest to expose the cheaters.

[00:37:26] So the cynical view is we don't want to dime anybody who uses this for bad purposes.

[00:37:34] We don't want to give people any reason to not use this, even the bad stuff.

[00:37:38] I think this is all about technosolutionism, that technology is the problem and technology is the solution, and it often isn't.

[00:37:46] But on the question of what to do here: I just listened to a podcast, from a fan of the kind of stuff we do, called Hotel Bar, which is three philosophers.

[00:37:59] Could you say that again?

[00:38:00] I think it kind of.

[00:38:01] Hotel what?

[00:38:02] Hotel Bar.

[00:38:03] Hotel Bar.

[00:38:04] The shtick is it's three philosophy professors who act like they're at the philosophy conference, and they're at the bar at the end of the night, and that's when the real philosophizing happens.

[00:38:13] Okay.

[00:38:13] But they're teachers, and so they were talking about just this on their AI episode, which I listened to from a few weeks ago.

[00:38:21] And the great thing about these professors, Jason, is that they're questioning the institution of the essay as an assignment.

[00:38:29] Why are we assigning this to students?

[00:38:30] What do we expect them to do?

[00:38:31] Are we trying to get them to just speak in standard English and do that?

[00:38:34] Are we trying to get them to reason?

[00:38:35] Are we trying to get them to be creative?

[00:38:37] Why are we doing this?

[00:38:38] And so that if the AI can replicate what we assign them to do, that should make us question the assignment.

[00:38:45] So that's a really interesting thing that's happening in, I think, the smarter ends of the academy.

[00:38:51] Interesting.

[00:38:53] Cool.

[00:38:53] On that podcast, which I'll have to check out, hotelbarpodcast.com for folks.

[00:38:58] Go and check that out.

[00:38:59] Cool.

[00:38:59] They got a nice plug on the show.

[00:39:01] Well, they asked me to come on, so that's why I looked them up.

[00:39:05] They're nice people, and so I'm happy to plug them, and I'll tell you when I'm on.

[00:39:09] All right.

[00:39:10] Please do.

[00:39:10] That's awesome.

[00:39:12] We were talking a little bit about Gary Marcus a little bit earlier.

[00:39:15] He wrote for The Guardian to warn about Sam Altman's rise as one of the most powerful people in AI.

[00:39:22] Basically, calling attention to what Gary says is Altman's public persona, this very humble, very altruistic innovator, and how that runs counter to the facts we've seen over time in how the business is run.

[00:39:44] You know, publicly supporting AI regulation while at the same time behind the scenes, OpenAI is lobbying to prevent regulations, to weaken those arguments.

[00:39:56] Many of the former OpenAI employees and board members accusing Altman of withholding information, misrepresenting facts.

[00:40:08] Of course, we had the big Sam Altman fallout that happened, I think it was last year, although it feels like an eternity ago.

[00:40:14] Yeah.

[00:40:14] That really hinges around that point specifically.

[00:40:19] Yeah, and just OpenAI's handling of intellectual property rights, AI safety concerns.

[00:40:24] This is all part of Gary Marcus's point, his kind of takedown of Altman.

[00:40:32] Gary's a fascinating character in this AI world because he's an AI expert.

[00:40:37] He does not say that AI doesn't exist or is itself evil or anything like that.

[00:40:43] Right, yeah.

[00:40:44] But he, as I said earlier, he takes pins to balloons in this overinflated world of the AI boys, which I'm really glad for.

[00:40:52] And I think sometimes Gary can go a little bit over the top.

[00:40:57] And he started a paid newsletter and he was all happy that people were coming on to his paid newsletter because he was saying these controversial things.

[00:41:04] So it'll be interesting to see how he modulates this.

[00:41:09] But I agree.

[00:41:10] I think that Sam Altman, who, by the way, is not an engineer himself, is not in that sense a technologist himself.

[00:41:16] He's an investor and a manager and a creative and interesting one.

[00:41:21] But I think Marcus is right that he gets quite full of himself.

[00:41:25] Last week on the show, we talked about Altman's op-ed in the Washington Post or the New York Times, I think it was, basically saying that America must win AI and, you know, I should be empowered to help do that.

[00:41:39] So I think we need leavening on this yeast in the form of Gary Marcus.

[00:41:45] And I'm glad he's there.

[00:41:46] But it's really turned into a personality thing, too, which I really, in this case, in some cases, that's fine.

[00:41:53] I used to work with People Magazine.

[00:41:54] Okay, I'm all right with that.

[00:41:56] But in this case, I'm less interested in the personality and more interested in the accomplishment, the promises, the BS that's involved in that.

[00:42:06] And so sticking closer to that.

[00:42:07] So it's worth following Gary and seeing where he goes.

[00:42:11] But he's trying to come up.

[00:42:12] He earlier tried to do bets and fights with Musk, and now he's going after Altman.

[00:42:19] And they both deserve it.

[00:42:21] They both.

[00:42:22] Yeah, they both probably deserve that in a way.

[00:42:24] Yeah.

[00:42:25] Absolutely agree.

[00:42:27] You had thrown in here a Washington Post letter from, let's see here, Future of Life Institute.

[00:42:34] Also, and you mentioned this a little bit earlier.

[00:42:37] What was the other company mentioned in here?

[00:42:40] Oh, man, what was it called?

[00:42:42] Tollbit.

[00:42:42] Tollbit also contributed to this.

[00:42:45] Yeah, tell me.

[00:42:46] These were answers to that Altman op-ed I just mentioned.

[00:42:49] And it's not a big deal.

[00:42:50] I was interested that the Post, was it the Post again?

[00:42:54] Yes.

[00:42:55] Ran Altman's op-ed.

[00:42:57] And then the Future of Life Institute is one of those doomer existential risk.

[00:43:03] Right.

[00:43:04] TESCREAL places run by Max Tegmark out of MIT.

[00:43:08] And so they quote him and put it up.

[00:43:11] Or that's not Tegmark, but it's Anthony Aguirre.

[00:43:13] Yeah, it's Anthony Aguirre.

[00:43:14] Who is the executive director.

[00:43:16] They put that up unquestioningly, which one does in letters to the editor.

[00:43:21] But it's just more examples of how this TESCREAL x-risk doomer narrative is getting placement in major media.

[00:43:34] And so it was a response to Altman saying, he's dangerous.

[00:43:39] Define danger.

[00:43:40] Define safety.

[00:43:41] It all gets back into that murky realm.

[00:43:44] So there was that letter.

[00:43:46] There was another one a little bit similar.

[00:43:47] Then the founders of Tollbit, which I mentioned earlier, wisely used this as a chance to promote their company.

[00:43:56] They're just saying, OpenAI is giving money to the wrong people.

[00:43:59] They should give that money to our clients.

[00:44:01] Which is, I think they're right.

[00:44:05] Like Bill Gross's ProRata, Tollbit is trying to get a broader sharing of value across media than just a few moguls.

[00:44:15] And I salute that.

[00:44:17] And Tollbit, by the way, is also where Campbell Brown, for those of you who might have remembered her from CNN back in the day,

[00:44:23] then she was the news person at Meta for quite a while.

[00:44:27] And now she's at Tollbit.

[00:44:30] Got it.

[00:44:31] Okay.

[00:44:32] All right.

[00:44:32] That's that story.

[00:44:33] Yeah.

[00:44:33] All right.

[00:44:34] I appreciate that.

[00:44:37] And then, and here's where things get, this story is definitely interesting to me.

[00:44:44] I think AI and its potential to really sharpen and transform the medical industry is something I'm constantly curious about when a story comes along

[00:44:59] that illustrates this.

[00:45:01] And there is a, sorry, there's a robotics company called Perceptive that has reportedly created a fully autonomous robot for performing dental procedures.

[00:45:17] Oh, God, this thing looks terrifying.

[00:45:20] On poor, frightened humans.

[00:45:24] Doesn't that look fun?

[00:45:25] The robot prepared a tooth for a dental crown in about 15 minutes.

[00:45:30] That's compared to a typical two hours across two visits for a human dentist.

[00:45:36] It's a handheld 3D scanner coupled with AI analysis of the 3D data to plan the procedure.

[00:45:43] Watching this makes my teeth hurt, by the way.

[00:45:46] So what we're watching on the video is the drill from the AI machine going over what I think is the implant, the crown, to shape it.

[00:46:00] And it comes out looking rather like 3D printing, with bridges on it.

[00:46:04] Yes, it really does.

[00:46:05] But the point is you've got a high speed drill operating right there next to your pink gums and it's run by a machine and not a dentist.

[00:46:15] Ah, with the, yes.

[00:46:16] And the, the photo that they have, and I'm, and I'm not even outwardly saying like, this is a bad idea.

[00:46:21] I think that we're going to see more of this stuff, but my, my immediate human reaction is, holy cow, keep that robot out of my mouth.

[00:46:30] I already hate, like, there aren't a whole lot of things that I hate in this world, but I hate going to the dentist.

[00:46:36] I do it and I do it regularly because I know I need to.

[00:46:39] And, you know, dental health is important and the people that are there are amazing and everything.

[00:46:44] But I hate having my teeth played with.

[00:46:48] And, uh, just, just thinking about like laying there and sitting still while a robot goes to town on my teeth.

[00:46:54] It might even be safer than humans.

[00:46:56] I don't know, but there's something about it that just irks me, and the photo that they have there, coupled with the headline next to it, says at the very top: safety is our absolute first priority.

[00:47:07] It better damn well be.

[00:47:10] So, I had my prostate out some years ago, and it was minimally invasive robotic surgery with the da Vinci robot.

[00:47:22] And you walk into the operating room and you, I'm six foot four and I look up at the robot.

[00:47:29] It's huge.

[00:47:30] It's absolutely huge.

[00:47:31] You kind of just want to salute it and say, be nice.

[00:47:34] Yeah.

[00:47:34] But that's controlled by a doctor, right?

[00:47:38] Who's got joysticks.

[00:47:39] Right.

[00:47:40] So it's not, so it's not autonomous, not autonomous.

[00:47:43] So that's the big deal here, although it's still damn threatening looking.

[00:47:49] Is that, is this what you're talking about?

[00:47:51] That's one of them.

[00:47:52] Uh, holy moly.

[00:47:53] That looks like a nightmare.

[00:47:55] Yeah, it is.

[00:47:57] Yeah.

[00:48:00] I don't know what I expect a surgical robotics tool to look like, but that looks like Total Recall.

[00:48:06] Yeah, that does.

[00:48:10] So what was the Dustin Hoffman movie with the Nazis and the drill?

[00:48:16] Yeah.

[00:48:16] You can imagine that in their hands.

[00:48:19] So what's different about the dental robot is autonomy.

[00:48:24] And we've talked about this with AI all the time when it comes to agents, autonomy requires

[00:48:29] trust.

[00:48:30] Yes.

[00:48:31] Do you trust this machine to do what you tell it to do on its own?

[00:48:37] And I can see a doctor saying, Oh yeah, I know what it's doing, but I'm also looking

[00:48:41] at that patient.

[00:48:42] It's a real live patient

[00:48:43] that we're watching have this procedure done.

[00:48:45] Is there an emergency button?

[00:48:49] Right.

[00:48:49] If something gets hit in my mouth with the dentist, I say, ah, and they'll stop.

[00:48:54] How do you get to the robot?

[00:48:56] What if there's no communication with the robot?

[00:48:58] How do, what is it?

[00:48:58] That's a good question.

[00:49:00] Like, no, shut up human.

[00:49:03] Gotcha.

[00:49:04] Got you where I want you.

[00:49:05] AGI has arrived.

[00:49:07] Oh my goodness.

[00:49:09] Yeah.

[00:49:09] Yeah.

[00:49:10] You know, again, it could be the safest thing in the world.

[00:49:13] It could be safer than humans, whatever.

[00:49:15] It's just, you know, anything like this is, I'm imagining, going to be met with the

[00:49:20] immediate human reaction of, oh, I don't know, because it's new.

[00:49:24] And we've seen far too many scary movies that go down this road and don't have happy endings.

[00:49:31] It's hard to undo that, man.

[00:49:33] So the company has raised $30 million.

[00:49:35] But I'm not surprised.

[00:49:36] Not either.

[00:49:36] The company raised $30 million.

[00:49:38] Perceptive.

[00:49:39] Right.

[00:49:39] The hilarious thing to me is that Mark Zuckerberg's dad is a dentist, Ed Zuckerberg, and he's invested

[00:49:46] in it because he's got some Facebook stock through the years and has some spare money.

[00:49:51] So STAT reports on that.

[00:49:53] So it's kind of a robot dentist convention here.

[00:49:59] That's so interesting.

[00:50:00] Like I knew that his dad had invested, but I wasn't immediately aware that he was a dentist.

[00:50:05] So that actually makes a lot of sense.

[00:50:07] He's got the money to burn.

[00:50:09] Right.

[00:50:09] And technology and dentistry and AI.

[00:50:11] Yeah.

[00:50:12] It all comes together.

[00:50:13] Yeah.

[00:50:13] Yeah.

[00:50:14] Fascinating.

[00:50:14] A number of years ago, I met and became friends with, I think, the primary inventor of laser

[00:50:24] dental technology.

[00:50:26] He's a fascinating man.

[00:50:28] Which when I think about this, it's interesting because now I'm thinking about that.

[00:50:32] I'm like, oh, I'm okay with that.

[00:50:35] But that's still the human in control.

[00:50:36] It's just a different type of tool that the human is wielding versus this robot that you

[00:50:41] just sit in front of and go, okay.

[00:50:43] So my father-in-law was a dentist.

[00:50:46] And now it's these high-speed drills that are whining at you.

[00:50:50] Oh, I hate it.

[00:51:00] But before, I don't know, Jason, if you're too young for this, it used to just be a belt.

[00:51:00] And so it was much slower.

[00:51:01] Like a sander?

[00:51:02] Like a sander sort of thing?

[00:51:04] No, it would be like a pulley.

[00:51:06] And so there was kind of a wire coming down off a series of pulleys in the room off the

[00:51:12] machine.

[00:51:13] Okay.

[00:51:13] And so it moved only at the speed of the motor running that pulley, running the wire into

[00:51:20] the drill.

[00:51:21] Right?

[00:51:21] It was a very physical process.

[00:51:24] I got you.

[00:51:24] It wasn't as fast.

[00:51:26] So drilling a tooth took that much longer.

[00:51:31] For those of you on audio, Jason just closed his eyes in pain.

[00:51:35] It just, it has a visceral thing.

[00:51:38] I have a visceral reaction to those kinds of things.

[00:51:42] So my father-in-law, God rest his soul, was a dentist, a wonderful, wonderful guy.

[00:51:48] And I started dating his daughter and I was due for a dental checkup, but I wasn't so good

[00:51:54] with my dental checkups.

[00:51:55] So he's like, no, no, come on in.

[00:51:56] I'll just do it.

[00:51:58] So I went to another dentist to get my teeth cleaned before I went to him.

[00:52:03] It was like cleaning your room before the date, right?

[00:52:06] Yes, exactly.

[00:52:06] Before the house cleaners come, we got to clean up before they clean up.

[00:52:11] And then one day he was going to help me move some stuff.

[00:52:14] And I said, I don't know what it is,

[00:52:15] but this tooth bothered me.

[00:52:16] He looked at it and he said, oh, it's a simple extraction.

[00:52:20] No big deal.

[00:52:21] Yeah, no.

[00:52:22] So, okay.

[00:52:23] You're getting my daughter, are you?

[00:52:25] No, no.

[00:52:27] Got you right where I want you, Jarvis.

[00:52:29] Just like the robot, yeah.

[00:52:33] So before we move on from this, though, it occurs to me like as a dentist, people who

[00:52:41] are in that position, I hope, I'm assuming, have gone through years upon years of education

[00:52:48] to get to that point.

[00:52:49] They've established their business and all that experience and everything.

[00:52:55] I guess where I'm coming from is, we're used to hearing the

[00:53:00] AI-taking-our-jobs argument for jobs that are not quite as upper-tier, I don't know what

[00:53:08] you want to call a dentist versus, you know, a white-collar job or whatever.

[00:53:12] There's a certain level of jobs that we've been referring to as susceptible to AI taking

[00:53:17] over that job.

[00:53:18] And yet here, this is, you know, taking away potentially the job of a dentist, right?

[00:53:24] Like, I wonder.

[00:53:25] Or makes them far more efficient.

[00:53:27] I mean, I have...

[00:53:28] That's exactly right.

[00:53:29] I have some caps because I'm of that age and I used to drink too much Coca-Cola.

[00:53:36] Got that one right there.

[00:53:37] Oh no.

[00:53:38] Oh no.

[00:53:38] Yeah.

[00:53:38] I've had a few of them.

[00:53:39] And so it's all process, right?

[00:53:41] Drill it all down and then make the temporary and then mold the thing and then send it out

[00:53:49] to a lab and then get it back.

[00:53:51] There's no reason all of that couldn't be done with 3D cameras.

[00:53:55] So that's the point of what this thing looks like: you're putting in a blank

[00:53:59] and then it's getting it down to the right size.

[00:54:03] Yeah.

[00:54:04] And if it saves you a few visits, maybe it's better for everybody.

[00:54:09] And the dentist can go see more patients in less time.

[00:54:14] The patient doesn't have to go for as many visits.

[00:54:18] Maybe it's good for the dentist.

[00:54:20] It's hard as hell to get an appointment at my dentist.

[00:54:22] I'll tell you that.

[00:54:23] Right.

[00:54:23] Well, thank you for saying that because that is usually my reaction to when people say,

[00:54:29] you know, AI is going to replace, you know, musicians, which is the last story.

[00:54:34] We'll get there in a second.

[00:54:35] But, you know, creativity or whatever.

[00:54:37] And I think where I've settled on that is like, no, it doesn't have to.

[00:54:41] Like, so often the immediate reaction is, oh, this thing does it.

[00:54:46] Therefore, humans will never do it again.

[00:54:48] And it's like, no, maybe this just makes the human more efficient or better, you know,

[00:54:54] frees them up to do other more important things while this does those things.

[00:54:59] And, you know, maybe their time doesn't have to be spent doing that.

[00:55:03] It can be spent doing other really important things.

[00:55:05] Right.

[00:55:06] So, yeah, I'm happy you point that out.

[00:55:07] Yeah.

[00:55:08] Because the value of the dentist is first and foremost diagnosis.

[00:55:13] Mm-hmm.

[00:55:14] And that's still judgment.

[00:55:15] And I think that they're going to be involved no matter what there.

[00:55:18] You know, I always watch my father-in-law with his hands.

[00:55:21] He was always kind of doing this with his hands because that was his living, right?

[00:55:24] It was this incredibly fine work that you had to do for those kinds of things.

[00:55:30] So I don't know what he would think of it.

[00:55:33] Yeah.

[00:55:33] Interesting.

[00:55:34] I'd be curious to know.

[00:55:36] And then finally, Suno and Udio.

[00:55:40] These are two of the music generation services.

[00:55:43] I've certainly been following them and doing, you know, some work on my TechSploder YouTube channel

[00:55:48] to kind of explore how they work and get a better understanding of what they output as a musician.

[00:55:53] I'm very interested in this stuff.

[00:55:55] We know that the major music labels here in the U.S. have brought, you know, copyright suits

[00:56:00] against both Suno and Udio in separate lawsuits.

[00:56:06] And right now, both of those companies are starting to give some reactions as a result.

[00:56:15] They both admit that the recordings they trained on, I think this was Suno,

[00:56:21] presumably included recordings whose rights are owned by the major labels.

[00:56:25] So not outwardly saying, no, we don't train on copyrighted material.

[00:56:32] They hadn't admitted that up to this point, anyways,

[00:56:39] but now they're saying it was no secret.

[00:56:42] So that's interesting.

[00:56:43] So they aren't denying it.

[00:56:45] They're basically leaning into the fair use defense, which, you know, is essentially,

[00:56:51] can they prove that it's a transformative work?

[00:56:54] I believe that they can.

[00:56:57] But just kind of interesting to see this going back and forth.

[00:57:01] And my goodness, is this a hot button in the music industry?

[00:57:05] This whole thing is a real hot button issue.

[00:57:07] I wonder how much of a bellwether it'll be for other things, depending on which way the rulings go in these cases.

[00:57:14] It's so interesting, Jason, because it hits on lots of things in the past.

[00:57:20] The earliest days of the Internet and music, music faced the threat of digital before any other sector of culture.

[00:57:32] Then all the constant fights over the years about plagiarism in song, you know, did Elton John get that riff from that other song, right?

[00:57:40] Those suits that we watch all the time.

[00:57:43] And so those get to me intertwined in this story.

[00:57:47] To me, it's still like with news.

[00:57:49] It's first a question of acquisition.

[00:57:54] Did they legitimately listen to these songs?

[00:57:57] I mean, all you need is one Spotify account and a robot, and I acquired the songs.

[00:58:03] I listened to them once.

[00:58:04] That's all I had to do, right?

[00:58:08] But on the other hand, you have an industry that has learned a lot about digital.

[00:58:12] You also have an industry that has the infrastructure of ASCAP and BMI and proportional rights licensing.

[00:58:22] So I wonder whether there's some really creative things that could be done here, like ProRata wants to do with news, like Tollbit wants to do with news.

[00:58:29] Oh, yeah.

[00:58:29] And others, whether there could be, as happened with Spotify, after all the fights and everything else, Spotify is now a necessity to the music industry.

[00:58:41] Yep.

[00:58:41] It's created, yeah.

[00:58:43] Very essential business.

[00:58:45] I think what we're seeing here is the beginning of negotiation.

[00:58:48] Yeah, I agree.

[00:58:49] And this could, once again, lead the way for other sectors if they can come up with a fair mechanism of acquisition and attribution.

[00:58:58] And let us say credit in all senses of the word.

[00:59:01] Well, that could be interesting for the industry.

[00:59:03] But right now, it dukes up.

[00:59:05] Yep.

[00:59:06] Yep.

[00:59:07] Yeah.

[00:59:08] I think at the end of the day, this isn't about, we don't want this technology to exist in the way that you're creating it, Suno and Udio.

[00:59:16] This is, we want a piece of it.

[00:59:20] We want to have a product that does this.

[00:59:23] And we want to be able to monetize that.

[00:59:25] Right.

[00:59:25] And maybe kick a little bit of that money down to our artists.

[00:59:28] But really, we want a business.

[00:59:30] Right.

[00:59:30] Which is the issue with the Atlantic story you mentioned earlier, right?

[00:59:32] Does the Atlantic get that money?

[00:59:34] Does the writer get that money?

[00:59:37] Yeah.

[00:59:37] Does the music company get that money?

[00:59:39] Does the performer get the money?

[00:59:41] Does the writer get the money?

[00:59:45] Yeah.

[00:59:46] And who gets what?

[00:59:47] That's negotiation within that side of the market.

[00:59:51] Yeah.

[00:59:52] Indeed.

[00:59:54] Well, I know I'm going to be following these few particular cases closely just because of the music tendrils that it has with it.

[01:00:03] And so, you know, I'll bring those on as that happens.

[01:00:07] Real quick, before we end things out, I just want to go ahead and throw a big thank you to the Ozone Nightmare who threw us a $10 super chat during the live stream here.

[01:00:18] And I believe this was about 20 minutes ago.

[01:00:21] So I think we were talking about the open source ruling essentially at this point and says, this is exactly what's needed.

[01:00:28] Banning AI is nonsense, but ignoring the absolute crucial contributions of human creators is equally unworkable.

[01:00:34] Creators need more recognition and support.

[01:00:39] And Ozone has done this more than once as the show has gone on.

[01:00:42] Oh, yeah.

[01:00:42] So we're always very, very grateful for that.

[01:00:44] Thank you.

[01:00:45] Yeah.

[01:00:46] Mark of respect.

[01:00:47] Thank you.

[01:00:48] Yeah.

[01:00:49] Yeah.

[01:00:49] Feel appreciated.

[01:00:50] We do appreciate you.

[01:00:52] And I appreciate you, Mr. Jeff Jarvis.

[01:00:55] And I appreciate you, Mr. Jason Howell.

[01:00:57] Hanging out with me week after week, and all of us.

[01:01:02] Gutenbergparenthesis.com for folks who want to check in on Jeff's amazing writings, at least the books that he's writing.

[01:01:10] And then, of course, BuzzMachine.com, right?

[01:01:13] Yep.

[01:01:13] Or JeffJarvis.Medium.com.

[01:01:15] Looks pretty right there.

[01:01:17] Got it.

[01:01:18] Okay.

[01:01:19] Excellent.

[01:01:20] Well, thank you, Jeff.

[01:01:21] Thank you.

[01:01:21] This is always a lot of fun.

[01:01:22] And, you know, we could do it one day earlier because-

[01:01:26] My fault.

[01:01:26] It's our show.

[01:01:27] My fault.

[01:01:27] Yeah.

[01:01:28] It's our show.

[01:01:29] I'm going to a conference tomorrow and I'm able to do Twig later in the day from the hotel, torturing them with bad Wi-Fi, no doubt.

[01:01:38] But the timing of this one, right in the middle of the day.

[01:01:40] So, we were planning this week to switch to 10 a.m.

[01:01:45] 10 a.m.

[01:01:46] Pacific.

[01:01:47] 1 p.m.

[01:01:47] On Wednesdays.

[01:01:48] Yes.

[01:01:49] And so, I messed it up the first week out, but that's the plan going forward.

[01:01:54] All right.

[01:01:54] That's all right.

[01:01:55] Yes, indeed.

[01:01:56] Thank you.

[01:01:56] Yeah.

[01:01:57] Yeah.

[01:01:57] And, you know, we're having fun with live, but the majority of people, you do get it in podcast or you get it after the fact.

[01:02:05] If you go to youtube.com slash at TechSploder, you can find all of our episodes in video form.

[01:02:12] If you search your podcatcher of choice, be it Apple Podcasts, Pocket Casts, wherever you get your podcasts, you'll find it in audio form.

[01:02:21] And I would say the audio version is a little bit more polished on the sound production.

[01:02:26] The video version, of course, has the video, but because it's done live, the sound doesn't have some of that post-processing.

[01:02:32] So, that's kind of the tradeoff.

[01:02:34] Pick your poison.

[01:02:35] Doesn't really matter to me where you get it as long as you're able to find it when you want to, and that's how you do that.

[01:02:42] But those of you who support us on Patreon, thank you again for your support.

[01:02:46] Patreon.com slash AI Inside Show.

[01:02:49] You can head there and throw in your support, and we appreciate it if you do.

[01:02:53] We do offer ad-free shows, early access to videos that have something to do with AI on the TechSploder YouTube channel, Discord community.

[01:03:03] We've got some regular hangouts, which I actually need to schedule for this month.

[01:03:06] And then we also enable you, if you want to have your name thrown out at the end of the show, to become an executive producer of this show.

[01:03:16] So, Dr. Do, Jeffrey Maricini, WPVM 103.7 in Asheville, North Carolina, and Paul Lang are ongoing executive producers.

[01:03:25] Thank you.

[01:03:26] You are all wonderful, and I hope that we can add even more amazing folks to the executive producer roll call at the end of each episode.

[01:03:36] Everything you need is at AIinside.show.

[01:03:39] So go there.

[01:03:40] You'll find it all.

[01:03:41] And thank you so much for watching and listening to this episode of AI Inside.

[01:03:45] See you next time.

[01:03:46] Bye, everybody.

[01:03:47] Bye, everybody.