This week, Jason Howell and Jeff Jarvis welcome Evan Brown to round up AI regulation efforts worldwide, including Utah's disclosure law and the EU's comprehensive AI Act. Plus, big news for Microsoft AI, Nvidia's behemoth AI chips, and more!
INTERVIEW WITH EVAN BROWN
Utah has a brand new law that regulates generative AI
EU votes to ban riskiest forms of AI and impose restrictions on others
YouTube Introduces Mandatory Disclosure For AI-Generated Content
NEWS
Big changes at Inflection AI & Microsoft
WSJ report on Microsoft/Inflection
Reid Hoffman's tweets about it
Apple Is in Talks to Let Google Gemini Power iPhone AI Features
Nvidia reveals Blackwell B200 GPU, the ‘world’s most powerful chip’ for AI
Jeff & Mikah react to the NVIDIA keynote
A good video explaining generative AI
Hosted on Acast. See acast.com/privacy for more information.
This is AI Inside Episode 9, recorded Wednesday, March 20, 2024, Microsoft's inflection point. This episode of AI Inside is made possible by our wonderful patrons at patreon.com. If you like what you hear, hop on over and support us directly. Thank you for making independent podcasting possible.
We're going to start things off talking a little bit about AI regulation. There were a number of pretty important things that happened in the world of AI, and we are going to get to some of those stories later. I came across an article by a friend who I've podcasted with and produced behind the scenes at TWiT over the years. Evan Brown wrote an article about some regulation happening in the state of Utah, and I read the article and I was like, dang, I should reach out to Evan and see if he wants to come on the show. Evan Brown is a partner at Neal & McDevitt in Chicago. Evan, it's great to see you.
It's great to see you, Jason. I'm really happy you did reach out. It's nice just to catch up and even better to talk about AI regulation. Jeff, great to see you as well.
Good to see you again. Always a pleasure. Absolutely. Like I said, there were a few different directions we could have taken at the top of this show, and we're going to get to some of those stories later. But I read your article, and then I came across another article in Ars Technica about the EU passing the AI Act — we'll talk about that in a little bit. It just kind of seemed like, okay, there are a number of different efforts happening right now, and we've certainly alluded to these in previous episodes.
Regulation around AI. Now we're starting to see the rubber hit the road. We're starting to see what this actually looks like, and it could go in a million different directions. It could be done poorly. It could be done effectively without stifling innovation. It could be a million different things. So you, Evan, wrote about how here in the US, the governor of Utah signed an AI regulation bill into law, and it has multiple prongs to it. Why don't we just start with what the law entails as it was written in the state of Utah?
Yeah. I mean, I'd say it's about time, right? You know, the law always seems to lag behind technological innovation, and we've been talking about AI in the mainstream for however long now. I always think, you know, with ChatGPT being made public, that's when people finally started paying attention to things.
So, yeah, it's finally time for the legislators and everybody to start interjecting themselves into this. I guess when we look at Utah, one thing we can say right off the top is that Utah has not been nearly as granular as the EU. I mean, we've got the EU establishing this entire framework, with grand visions of how artificial intelligence will impact society in both good and bad ways.
The Utah approach is noteworthy, first of all, because it's really the first step into this that the states are making. There's no federal law that regulates artificial intelligence, just like there's no federal law that regulates data privacy. You've got different states doing their own things: California with the CCPA, and you've got Colorado, Virginia. So with artificial intelligence, it looks like in the near term it's going to be the various states passing laws to address what they see as the primary concerns.
So what Utah has done, I think, you could put into a couple of different categories. One is that it amends the consumer protection statutes to essentially make it very important to disclose to people when they are encountering the product of generative AI in the wild. So if you're buying a car or signing up for a credit card or going to your dentist — any of these areas that are regulated by the Department of Commerce in Utah, or whatever their state entity is for that — you've got to let them know. The other thing it does is set up a program for innovation, where if a company wants to develop artificial intelligence in Utah, it can get certain "regulatory mitigation," as they call it, making it more beneficial to come and do their development there — maybe avoiding liability a little bit. But I think the most intriguing thing about this is that it's the first foray into recognizing that the public could be confused, the public could be deceived. This statute acknowledges that and sets up a framework so that people are notified when AI is being used.
When I saw the headline that Jason put in the rundown, my first thought was, oh, hey, good, Evan's here. And my second thought was, oh, no, Utah passed a law about AI. Do I dare look? And it's actually okay. It's sensible. It's about disclosure as much as anything else.
And I think that's fine. We'll talk about the law in a minute, but I wanted to just set a context. One thing I talk about on the show a lot is wondering where the responsibility for AI will end up lying. There's the model level, where people try to say that you have to build guardrails and stop all that. And then there's the Air Canada ruling, and the poor schmuck lawyer who used it to get some citations and didn't do his human job. So there's that question of where the law is going to head here. I'm just eager to hear your perspective on where liability and responsibility will most likely lie legislatively.
I don't want to jump too quickly into the philosophical and metaphysical on this, but until we get to the point — if we ever get to the point; this is, you know, a huge debate, the hard problem of consciousness, right? — where at a normative level we think that these AI entities are agents that have their own consciousness, their own sentience, their own culpability, and then, on the opposite side of that, certain rights, there's no other reasonable way of approaching this than to say that the entity — the business, the flesh-and-blood-constituted entity, the corporation or whatever — who is using these things, utilizing these tools, making them available out there, is the one who's responsible. Just in the same way that if you're FedEx and, you know, the truck breaks down and causes an accident, that's FedEx's responsibility, not the responsibility of the autonomous system of the truck that might have broken down.
So, Jeff, I think it's pretty simple at this point: it's the entity. I'm so glad you mentioned the Air Canada thing. I don't know if you've talked about it on the show here before, but we probably give Air Canada a little bit of a harder time than it deserves.
You know, in that situation it tried to say that the chatbot was responsible. But from my perspective, there's certainly nothing new about the use of artificial intelligence technologies that would make it so that they have their own responsibility here.
Yeah. So how does that compare with the EU law that you've written about?
Yeah, well, I mean, the same thing. Of course, the thing about the EU law is that it's sweeping — there are a lot of things to it; it's this huge statute. I think the biggest thing there is the clear demarcation of risk categories — what do they call it — from the unacceptable, the AI that's just really bad and should never be put to any use, like autonomous weapons systems and deepfakes used for fraud, all the way down to the very unharmful things, like using it to create a picture or a video of a fuzzy animal. So I don't know that there's necessarily anything in the EU law that would change this model, this framework we're accustomed to in any company's use of high technology in the marketplace, to make us think there ought to be some separate liability for an artificially intelligent entity apart from the company that's putting it out there.
Yeah, that's one of the things that occurred to me while I was reading up on both the Utah law and the EU's — and we've talked about this on the show before, Jeff — just the fact that these laws seem to be targeting, in some cases, obviously very bad things. In the case of, like, misinformation, deepfake porn — and you're talking about weapons systems — there are certain things that could get really dangerous and very harmful.
But AI isn't the only way that things like this can be done. So I'm curious to know your perspective on this. These laws are really focused on this new technology — "new" in air quotes, because some people would push back on the idea of this being new technology — but everybody is suddenly very aware of this new technology that has the potential to do all these bad things. And so these laws are being written to address that at the technology level, at that specific level, versus the broader perspective, which is, you know, I could use Photoshop to do some of the things that AI is doing. And yet these laws specifically call out AI. Why the difference, when the net effect of these bad things is the same? I guess that's the question.
I think it's kind of funny, or maybe quaint or something, to treat AI as if it's some separate category. I mean, I'm not a technologist, I'm a lawyer, but it seems to me that there's just a continuation of a spectrum in the complexity of development here, starting —
I mean, well before 1956. But you're talking about AI as a new technology — wasn't the term coined in 1956? Right, exactly. And so, candidly, I think the Utah law does a pretty good job of not focusing so much on the technology. Yes, it speaks specifically about the use of generative AI, but really what is at the heart of concern for any consumer protection regime is the safety, if you will, the dignity of the individual who's involved here. What we don't want is for people to be deceived, for any number of reasons. One is that we don't want subversive thoughts implanted into the minds of people that are anathema to our civilized way of life. More mundane, even though it's quite significant, is that we don't want people to be deceived in commercial transactions. So I think it all depends on your perspective, Jason. I don't really see anything in the Utah statute that governs how the technology is actually implemented. What it does is deal exclusively with how we prevent negative effects when somebody encounters information or content — is given some idea about what they perceive — without knowing that it was not generated by a human and is thereby not reflective of some kind of natural human sentiment.
Are there, at some level, First Amendment issues? Let me explain that odd question. In a sense, a biography is a deepfake. The movie The Social Network was a deepfake of Mark Zuckerberg. It was trying to convince people that's how the real story went, and a lot of the facts about it were not real.
But that's okay, because that's speech and comment, and one has a constitutional right to lie in this country — as the former president is proving, and as the indictment of him said he had the right to do. So it seems that we're heading down a path — we're not that far down it yet — where if you use this technology, your rights might be somewhat different than if you do it, as Jason was saying, in Photoshop or with your pen.
Don't you think it'll take some time to sort that out? I mean, the way it would work, I think — Jeff, you spend an awful lot of time thinking about the First Amendment, I know, so I'm eager to hear your response to your own question on this — but the way it seems like it would work out is that there's some kind of regulatory system, either in some state or a federal statute, that requires certain disclosures to be made in connection with the publication of AI-generated content. Putting a label on your YouTube video in YouTube Studio wouldn't apply, because that's not the government — so I don't want to lead us astray on that. But say there's some regulation that requires this disclosure, like in Utah here. The aggrieved party, the plaintiff who feels like their speech is being restricted, files suit — let's say it's a declaratory judgment action saying, I'm not in violation of this. The court then would look at this through a First Amendment lens and evaluate whether this restriction the government is imposing is narrowly tailored to meet a compelling government interest. Then there are a couple of very sophisticated, further nuanced questions within that. You've got to look at this and say: is this restriction all that needs to be done to solve the problem — is it narrowly tailored, is it no more restrictive of speech than it needs to be? And the other part: is it a compelling government interest? Is this really so important that we'll go ahead and restrict speech? That's why certain speech — defamation — is not protected by the First Amendment.
It's not a First Amendment violation for the law to hold someone liable for defaming another, because there's this compelling interest in it not being something we do customarily. So that's how it would play out. Jeff, if you don't mind, I'd love to turn it around and ask you how you think it'll play out.
Well, I'm a New Yorker and a professor, so I answer questions with questions — it also buys me time on the answers. Well, yeah, wouldn't it be interesting — this is one of the things we do on the show: we end up talking about things up in the clouds because it's fascinating.
Yeah, exactly — to go back to Jason's point earlier about Photoshop. A specific tool is called out here: the use of artificial intelligence is what triggers the need to disclose, and I'm not against that. But there is no such regulation saying that you have to do that with Photoshop, or with old styles of dodging in a darkroom, or with a voice actor. So, I guess what we're trying to ask is: does it become discriminatory against the technology, against the tool?
You know, you give me an idea there. Maybe this is an Arthur C. Clarke situation, where the technology has become so advanced that it's indistinguishable from magic. You know, Sora will blow anyone's mind. Oh yeah, yeah. Unlike the mind-melt you may have experienced when you first used AIM, AOL Instant Messenger — I'm trying to make some drastic comparison there. So could it be, Jeff, that at least part of the perception of it now is because it's so new? Our minds are unaccustomed to thinking in exponential growth; we're fine with linear stuff, but man, this has been so explosive. Is this just so wonderfully new, providing us with so many wonderfully significant opportunities, that it has two sides? You can make these wonderful worlds come alive through Sora, but you can also make these hellscapes come alive through Sora, with a corresponding negative impact on somebody else's life — being deceived. Is it so different from what we've ever experienced before that it's different in kind, not just in magnitude? Maybe there's some of that, and it's going to just take us a little while to normalize, to acclimate to that — and by that I mean both our normative approach and the laws governing it — to sort of come back into equilibrium. Maybe there's that, I don't know.
Yeah, it's interesting. One thing I often call upon is that our communications are protected in first-class mail — specifically protected in first-class mail — and that it's a mistake, in my view, to call out the technology, because they're not protected similarly in our DMs and our emails and so on. When a technology is called out, either for protection or for further responsibility, it moves below the level of principle. In a few minutes I hope we'll also get to what's going on around copyright; I'm fascinated there too about how —
You know, a First Amendment exception was carved out for broadcast, and there are reasons for that we all know. But the technology became a player, and in the case of broadcast it was newspapers who lobbied for radio to be regulated — because it was a competitor, and they were trying everything they could to disadvantage that competitor, and they brought government to bear to do it. And so, you know, I'm not trying to defend AI or the internet per se; I'm trying to defend the freedoms they enable. But when technology gets called out specifically, on either side, it just seems so limiting to the law, whether it's good or bad law.
Yeah. I mean, you do a wonderful job of bringing that historical perspective to it, including in your recent testimony before the Senate Judiciary Committee, right? There's FUD, isn't there, with any new development. One I like thinking about — and correct me if I'm getting the narrative wrong; I haven't looked at this in a while — didn't John Philip Sousa have big objections to pre-recorded music because he thought nobody would then play live music, that it would kill live performances? That's right. But that seems so antiquated and anachronistic an approach now when it comes to music. Is that a different kind of approach than what we're having toward AI?
In the AI context, I think we've got to avoid this thing we've been batting around for years, and that's this temptation or tendency toward internet exceptionalism — this idea that you've got to pass a bunch of new laws to address the development and innovation and rollout of things. My view is that it's best, as much as we can — and there's the $64,000 question of where the contours of that are — to rely on the common law principles that have developed over hundreds of years, millennia, and on the case law applying analogous situations from the analog world to the digital context. It's only when we recognize that there are particular situations where, for policy reasons, legislators ought to step in and do something different that we do that — and we do it carefully and lightly, recognizing it won't be without controversy. The best example, I think, is Section 230. In the mid-nineties, it was recognized as a policy matter that we can't treat intermediaries on the internet the same way we treat libraries and bookstores for third-party content, or else there would be no investment, no growth for the internet. When it comes to AI, one area where there's some talk about legislating something different — but where we ought to be measured — is how we deal with things like voice clones. Which is, you know, like a deepfake. My favorite one is Johnny Cash singing "Barbie Girl."
Yeah, solid, yeah. And that's the right of publicity, which has been around for years. It's a creature of the various states' laws, but most of the states say that an individual has the exclusive right to determine how their name, image, and likeness is used for commercial purposes. So right of publicity law goes a long way in regulating how we ought to treat voice cloning, or even audiovisual deepfakes for that matter. But there comes a point where it's a little bit different, because a deepfake may be purely exploitative of the person being imitated rather than a commercial use.
Maybe Johnny Cash — his estate, that is — has a better claim for right of publicity misappropriation, because he's a commercial artist. But if it's just a deepfake of somebody that you're making for nefarious purposes, sort of in the nature of revenge porn or any of the other ways one can try to be really mean to another person, there's not that commercial aspect. And so our task then becomes: do the traditional laws that deal with, you know, harassment or infliction of emotional distress or invasion of privacy go far enough? Maybe not, but they probably go farther than what we would think at first blush, without needing to make new regulations.
So with the Utah law — and I know we're reaching the end of our time with you — and this EU law we're talking about, we have kind of different approaches, right? If you took the EU law and put it into a US perspective, it would be more or less analogous to a wide-reaching federal law meant to regulate AI, whereas what we're seeing here in the US, as you said earlier, is more on the state level. The law works in mysterious ways; you never know where it's going to go. But what do you think this Utah law says about other efforts happening around the country? Are we going to see more like it? Does it set any sort of precedent for how these laws might be shaped in other states, or could we see laws that are more restrictive, more specific to AI itself rather than its impacts? What are your thoughts there?
I think it's a good example of a measured approach, and it also does a good job of attacking — or maybe that's too strong a word — of addressing the key issue, this consumer protection notion: we don't want people to be deceived out there. So in that sense it serves as a pretty good model. Now, I think you could easily foresee other states taking a much more aggressive approach toward the technology, in a couple of different ways. One is perhaps being more express about what types of things are restricted — specifically addressing deepfakes, and I don't mean to suggest that some states haven't addressed deepfakes at all. Wisconsin, for example, is a recent one that enacted anti-deepfake legislation dealing with campaign speech and political candidates. So I could see it being more specifically regulated, and perhaps even broader than just the consumer protection context. Domestic and family relations might be one area, or all those laws in the criminal code that deal with harassment toward someone else — I could see regulation done there in addition to what Utah has done here. So no doubt we'll be seeing a bunch of different approaches, but, probably like the right of publicity and other areas of state law, they'll normalize over time.
Interesting stuff. Well, Evan, it has been an absolute privilege and honor to have you on the show for the first time and to be able to talk with you again.
More times, we hope. Yes, absolutely — legal issues are coming up constantly.
I hope so too. The privilege and honor's been all mine. Really enjoyed it. Thanks for having me on.
Thank you, Evan Brown, again, partner at Neal & McDevitt in Chicago. You can go to evan.law to read his writing and to find him for any reason you might need to find Evan.
Thank you. Thank you. Pleasure. We'll talk with you soon.
All right, bye-bye. And we've got more news. Do we ever — yeah, we have some really big news coming up, so hang tight. All right. This was a big deal. Microsoft AI is now a thing. It is official, and it has a shiny new CEO.
It was going to have a CEO before, let's not forget, in Sam Altman. That's right. That's right. For about an hour, he was going to be the CEO of Microsoft AI, and then wasn't.
My, how fast things change. Yeah. And now it's Mustafa Suleyman, who was a DeepMind co-founder a number of years ago and, more recently, co-founder and CEO of Inflection AI, which is another of the AI startups trying to go toe to toe with behemoths like Microsoft. And now Microsoft has scooped them up. Inflection is a $4 billion AI company. Like I said, it's a startup, but with backers like Bill Gates, Eric Schmidt, Nvidia, Microsoft, Reid Hoffman. That's right. Although some people are being a little critical of their impact: one million daily active users, which in the grand scheme of things, I suppose, is not a lot.
A million always sounds like a lot to me, but in the scope of these businesses, I suppose it's not much. But you were the one that really dropped these stories into the rundown, and I'm happy that you did. Tell me a little bit about where your mind is at with this. What's interesting about this?
So you have Suleyman and Karén Simonyan, who are moving from Inflection to Microsoft to head up a division there. So it looks like they got kind of stolen. But Inflection then hired Sean White, formerly, I think, of Mozilla — he was head of R&D at Mozilla — to be the CEO of Inflection. And Inflection — I'm not sure I understand this exactly — is pivoting somewhat. Inflection started Pi, which Reid Hoffman touted as a way to be far more human in its communication with us, and a lot of people like Pi. They're going to be, I think, more of a kind of AI integrator, as I understand it, but I could be wrong about this.
Whereas the heavy-duty AI development will go to Microsoft. And Reid Hoffman tweeted about this because, clearly — Reid is a pretty amazing puppeteer across Silicon Valley, because he is connected with everybody. He knows everybody.
They trust him. And so it looks to me like kind of a win-win for everybody here. Inflection pivots to something that's probably going to do better at a more reasonable scale, with a good CEO. Microsoft gets its new AI division, which gives it some independence, I would imagine, from its current dependence on OpenAI.
So I just found this really interesting all around, including how it happened. You have Reid Hoffman posting, saying, I'm grateful to the early investors, including his company Greylock, who believed in this vision. The agreement with Microsoft, he writes, means that all of Inflection's investors will have a good outcome today, and I anticipate good upside in the future.
And then again, Suleyman and company get an incredibly powerful perch in the future of AI. So I just think this is something to watch. That's why I put it up high in the rundown.
Yeah, no, absolutely. And people are responding pretty strongly to this as another one of those earth-shattering moments in the development of AI right now. One thing that's interesting to me: when we take a look at — why am I suddenly blanking — Inflection and their chatbot Pi, that is a chatbot that I have not had any personal experience with, to be honest.
I tried it twice, but I forget. Yeah, yeah. I mean, there are a lot of consumer AI chatbot plays out there right now, and then you've got Microsoft and Google and these major behemoths in the room. And a lot of people are looking at this and being like, hey, you're a co-founder of a company that was valued at $4 billion, this AI startup.
And still, there is something enticing about going to Microsoft to work with them. Who knows what's going on behind the scenes? Does this say something about consumer generative AI, as some people are alluding to — that maybe there's not as much of a "there" as people have thought for the past year? Does this signal anything? I don't know enough to know.
Yeah, I mean, we don't actually know — which should stop me from saying anything right there, but I'll keep going for a minute. I think there is a kind of systems integrator role: taking a model and adapting it to a given company's needs, a given application, a given set of data. And I think — again, I'm not 100% certain — but I sense that's kind of where Inflection sees some of their skills, or their niche. And again, it comes back to this three-tier world, where you have the model makers, you have the application layer, and then you have the user layer. We've seen a lot of attention on that model-maker layer — and we'll talk more about this when we get to Nvidia — but there's more needed, I think, at the application layer. Companies don't know what they're doing here.
They have specific needs; there are specific opportunities. So I don't know where this pans out in the end. Making consumer bots — to my mind, that's the right question, Jason; it'll be a while before we get to an answer. I think that ChatGPT and all that is more of a demonstration project, now that you mention it: oh, here's what we can do. You give it this general prompt and it can do stuff.
By the way, it's going to be wrong three quarters of the time. But it's fun to play with. Well, that's not terribly useful in the end for most purposes, other than making up stuff.
So you've got to constrain it, constrain its data, test what it can do, at a more useful layer. And so I wonder if you almost have an OEM structure here — and this is coming live before your very eyes; I don't know if this is true or not — but I think OpenAI, Microsoft now, Google, and Meta are going to make models. But those general models aren't necessarily going to hit AGI. They're not going to be generally useful for every possible task. They're going to have to be made useful for tasks.
So that's — if that makes any sense. Yeah, yeah, that makes sense. It'll be interesting to see. I'm also super curious to see how Inflection navigates the waters, moving on from this consumer chatbot play into more of an enterprise, behind-the-scenes role. And I came across a number of articles as I was reading up on this story where people were really questioning whether the real money here — and actually it makes a lot of sense — maybe the real money isn't in the consumer play. It's really in what's happening behind the scenes, powering the businesses. Right.
But, you know, say you're AWS and you make a fortune on the cloud at a B2B level — if there's nothing serving consumers built on it, you're not going to be making money as AWS. So you need the multiple layers here. Yeah.
For sure. For sure. And then there's Apple. Okay. So earlier this week, I was on the Apple Vision show with Eileen Rivera and Sarah Lane. They invited me on — I'm not normally on Apple shows, because I don't have an iOS device.
Did you feel a little disloyal there, Jason?
No, no, not at all. I always look forward to opportunities like that because Apple is so important. It's just not the mobile device that I use on a regular basis. I have Macs throughout the home, so I'm an Apple user, just not an iPhone user necessarily. But I did realize, in coming onto the show with them, they invited me on to talk a little bit about Apple and AI, and I was like, man, I don't think we've had Apple on the topic list once. And I'm like, is it because they're not making news? Or is it because I've got blinders on and I'm not noticing? But it turns out there is some news.
And so I thought we should talk about it. Apple is possibly partnering up with Google for its Gemini AI platform, potentially coming to the iPhone in some way, shape, or form. It turns out Apple is working on a number of ways, with the next version of iOS, that AI is going to be integrated into the OS, in ways that I think we're starting to get used to. Not summarizing your photos, but summarizing your documents or your chats and putting them into a nice summary, along with some of the other camera enhancements. These are the tasks that LLMs are more and more being used for, and Google has definitely been doing this on the Android platform, really integrating its AI into the services that you're already using on your device. And so it would be really surprising if Apple didn't do that at some point, because they all want to be where the action is. And sometimes Apple plays a little bit of the game of not arriving immediately when there's a big trend, but arriving when the time they feel is right. And it seems like that time is now. What are your thoughts on this?
I just wonder what all the applications are for AI. I mean, Google has advanced so much with the Tensor chips, putting Tensor into our phones and bringing more AI into the phone. And that's not at the application layer in the sense of the apps you buy, that'll come, but it's about Google offering Google services using AI locally, Google offering Google advertising locally through the sandbox. So Google has really moved very far in putting AI into the device.
And so what I was thinking about with the story is that I don't know what Apple will do with it. They've kind of been left behind, so they've got to do stuff, but they don't have an ad network to work with. They don't have a search engine; they use Google's. Maybe this will help them use Google services better on the iPhone, too. But I can imagine that kind of rankles Apple.
So I just don't know what the uses are that Apple intends. Of course we did a story on the show a little bit ago that Apple moved its people from its car plan over to the AI team. Oh, that's right. So at least we mentioned them. They obviously are putting resources into AI, and hell, they have to. Yeah. But I haven't seen a signal yet of, oh, that translates to this. I could see it with creative stuff, but Adobe is probably going to do more in that range than Apple would.
I don't know. Yeah. They've got to have it represented in one way, shape, or form. And if you're using their email client, do they use an LLM to help you write better emails, or some sort of integration like that? We've seen Google doing a lot of that. Whether people are actually using it, I mean, I don't use an AI to write emails, but I certainly do to write show notes.
Well, you do more than you know, because it'll suggest the next word to you and all that.
Well, that's true. Yeah. How far back do we want to go? How much do we want to open that door to say, okay, well, that is generative AI working? Because I do use those systems to correct my grammar and stuff like that. And sometimes it's more than just a word.
Sometimes it's a couple of words. And yeah, that's all generative too. It's all using kind of the same system.
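The next-word suggestion Jeff mentions really is generative prediction in miniature. Here's a toy sketch of the idea, not any vendor's actual autocomplete, assuming nothing fancier than a tiny bigram model counted from a few words of text:

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count which word follows which in a small text corpus."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def suggest_next(counts, word):
    """Suggest the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the model predicts the next word and the next word after that"
model = train_bigrams(corpus)
print(suggest_next(model, "the"))   # -> "next" ("the next" appears twice)
print(suggest_next(model, "next"))  # -> "word"
```

Real LLMs do the same thing with neural networks over tokens rather than raw word counts, but the loop, predict the likeliest continuation of what came before, is the same.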
So that's a really, really fair point. I think another thing that's really interesting about this is that we're already in a moment where the regulators who look upon big tech are doing so a lot in the EU, and here in the US, a little bit behind the EU. These regulatory implications are a big deal right now. And if you've got Apple going into cahoots with Google again on this growing technology, one that a lot of the regulators potentially already see as a bad thing, or at least one with a lot of pitfalls, you've got to be careful. I could just see this being a real perfect storm of, okay, you thought it was tough before? Prepare yourself, because that regulatory pressure is coming for you.
But then again, to speak for Apple's sake: if they don't have those things, they're fine, they can add stuff to their email. But if they're not the search engine, if they aren't the ad network, yet they want to get the benefits of those things, they're better off working with Google. Yeah.
At least for now. Yeah. Yep. You had to know that these features were coming to the iPhone at some point, and it looks like possibly this would be the year. I should also mention that the report said the same sources made clear there was some discussion around possibly using OpenAI, too. So it sounds like it's still somewhat up in the air, but the article by Mark Gurman focuses on the Google relationship more than the OpenAI one.
So we'll see how it all pans out. You did some live coverage with Mikah Sargent at TWiT of the Nvidia talk by Jensen Huang just a couple of days ago. And it was kind of a big deal. It was also very long. I noticed it was like two and a half hours.
It was over two hours. And it was him alone on stage. Wow. A person came on stage to hand him a chip. That was it. Otherwise, it was completely Jensen Huang. Fairly impressive. He made jokes about not rehearsing.
We tend to treat these things as showbiz. His jokes petaflopped. Yeah, thanks. I had a good life somewhere. It was impressive as hell. Of course, Mikah and I both confessed that we didn't understand much of it.
Yeah. But what struck me, Jason, was the scale of it. The size of this new Blackwell chip that supersedes the Grace Hopper chip. What it can do, how they're tied together, how in the racks there's a new communications chip that lets them all communicate in speedier ways. It emphasizes the exponential power of what's possible. There were a lot of exponential hockey-stick graphs in this.
And at one point they're using animation, or using the AI, to draw a picture of a server farm. And it goes on and on and on, and knowing how powerful each piece in a rack is, how powerful each rack is, and then how powerful each row is, and so on and so forth, it becomes too big. Awe-inspiring.
Yeah, it becomes too big to understand, I think.
I get why some people, and I still am going to make fun of the doomsters who say it's going to destroy us. But I start to get a little bit of an inkling of why they get freaked, because of how big and how powerful this is. And the other problem is, I constantly quote the stochastic parrots paper. Even with AI as it was two or three years ago, they were, I think, rightly complaining that the boys of AI were trying to make their models too big, big for the sake of big.
And that makes it impossible to audit what goes into them and what comes out of them, and to understand how they work. And we need smaller models. Yann LeCun has been talking about smaller models. But here is a machine that goes for gargantuan models, gargantuan sets of training data. And Huang at one point said something about, well, you need more data to fill this. Well, that creates an expectation, almost an ethic, of saying that we have to gobble up everything we possibly can, and there's not enough, so we're going to make up a whole bunch of stuff just to feed the machine. And it becomes a self-fulfilling monster. I don't mean monster in a moral-panic way, but just in terms of a gigantic maw. It's very hungry.
Yeah.
Yeah, incredibly hungry. Could anyone possibly feed it enough? And I don't want to make fun of people who say things like that, because it gave me pause about where that direction is going. Yeah. The other interesting thing, minor stuff: he mentions transformers a lot, for example, but never gives credit to Google for that. Steven Levy just today, right before the show, put up an interview with the eight original authors of the Google paper that created transformers, that created all of this. So I found that interesting.
There were areas where they kind of didn't credit things. It was like I was listening to an Intel Inside speech: Nvidia inside everything. And Nvidia has bigger chips. You've got to have all this stuff, and you've got to buy it from them. And you've got to be bigger and bigger and bigger.
You got to replace it. And that's the economic engine that drives them. And that speaks to huge companies owning AI.
And I really still want open source, small-model, controlled-model efforts here to work. And I can see this coming fight of gigantic versus human scale. That's what I took away from the Nvidia talk.
Yeah. I mean, I'm right there with you. When it comes to the truly, highly technical aspects of this, I'll fully admit, it's lost on me. That's not my specialty. The numbers are just insane, so far out of the realm of anything that I could even comprehend or compare to, other than big number, bigger number. You know what I mean? They're just so far out there that there's no way for me to put them into perspective.
What, for you, is 30 to 50 billion quadrillion floating point calculations? That's nothing.
Like my wallet, you can do that with your slide rule. Yeah. Right. And as Mikah was saying at one point, you could hear them fighting against Moore's law, but they've come up with their new laws, so that now this is so much bigger than it was.
So it's not dependent upon the number of transistors in the chip; it's dependent upon, and I may be getting this wrong, how all of these huge chips communicate now.
A lot of it's water-cooled, which is kind of interesting. So yeah, it was a fascinating thing. I'm glad I sat through it. I didn't understand a lot of it. He said some other interesting things. He talks a lot about the omniverse, which is just one of their terms; accelerated computing, which sounds like effective accelerationism; being not a chip foundry but an AI foundry. This is one interesting thing that I don't know that I fully understood, but I think Nvidia is also creating models.
And when does it turn into competition with its customers? And then he talked about this verb: after you do this and this and this to your model, you then "guardrail" it, as if that's a stage. And then at the end, he went into robots at considerable length and said that this will be the rule:
Everything that moves will be robotic. That's pretty good. If it can exercise for me, good. I'm fine with that. So that's my report from two hours of a half-understood Nvidia. Yeah, fair. I was waiting for him to get to the watch. I thought we'd have the Nvidia watch. It never came. I was disappointed.
Nvidia, like a smart watch.
I think you ought to have, you know, a Blackwell chip with all these floating points on your wrist, I think.
Yeah, someday we'll have that, the miniaturized version of the actual system, which will be exponentially larger than anything we can comprehend even right now. Interesting stuff. Well, you mentioned open source AI and the importance of that. So I'm sure you noticed that, as we discussed last week, following Elon Musk's tweet that he planned to open source Grok, xAI's large language model, it has in fact happened. The open source release of Grok-1 is now officially available on GitHub. They called out explicitly that it hasn't had any correction or fine-tuning, quote, "for any specific applications such as dialogue," which, I mean, I think is, you know, that's the, what is it?
That's the thing Elon Musk has been emphasizing about why his LLM is different from others: it's uncensored, it's not woke, and any number of things along those lines.
So yeah, I can't wait for actual real geeks to judge it.
Yeah, kind of put it through its paces. I have no means to do so. Yeah.
One thing that's interesting here: it includes the weights. Those are the connections in the model that actually make the decisions, managing the input and the output to create the text and everything. But it doesn't include other things like the training code, and it doesn't include the data sets, which other open source models, like Pythia and BLOOM, do include.
So it's open source under an Apache 2.0 license, allowing commercial use, minus the data set and minus the real-time X data feed, as you could imagine, I'm sure. So there you go.
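The distinction being drawn here, weights released but training code and data withheld, can be pictured with a toy checkpoint. This is a hypothetical sketch that assumes nothing about Grok-1's real file format or architecture; it just shows that weights are learned parameter arrays you can run, while the recipe that produced them stays absent:

```python
import io
import numpy as np

# "Weights" are just the learned parameter arrays of a network.
rng = np.random.default_rng(0)
weights = {
    "embed": rng.normal(size=(16, 8)),  # token embedding table
    "dense": rng.normal(size=(8, 16)),  # one linear layer
}

# A weights-only release ships something like this archive...
buf = io.BytesIO()
np.savez(buf, **weights)
buf.seek(0)
ckpt = np.load(buf)

# ...which lets anyone load the parameters and run the model:
def forward(token_id):
    hidden = ckpt["embed"][token_id]   # look up the embedding
    logits = hidden @ ckpt["dense"]    # apply the layer
    return int(np.argmax(logits))      # likeliest "next token"

next_token = forward(3)
# But the training code, the data, and the recipe that produced
# these numbers are nowhere in the archive.
```

That's why a weights-only release lets people use and fine-tune a model without being able to audit or reproduce how it was trained.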
Yeah, I've looked briefly at social media about it. I haven't seen any judgment of it yet.
I haven't either. And I was looking for that too. I was super curious to see what people, you know, think right out of the gate, but I'm sure that's coming. And when it does.
Why do I have the sense that it's the Wizard of Oz?
Like there's nothing behind the curtain. We'll see. Other than Elon Musk. Yeah, busted. Well, we'll certainly find out. And then just real quick, finally, before we round things out here, you put in a link to a YouTube video. Granted, I didn't watch the entire thing, but I watched the first little bit to get a sense of it. Tell me about this "Generative AI in a Nutshell."
So Lev Manovich, a scholar I respect immensely at the CUNY Graduate Center, is a digital humanities expert who really understands the field, applies it to art, and does a lot of interesting work. He just put this video up and said this is a good basic video that describes generative AI.
And I ended up watching the whole thing. There's nothing at all earth-shattering here whatsoever. But if you find people who just want a very simple primer on what generative AI is, I found this to be clear and understandable. So I think it's just a service to our audience. Probably those folks who are watching the show know more than this video already, but for people who don't, when you want to describe it to your dad, I find that this was a pretty good video for that. Seventeen minutes long, I think.
Yeah, I was going to say, it's not that long either, about 18 minutes, and it's very approachable. The first five or six minutes that I watched were very engaging; he puts it into terms that are really easy to understand, comparing it to Einstein, as he's doing right now. And it's not so long as to belabor it, so someone who wants to understand everything won't get bored and start to tune out. It's done really well. So that's a great share. It's called "Generative AI in a Nutshell: How to Survive and Thrive in the Age of AI." Look for Henrik. Is it Kniberg?
Kniberg? K-N-I-B-E-R-G. Kniberg. He pronounces Einstein the German way. Yes, he does. He's German.
Yes, indeed. Cool stuff. Well, as seems to happen every single week, plenty to talk about, and we hardly even scratch the surface because there's just always so much news.
Yeah, we do our best to talk about the really important stuff, the stuff that intrigues us anyway. So I hope that you enjoy the picks. I hope that you enjoy Evan Brown.
I know I did. It was great to have Evan on to talk a little bit about regulation of AI and kind of where that's headed. And I'm sure we'll have another opportunity to bring Evan back in the future.
Lots of issues of copyright and such we'll be talking about.
Indeed. Never ends. What do you want to point people to, Jeff, Gutenberg Parenthesis?
GutenbergParenthesis.com. Yep. That's that. I've got to get the next book out. But for now, discount codes for Magazine and The Gutenberg Parenthesis are there.
Yeah. When you get there, don't forget to scroll down, because The Gutenberg Parenthesis is the book that this site is named after. And then up here, kind of a bonus, is Magazine, a little book about magazines.
A little one. Yeah, I love that. Well, thank you, Jeff.
That was awesome. Great time hanging out with you, talking about AI. If people want to follow kind of the work that I'm doing, well, you probably already know half of it.
YellowgoldStudios.com is the YouTube channel where we stream the show AI Inside live every week. But I'm also doing some reviews of different products and playing around with different ideas here. And I've got another review coming up at the end of this week.
So you can look out for that. Just go to YellowgoldStudios.com and that'll take you right to the channel. As for this show, AIInside.show is the website for the actual podcast. If you want to subscribe to the audio podcast, you can find all of the episodes posted there. It's all laid out in a very easy-to-understand way.
And yeah, I quite like it a lot. AIInside.show is where you can find all the information. Actually, if you don't want to go to YouTube, you don't have to. I embed the video version into each post on the website.
So if you're not a YouTube fan, that's okay. You can still get the video at AIInside.show, embedded into each of the episodes; we're trying to make it easy for you. And then finally, if you want to support us directly, absolutely, we would love to have you. Patreon.com/AIInsideshow is the place where you can go. There are a couple of different tiers for the different levels of support that you can give: ad-free listening, of course, at the general AI level; extra content at the super AI level; and a narrow AI level, at $5 a month, that's just more general support, like, hey, I like what you guys are doing.
So patreon.com/aiinsideshow. Thank you so very much for your support. Thank you for watching. Thank you for listening. Jeff and I will see you next time on AI Inside. Bye, everybody.