Jason Howell and Jeff Jarvis discuss the week's AI news, including Sam Altman's call for $7 trillion in AI funding, Google's launch of Gemini Ultra 1.0 chatbot, proposed regulations on AI safety, dismissal of copyright claims against AI, and the need for humanities education in the AI field.
NOTE: Connectivity issues resulted in a lower-resolution file for part of the show. Apologies!
NEWS
- Sam Altman wants $7 trillion to boost AI chip and GPU production globally
- ChatGPT gaining ability to remember user preferences and data
- OpenAI building web and device control agents
- Google's Assistant is now called Gemini on Android devices
- Google announces Gemini Ultra 1.0 model to compete with GPT-4
- California bill proposes AI safety regulations and requirements
- AI companies agree to limit election deepfakes
- Most claims dismissed in Sarah Silverman copyright lawsuit, leaving only 1 direct copyright claim
- Beijing court rules AI-generated content can be copyrighted
- NYT op-ed argues humanities education is key to developing AI leaders
Hosted on Acast. See acast.com/privacy for more information.
This is AI Inside Episode 4, recorded Wednesday, February 14, 2024: Altman's $7 trillion dream. This episode of AI Inside is made possible by our wonderful patrons at patreon.com/aiinsideshow. If you like what you hear, hop on over and support us directly. And thank you for making independent podcasting possible.
Well, hello, everyone, and welcome to another episode of AI Inside, the weekly opportunity for, at least I can speak for myself, for me to learn even more about artificial intelligence than I knew the week before.
That's my goal, anyways. And hopefully you're along for the ride and you're learning along with us. I'm Jason Howell, joined this week, as always, by Jeff Jarvis. Hello, boss. How are you doing?
All right. Can't complain. I'm actually really excited for this episode, as we were kind of leading up to this episode. We're going to be doing things a little bit differently today than we had the first three episodes.
And I just realized, like, it's been a little while since I left TWiT and had the opportunity to just kind of freeform through stories and stuff. And that's what we're going to do today. There's lots and lots to discuss, always. I mean, too much stuff to get to it all. But I think we have a nice little curated list. You and I were throwing stories that were catching our attention throughout the week into the doc. And we've got some, yeah, there's some pretty important stuff in here to talk about.
Some stories kind of getting maybe not an end to the story, but, you know, certainly a progression and some fun, interesting stuff, all sorts of things. So we don't have a guest this week. So it's Jeff and I talking the news. Super excited for that. We'll see, you know, who knows, maybe we'll do this like once a month, something like that. So it gives us the opportunity to just kind of hang out with each other and with you and talk about the news of the day.
And then just a little programming note. Next week, we will not have a live episode, but we will have an episode of the podcast. I'm going to be in Park City on the slopes for five days, snowboarding with my family and some friends. I am so excited. Are you a snow person? Do you like the cold weather and the snow? No.
What's that? I'm too clumsy. I have size 12 feet. That's already skis.
And I was going to say, you don't need equipment at that point. No, no. Well, it's a blast. Given I don't hurt myself. I also hate heights.
So no, not.
Oh, it's just so wonderful. Like, literally, this is one of my favorite times of the year, being able to go on a trip like this and just explore.
It's all about the exploration and the adventure. OK, this has nothing to do with artificial intelligence. Maybe someday our skis and our snowboards will. But real quick, before we get into the news: we are a new podcast. As you know, I've been asking for the last couple of weeks, and you have been delivering. Thank you so much for giving us a review on Apple Podcasts.
Love to continue the trend. If you've been listening for a couple of weeks and you're like, OK, I now have a feeling about how things are going and I want to share my thoughts, please do so. Head over to Apple Podcasts, or any other podcatcher that allows reviews, and share a review. We would really appreciate it. And then if you want to support us directly, you will sometimes hear me talk about that. Oh, sorry, that's the wrong one. Patreon.com/aiinsideshow. That is the place where you can support this show directly.
I am going to do this every week. I'm going to call out one of our paid patrons and it just so happens. I didn't realize this until I looked for it today.
Daily Tech News Show, which is Tom Merritt's podcast, was the very first paid patron. Oh, that's wonderful. God bless Tom. So Tom's a great guy, super supportive. He's been helping me behind the scenes, kind of get my act together in a lot of ways. So I'm not that surprised to find that he was the first person to become a paid patron.
Absolutely. And he's so good at what he does. It really blows me away.
Now that I'm doing what I'm doing, I'm like, oh, my goodness, this stuff's hard and he makes it look easy. Patreon.com/aiinsideshow to support us, and we will call you out in a future episode, shine a little light on your kindness. And yeah, we thank you for your support. OK, with that, it's time to get into the news.
We got some fun stuff to talk about. And I think maybe we start with Sam Altman. It's hard not to start with Sam Altman. He really is kind of the poster boy for modern AI news and everything that's happening: CEO of OpenAI, on the hunt for more money. Jeff, he wants seven trillion dollars. That is, I mean, that's a lot. That's a lot to expect for anything. I mean, it's kind of crazy. What?
Yeah. What do you think about it? I mean, what he's saying is, we want to boost the building of AI chips and GPUs worldwide, so that we can do what we need to as a company, and so that the industry can become what it deserves to be.
And I think, you know, I've seen AGI floating around all over the place as far as the ultimate destination. I don't know how true that is. But what do you think? I mean, seven trillion dollars is an insane amount of money. It's hard to even visualize and compare an amount of money like that.
Well, I think it's incredible, but that's what Altman has more than anything else: chutzpah. Consider that he's not an engineer. He doesn't come from the technology side in that sense. He did the startup world and so on. But the audacity that he has is quite amazing.
I do think that he's onto something popular here, because there's a need for competition in these chips. Nvidia just passed Google in total value, to a market cap of $1.83 trillion. It trails only Microsoft and Apple now, I think it is.
So that's pretty amazing. But it's built out of the fact that Nvidia has a chokehold, pretty much, on the market. And whatever Altman tries to build will have customers. The question is, is he the guy to do it?
There is a long lag time in terms of building this stuff, the infrastructure to build the chips. I think the Congress and the White House will probably pat him on the back and say, good, you know, build in the USA. And so I think that Altman just proves to be prescient and smart in his PR timing, as usual.
Yeah, yeah. I mean, the cost is obviously high on something like this, the monetary cost. And the point that you made as far as the lag time between, say, the beginning of this effort and when we actually start to see any sort of change: we're talking years, I imagine, before any of that outcome starts to materialize. But it also doesn't really take into consideration the energy cost of this sort of thing, having that amount of production for these kinds of things.
I mean, you know, not to mention natural resource draw. It's just kind of crazy to think about the scale and scope of what Altman is saying here, not to mention, like you said, is Altman the right person to be drumming up the support? I think he certainly believes so.
And why wouldn't he? He's at the heart of everybody's conversations around this modern AI moment. He's really the guy that everybody wants to be on the same page with because of what they've been able to do at OpenAI.
So, you know, this is a strange thing to say for a show called AI Inside. But I think we have to wonder. You know, when I started with a newspaper company on the Internet in 1994, they made separate companies because they weren't sure this Internet thing was going to last, which now, of course, is amusing as hell, especially when you consider how bad newspapers have been online.
So I'll repeat the foolishness right now: who knows whether this AI thing is going to last? Obviously, it's going to last. Obviously, it's been around. It's going to be around. There's going to be demand.
But I keep on wondering whether generative AI and LLMs are all they're cracked up to be. Is there, in fact, a huge business there? The point is, will that generate the demand for these chips that everyone is anticipating?
I think probably yes. But I think there could be a downside surprise here, where it doesn't have as much economic impact as people are hoping for. And if that's the case, then investment in that kind of development would go down, and then the demand for the chips would go down. Nonetheless, if I were betting and if I were investing, and if I had a spare trillion dollars, I'd probably put it in this.
Spare trillion dollars. No big deal. Oh, we can hope that.
I've got to hold on to the trillion I have, Jason. I can't give it up.
Yeah, true. Fair enough.
Well, I'm hired now.
Well, I've got to follow in your footsteps, because I'm nowhere near even a billion, or a million. So that's all relative. And I think what you're talking about, as far as the long-term importance of LLMs, is kind of illustrated in the next couple of stories that we have here. ChatGPT getting memory. I kind of grouped some of these ChatGPT and OpenAI stories together to keep them in line with each other, but: rolling out memory. And I think the next couple of stories that we talk about are all about agents and personal agents. Some of this has to do with OpenAI. Some of it will have to do with Google and its news.
We'll talk about that in a little bit. But I think at the core of it is: as a user, if I'm going to use these services, I want to know that it's going to be able to provide for me the things that I want. And as a user, I think the most benefit that I stand to gain from it is if it knows a lot about me. Now, that might make privacy advocates wary, of course, if we're giving up all of our information to the extent that these services know us almost better than we know ourselves, so that they can be really useful to us when we need them to be. But, setting the privacy stuff aside, that is, I think, what general users would like out of these systems, if they could keep that privacy thing adhered to in some way, shape, or form.
I don't know if both of those things are possible side by side, or if we have a really great solution. But at least here, ChatGPT is gaining the ability to remember things about you: either things that you tell it, like remember that I like the color blue, which it'll keep in mind in future conversations or threads and apply when it's appropriate, or things it learns about you that'll be helpful to future queries. And I don't know, when we're talking about personal agents, I feel like that's a necessity. What do you think?
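To make the mechanism concrete, here is a minimal, hypothetical sketch of the pattern being described: persisting facts a user shares, then injecting them into future sessions so the model can apply them when relevant. It assumes the openai Python package; the local user_memory.json store, helper names, and model name are invented for illustration, and none of this reflects how OpenAI actually implements its Memory feature.

```python
# Hypothetical sketch of a "memory" layer around a chat API.
# Assumes the openai package (>= 1.0) and an OPENAI_API_KEY in the environment.
# The local JSON store and helper names are invented for illustration.
import json
from pathlib import Path

from openai import OpenAI

MEMORY_FILE = Path("user_memory.json")  # hypothetical persistent store


def load_memories() -> list[str]:
    """Read remembered facts from disk; empty list on first run."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []


def save_memory(fact: str) -> None:
    """Append a fact, e.g. 'The user likes the color blue.'"""
    memories = load_memories()
    memories.append(fact)
    MEMORY_FILE.write_text(json.dumps(memories))


def chat(user_message: str) -> str:
    """Start a fresh conversation, but prepend stored memories so the
    model can apply them when appropriate, as described above."""
    client = OpenAI()
    memory_note = "Known facts about this user: " + "; ".join(load_memories())
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": memory_note},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content


# Example:
# save_memory("The user likes the color blue.")
# print(chat("Suggest a color scheme for my slides."))
```

Whatever sits in that store is exactly what the service "knows about you," which is why the user-visible controls for viewing and deleting memories discussed next matter so much.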
Yeah, I'm surprised that that happened already. I think the people- Yeah, me too. ... were actually in a discussion with one of these bots, and they have a task. They're trying to do something.
And then they've got to start all over again the next time. Bard, now Gemini, drives me nuts because it speaks to me in different languages. It'll suddenly switch to Hindi and Thai, not just German. No kidding.
It'll switch to all kinds of weird languages for no reason. It has nothing to do with the subject matter. And I tell it, speak to me in English because I'm a stupid American.
That's all I speak. And it'll say, okay, I'll do that, that time. But the next session, it forgets me, and it does it again. So there's the customizability. There's the tasking: being able to continue a task over time, being able to be additive to the learning that occurs, I think, is important. You're quite right that it's going to be a privacy freakout. Oh my God, it knows things about us. And Google just put out a warning: don't tell anything personal to these machines.
Don't do it. And they're right to do that. And companies have learned that they shouldn't put data up there for fear it's going to end up in the learning and get out there. I think there's a few things. One is that there has to be a wall that says, if you tell the machine this, it doesn't use it for learning, it doesn't use it elsewhere. Otherwise, people are going to have no faith, just like when Facebook started various of the things they started. It's going to be a problem. Yeah. Number two is that the user has to have control over this. I told you I speak Hindi; I really don't. Take that out of your memory, stop that.
Or, I was doing a project and I made believe I was something or other; now forget all of that, and don't remember it for me. That's just going to be convenience and power for the user. And those are things that social media and media should have learned long since, for everything from cookies to social feeds.
Users should have had more transparency about what was being remembered and how it was used, with control over getting rid of it. And I don't mean this "I own my data" business, because our conversation right now isn't owned by me or you; it's owned by us.
If you cut me out, it's, well, maybe a better conversation, but it's just you then. But when you do tell the program something for a purpose, I think you should have transparency onto that and the ability to change it, which is going to give people a lot more trust and faith in this. So we'll see how well they do implementing this. It'll be very interesting. And you're quite right that this is going to be vital when it comes to building agents, because it's going to want to know that I travel in the aisle seat and I hate going over bridges and I want the first flight. If it's going to be a travel agent for me, it's going to have to know those things to be useful.
Yeah, not keep asking you every single time and forgetting, and all those things. We will expect that of it at some point. And OpenAI also had news that it's building a few different kinds of agents, as we're talking about: one for web tasks, so things like expense reports, travel bookings, like you were saying, those kinds of things, and another for device control, so things like productivity tasks, everything all integrated together. That's a step moving towards AI becoming even more of a personal assistant, which could end up being really the story of 2024 when it comes to AI, right?
Like, we're going to talk about Google next, and that's really where all this seems to be heading, at least in my mind. The last year, year and a half has been about using our imagination and realizing, oh, AI is powerful enough to do these things now. Or the systems that are being built around this technology finally seem capable enough to do some of these things that it couldn't do before, that it didn't do very well before.
Now we're there. How do we integrate it into our lives and make our lives better, or more effective, with less time spent doing all these things? How can we use this as that personal assistant ideal that can really empower us as human users to do things in our life? Not just that it's capable of doing things that we didn't believe or understand it was capable of before, but that we are suddenly more capable, or freed up to do other things, as a result. This really seems like the story of 2024 at this early stage, as far as what we have to look forward to.
Yeah, I think it's like the smart home, which, every time I look to try to hook these things up, I'm not Stacey, I think about it, I give up, I don't do it. But one can imagine that if I could speak to the system in English, it could teach me things. It could say, well, you're wasting a lot of energy here, that kind of stuff. I start to see where agentry makes sense.
I think it's less with devices than it is with tasks. I mean, the research for this show: to be able to pull interesting information for us from stuff out there. Being able to watch for alerts on things. Being able to do certain easy tasks.
I just booked a trip tonight to Washington, DC. So, you know, getting the train and doing all that, one can imagine, whether it's my agent or whether it's someone's agent as a service. This is going to advance.
And I think it's going to be on these kinds of very practical levels where AI is going to enter our lives, more than on the really high, crazy levels. Yeah.
Yeah. Now, once again, here, it requires knowledge of a user's history and data set to be truly useful. And I think that right there segues pretty perfectly into Google's news, which is that Bard, as we've come to know it, is now Gemini.
As Google loves to change names for no good reason. Oh my goodness. Yes. It's the easy joke at this point.
I mean, Google just cannot make up its mind on its nomenclature around its product names, everything. But, you know, so Assistant, if you were using Assistant on your devices, that's essentially being replaced by Gemini. They actually released an app for iOS and for Android for Gemini. And if you install it on Android, it replaces the Assistant gesture, the ways that you would summon Assistant on your phone, and now it summons Gemini. And I think there are some features that Gemini doesn't do that Assistant did. So, you know, you might either love the change or hate it, because now suddenly things don't work the way you expect them to.
But this is really the direction. Do you know how to install it? Have you installed it?
Install Gemini? Yeah. Oh, yeah. Yeah, I installed it. I mean, it's just an app through the Play Store. And once you install it, then it kind of overtakes the actual ways that you summon Assistant.
If I'm not mistaken, I searched on Gemini. There it is.
Yeah, like I can show, you know, this is my home screen of my phone. If I do the swipe from the corner, instead of getting Assistant, now I get the Gemini pop-up. And, of course, it's transcribing everything that I say, and it would be really funny to see what it actually comes back with from my last sentence. But anyways, Gemini being there to replace Assistant in all places is not quite there yet. So you still have your Assistant in your earbuds and in your car, if you have Android Auto, and that sort of stuff. But that's just one part of the announcement. They also announced Gemini Ultra 1.0, which they called their largest and most capable AI model.
It competes directly with GPT-4 when you're talking performance benchmarks. And this ties back to what we were just talking about, as far as knowing a user's online history and data set: it's going to work with YouTube, Maps, and soon it's going to be working with Workspace apps like Gmail, Docs, Sheets, everything. I mean, even with just the app, which I realize isn't Ultra, I asked it about my trip that's coming up next week. We're going to Park City, Utah, to go skiing. And I was like, I'm looking for details about my trip. And I know that I've got multiple emails with multiple different people; there's like three different families going. Some of those conversations are about, you owe this much for it, because you're only staying four nights versus six nights for the rest, and everything. And it pulled back this summary from all of these conversations that I've had in email with all these different people that was incredibly useful.
It was like, oh, wait a minute. So this is the beauty of a system like this knowing all these things, which is going to make some people nervous. But instead of me having to open up my email and do a search, open up a notepad and move pieces of data over and everything, it just kind of did it for me. It wasn't perfect, but it was pretty darn neat as a starting point.
So, very cool. Jim Collison asks in the chat, why don't Gemini and NotebookLM merge? When I saw Steven Johnson, the NotebookLM that he showed me at the time was running on the version of Gemini we'd already seen, regular Gemini, I guess, before Ultra. That's what he was showing us, but he said he had also seen it with Ultra, and how much more amazing it was. So yes, I wouldn't say merge, but Gemini is going to power NotebookLM.
And I think that's going to be interesting to watch. So the deal is, you can sign up for Gemini today in the US. I'm sorry, who is it? It says you can't get it in the UK.
Sorry, Mike. I know, wrong accent. But you get it free for two months, and then it's $19.99 a month thereafter. I'm still at the point, and I need to do it for the show, but I'm not sure that I have uses to put it to for 20 bucks a month. I mean, transcripts, yeah.
Yeah, I mean, we're really at this point where they're all like 20 bucks a month. This is what I've noticed in my short time of being self-employed: every service that's going to solve your problems is $20 a month, and it adds up really, really quick. And that is definitely the case for these AI services like Perplexity. That was part of the reason why I bought the Rabbit, because it came with a year of Perplexity Pro, and I was like, okay, well, then it's essentially like I'm buying a discounted version of this particular AI system and getting some hardware for free. Depends on how you look at it.
But they all cost those dollars. And I guess the question at the end of the day is, does that 20 dollars give you a system that integrates with the things that you're already using so well that it makes the other services not nearly as appealing? And I could see that working for a lot of people when we're talking about Google and its suite of apps that people do rely on.
Is Google doing an effective job of implementing, or rather integrating, these AI services into their products in a way that is easy, that creates less friction for users to do some of these advanced things that they might want to do once they realize they want to? And it sounds like that would be part of where you're at right now. I think it's a challenge that a lot of people have. I know last night on All About Android, Ron Richards was kind of saying the same thing. He got the Rabbit as well, and the Perplexity service, and he's like, I have no idea what to ask this thing. And it takes a while for that to become part of your habit, your ritual: opening this instead of opening, say, a Google search box. Yeah.
And well, I mean, it depends too on what you're used to. The one time in my career that I had an assistant, when I was the founding editor of Entertainment Weekly, I didn't know what to do. Like, they said you had to have one.
And she was very nice and very smart and great. But I was so used to doing things myself. And when I worked for Steve Newhouse at Advance, he's now the chairman of Advance, it's a huge company, they had to give him an assistant to train him how to have an assistant.
Yeah, I think at the end of the day, it's going to be really interesting as we head into this year, especially with Google, because I am an Android user, and I am very curious to see how this develops into their new hardware near the end of the year. But I'd be really curious to see how much of an impact this has. I have to imagine things like this have more of an impact on a user than, say, the thermometer that was included on the Pixel 8 Pro.
You know what I mean? When we're talking about features, this seems at least a little bit more useful. But can we trust what it comes back with? I think at the end of the day, with any of these results, I'm assuming that I need to double-check it, always kind of checking the work of the AI. That's just part of my process.
Only smart. Yep. Yeah. Which we should do with human beings and social media too.
Totally. It's on us. Yeah, right. Exactly. It's not a horrible thing to ask that people take responsibility for things that are really important in their life. And I think there's the desire to relieve ourselves of the shackles of needing to do some of that work. But I think at the end of the day, it's important that we continue to kind of keep our humanity involved in the process.
And we're actually going to talk a little bit about that a little bit later. Guardrails. You put in a couple of stories here about guardrails, and I'm super curious about this. Tell us a little bit about this AI bill.
So, yeah, I'll take the lead on this one. There's an AI bill in California, because politicians gotta politic and regulators gotta regulate, trying to argue that there should be safety regulation for AI, which sounds appealing, right? The bill, from state Senator Scott Wiener, a Democrat who represents San Francisco, would require, according to the Washington Post, companies training new AI models to test their models for unsafe behavior, institute hacking protections, and develop the tech in such a way that it can be shut down completely. I'm reading there from the Post, from a copy of the bill.
Well, that sounds good, but I think it's a fool's errand, because what it presumes is that the model maker can anticipate every bad use that any malign jerk could try to put their program to. And I think it's an impossibility. Sure, you want to eliminate the obvious. You know, Nazis: don't make a Nazi symbol if you're asked to, and so on. But you know that someone's going to fool it even there and say, I'm making a movie about how Nazis are terrible, and I need to do this or that.
And the guardrail might go down, and people are going to see that as a challenge. So these things will be used badly. And to me, it's as if you're saying to OpenAI and Google and so on: if you're responsible for everything that happens with the machine you make, then you're not going to want to make it. Or you're the only company that can afford the insurance for all the liability involved. So it's regulatory capture. And it's like saying to Gutenberg, assure that nothing bad can come off this machine you made.
And I don't think it's possible. Now, I know we could get into a guns-don't-kill-people-people-do argument about tools and so on. But we know what guns are designed to do and what they can do. The point about AI is, it is not general intelligence.
I won't say that. But it is a more general tool. It can be made to do lots of things. That's the point of a programmable machine.
So when I was at the AI governance summit for the World Economic Forum in San Francisco a few months ago, there was discussion about where the regulation should occur: at the model level, at the application level, or at the user level. And at the model level, I think it becomes impossible to put guardrails in. That doesn't mean they shouldn't try. But if we count on the idea that they're going to be able to prevent all possible bad uses, we know today that we will be disappointed, and they will be in trouble, and we're walking down a garden path. At the application level, maybe it's a little better: if I use one of these models to create something, and I make it such that it drives a car when it shouldn't, then I'm responsible.
At the user level, I think, is where a lot of the responsibility is going to have to go. And that was the case when I covered the case of the schmuck lawyer who used ChatGPT for his court filing and got bad citations. And the lawyer's lawyer said to the judge, thank you for showing us the danger of ChatGPT. The judge said, I didn't set out to do that at all. The problem was not the technology.
The problem was the lawyer and the irresponsible use of the technology. Right. Right. So I think that a lot of this is trying to push off responsibility; the same thing happened with social media. Oh my God, Facebook's responsible for everything bad people do there. No, the people who do the bad stuff there are responsible. The problem is that it's at such a scale that government or the companies can't deal with it all, and so they want to find somebody to blame. When I testified before the Senate, Blumenthal said, well, we have to have no Section 230 for AI, because people should be able to sue, as if that's going to solve all of our problems. Somebody does something bad with this machine and you sue them. What does that really accomplish?
You're not really going after the bad actor. Same exact thing with social media. We haven't learned this lesson again. So I just found it interesting that this bill was out there. I got into a discussion about it with a New York Times reporter, who was going to say, wasn't this a good thing? And I said, no, this is, let me try to explain why I think this is the case. And so we see another story here, if we go to this one as well, that the AI companies have agreed to limit election deepfakes, but fall short of a ban. Another story from the Washington Post. Well, I understand why that's happening, because they can't get rid of it. You know, I could do a deepfake of Joe Biden saying something.
I could also do a wonderful video for the Daily Show of Joe Biden doing something nice. How does the machine know what's bad and good? It has no sense of meaning. We know that already.
It has no sense of fact or of meaning. And so they can try to put in some protections and guardrails, but that's going to be really difficult. In the Pakistani elections that just occurred, Imran Khan, who's in jail, but whose party got the most votes, used a deepfake, air quotes, video of himself to thank his voters. And there's hand-wringing about this, but he used the tool to his intent, where everyone knew what was happening. He couldn't make the video.
He's in jail, but he got the message across in a way that was appropriate to a video age. All of this is to say that these are tools, whether they are paintbrushes or printing presses or typewriters or mimeograph machines or the very powerful AI. If we think the makers of the technology are responsible for causing all the problems, it's just as foolhardy to think that they can solve them all. And I'm not trying to let them off the hook, but if you're going to put them on the hook, put them on the right hook.
When I'm reading some of these, and I think you even mentioned this, I can't help but believe in part of this. I believe in the desire to at least make an effort to make something safer, while also recognizing that there's only so much that you can do, that there will always be someone who is fighting to stay ahead of that curve and to do something with it that hasn't been thought of yet. And the thing that comes to mind for me is the security industry.
And I mean, what is the correlation between where we are right now and what has happened with the security industry? Because there are always bad actors out there that are looking to find some sort of an insecurity to exploit. And that kind of feels like the same thing here. Have governments come out in full force on the security side of things in the same way that they are attempting to do with AI? And has that even been successful, I guess, is the question that I'm asking.
You raise an interesting question, Jason, because if the AI company says, OK, we tried to put in guardrails, here's the guardrails we put in, and you're transparent about it, all you're doing is telling bad actors, here's the free space.
Yeah, here, right. Or, your challenge: go around this. Because you've also given me a list of what we shouldn't do, so I'm going to find a way to do it, to troll AI and the world.
We've given you the boundary; now work within, or outside of, those boundaries.
And yeah, that's not going to work very well either, because it's an invitation to trolling, it's an invitation to hacking. So we're going to have to get used to this idea, and we may not like it as a society, and maybe that's a problem as a society, but the technology is here. And there's no going back. You can ban AI, but that's not going to do us any good.
And good luck getting around anywhere with Google Maps without it. So we've got to just be realistic about what this means. And we have to recognize that the Internet is a human network and the people who use AI are humans. And that's where the problems are going to come in. It's not a technological problem. It's a human problem. And it's as complicated as any human problem through history.
Indeed. We also have a couple of stories in here about copyright. I was talking a little bit earlier about some cases, some stories that have been bubbling up for quite a while. And though we haven't seen the end of the Sarah Silverman case, it certainly seems like we've seen the end of most of it: five out of six of the claims in the case are being dismissed, which essentially leaves one remaining claim. What is the remaining claim?
The remaining claim is the direct infringement one. Right. OpenAI wants that to be heard, because they want to have a court rule on this. So everything else is gone, in terms of the six: direct copyright infringement; vicarious infringement; violation of the Digital Millennium Copyright Act by removing copyright management information;
there are a bunch of semicolons in this list; unfair competition; negligence; and unjust enrichment. OpenAI asked to dismiss all counts but the first, that is to say, direct copyright infringement. And that's the main complaint: is reading allowed? Now, I think that's going to hinge upon how you acquired the thing to read. If you stole it out of the bookstore and read it? No. If you borrowed it from the library? OK. If you used Books3, and they just got a purloined copy...
The problem there is not so much the copyright, but the acquisition of it. Hmm. You know, the other issue, so, I'm doing a lot.
I'm doing a paper about the California Journalism Preservation Act for the California Chamber of Commerce. And I've been doing lots of research. I have a whole bunch of piles of printouts, things I'm going back through. And I'm reading back through the whole history of radio and newspapers. And newspapers hated radio, hated it, and tried to argue that they owned the facts, and to some extent they succeeded: the Associated Press did, in the hot news doctrine. And so we're replaying that argument right now.
So the question is: does OpenAI, do AI makers, have the right to read and learn stuff, and under what circumstances do they? I'm holding an event in New York with the Common Crawl Foundation, whose Rich Skrenta was our first guest on the podcast, on April 30, that will explore these questions. And they are questions still. We don't know. This is new territory. But it's interesting that the courts so far, and things have turned out very well for AI on this path, have said that they're not liable so far for reading and learning from things. What they put out may be a different question. How they acquire stuff may be a different question. On the other side of the coin, they've also been told that they can't copyright the products of what they do, because they are produced by machines, not humans. However, in China, we see a different precedent. Yes, it just occurred: a Beijing court ruled that AI-generated content can be copyrighted, which is amusing for Americans, because China ignores American copyright. So the AI has more rights in China than American producers do.
But we'll put that aside for another day. What it says is that courts all around the world are just starting to grapple with this and deal with the questions of ownership, and rights to learn, and authorship, and responsibility. It goes back to what we said before, Jason, about whether the model is responsible, or the application is responsible, or the user who uses it is responsible. The same thing happened with print, where at first the printers were held responsible, and they were beheaded or behanded for what they had printed. But along the line, the author became responsible. And Foucault argues that's when we had the creation of the author. So in AI, who's going to be the responsible party for reading, for creating, for learning? These are all open issues, and fascinating, and just why we're doing this podcast.
Yeah, absolutely. And this moment in time. Yeah, I think it's interesting. The plaintiff in that Beijing court case created an image using Stable Diffusion, then sued a blogger for using that image without permission in a post on Baidu. And the court essentially said all the plaintiff's prompts and parameter adjustments constituted an aesthetic choice and personalized judgment, and that the image was protected due to its originality and the intellectual input of the human creator.
And yeah, that's the thing. There's going to be a lot of light shone on examples like this in the coming months and years. As far as, what does it mean when you create something with AI? Is it original enough? Is it enough that I as a human didn't draw that picture, but I came up with words that told the system to create that picture, and then I came up with more words to tell the system to make changes to that picture, in an effort to get it to a final stage?
That was very different from where it began, all because of the words I spoke or typed or whatever. Yeah. Is that enough? Is that transformative enough? Does that constitute an original thing that I've created?
We'll see. And who gets the credit? The programmer of the AI, or the person who prompted it?
Yeah, yeah, indeed. Or no one. I know we're running close to time for you, and some of these stories we might not get to, but I definitely want to get to this op-ed in the New York Times that you put in here. I'd love to hear you set this up, because I thought it was fascinating.
So this was a nice way to wake up this morning. I've been working on a new degree program, which I haven't really announced or done anything about yet, but basically in Internet and AI and humanities, arguing that we have to bring the humanities and social sciences back into the discussion, that the Internet is not a technology, it's a human network, and that the issues we have to deal with are human issues.
I said it already in the show. So we need other disciplines that know about this: anthropology, ethics, history, sociology, community studies, design, arts, and so on. So I woke up this morning and I read an op-ed in the New York Times by Aneesh Raman, who's a vice president, I think, at LinkedIn, a workforce expert, and Maria Flynn, who's the president of Jobs for the Future. They argue very persuasively, and I thank them very much, that especially as AI can program, all of this emphasis on teaching everybody to be programmers and to get computer science degrees may not be strategically the wisest way to go, at the same time that humanities degrees and programs are suffering, as college after college gets rid of the humanities. And it's the wrong way to go, because as they say in here, the technology is going to be the easy part. The hard part is going to be dealing with humans, is going to be communications and relationships and understanding all that. So how do we train the future leaders of the Internet, which means the whole world on it?
I think we've got to emphasize humanities and social sciences. And I was really glad to see this. And what's great about it is that Aneesh Raman has the data from LinkedIn to show what skills employers are asking for, and communication, he said, is already the most in-demand skill across jobs on LinkedIn today. And even experts in AI are observing that the skills we need to work well with AI systems, such as prompting, are similar to those skills.
And Jason and I, in one of our early episodes of this show, talked to an AI executive who said that English is the most important programming language in the world today, and I hope other languages join that. So I think that we move from a technology-based world to a humanities-based world. But everything we're doing, all the educational priorities, the policy priorities, the funding priorities, still stems from a post-Sputnik world.
And the irony to me is that AI reverses Sputnik, and the Internet reverses Sputnik, and says: it's not all technology, people. We've got to do other things. So I commend you to this New York Times op-ed from today. Thanks for the chance to say that.
Yeah, absolutely. I was inspired by it because, like our interview with Sven Sturlfalo on episode two, the op-ed really did a good job of exhibiting the hope and optimism around AI serving us instead of replacing us. And I think that's something that we need more of.
If we look at it through that lens, then we get to a really cool place. Also, when I was reading this, it talks a lot about these human skills that are required, that are demanded of us, and that are in some ways honed by the work that we're doing with AI right now. And it reminded me: the more that I work personally with LLMs to do certain things, the more I realize that I need to treat that conversation as if I'm talking to a person in order to get the results that I'm looking for. And what really occurs to me about that is, by doing this, I'm actually learning how to manage other people, to a certain degree. It's like a tool that allows me to explore what it's like to work with someone.
Now, I realize it's a machine, not a person, but some of the same tools I use to get that LLM to do what I need would be used if I had a person as my personal assistant, a person who is here to help me, who is waiting for me to give them clear communication about what I need, where I have to effectively translate my needs to that person, to that human. And this is a tool that's in some ways helping me practice those skills, in that natural English-language approach. So that's what kind of inspired me about it. I thought it was a really great read.
I want to add this quote that they include from Minouche Shafik, now the president of Columbia University: in the past, jobs were about muscles. Now they're about brains. But in the future, they'll be about the heart. I hope.
I like that. I think we need a lot more of that. We need more heart in this world. Jeff, so great to do this. And actually, I've got to say, this was a nice change of pace. I've enjoyed the interview shows, but I was really looking forward to doing this with you and just talking a little bit about the news.
And I don't know about anybody else; you can certainly let us know. We actually do have an email address that you can send emails to. If you go to aiinside.show, there is a place on the page for you to send some feedback.
And I think contact@aiinside.show sends an email to us as well. Let us know what you think, because this episode was a little different. It was a little bit more of a discussion around news items. And I don't know.
I really enjoyed it. So thank you. I did too. I think there's so much that's deep to get into. And what was great about the way we did it: I mean, we threw stories in, as I said, and I'm not organized and Jason's organized, so he gave it an arc, because there are these issues that kind of cross these events. And we're going to come back to them again and again.
Things like copyright and responsibility and safety and trillions of dollars are all going to come back into this. So I'm really glad we're doing this show, Jason. And I think it's fascinating stuff that I learn from every time. Me too.
Me too. Selfishly, that's my overarching goal here. Of course I want to do a show with you, and I want to do a show that people enjoy. But at the end of the day, the most important thing for me is that I want to know more about artificial intelligence, and this is the vehicle with which I'm doing that.
And along with that, I learn so much from you, Jeff. What do you want to plug? Where do you want people to go to see what you're up to?
I guess just gutenbergparenthesis.com, where you can get codes for both my books, The Gutenberg Parenthesis and Magazine.
Excellent. Thank you. This has been a lot of fun. You can find me at yellowgoldstudios.com. That's just a link that actually takes you to the YouTube channel, where you can find the video version of this show. I kind of have the next phase of my plan in the works, where I'm going to start doing product reviews and stuff directly for the YouTube channel. So yellowgoldstudios.com, or you can just go on to YouTube and search for Yellowgold Studios.
All one word and you will find it. And you'll see some of my reviews coming up. I've been messing around with my studio and getting things in place, and I'm really excited for it.
So check that out. We do the show usually on Wednesday. This week, we're doing it on a Wednesday, normally at 11 a.m. Pacific, 2 p.m. Eastern, which you can watch live if you go to that YouTube channel that I just told you about. And next week, however, we do not have a live show.
It's going to be pre-recorded. I'm going to be out of town, like I said earlier, for the week, but it will publish on Wednesday. You'll get it in your podcast feed, and there will be a video version on YouTube as well. You can subscribe at aiinside.show. You can support us directly by going to patreon.com/aiinsideshow. We really appreciate it when you do that. You literally are helping us to keep this show rolling and come out of the gates strong. And then just go to your favorite social network and do a search for AIinsideshow and you will probably find us on those platforms. If anything, you'll find Jeff and me directly.
And that's really all there is to it. Thank you so much for watching and for listening to this episode of AI Inside. Jeff and I will see you next time. Take care.