Jason Howell and Jeff Jarvis discuss Elon Musk's legal battle with OpenAI, Nvidia's CEO on making programming obsolete, calls for responsible AI innovation, and more!
NEWS
- Discussion of Elon Musk suing OpenAI for breach of contract over its stated mission of openness and building responsible AI
- OpenAI releasing emails from Musk revealing his desire for control and more funding
- Anthropic rolling out new versions of Claude LLM (Claude 3 Opus, Sonnet, Haiku)
- Nvidia CEO Jensen Huang's statement on AI making programming unnecessary for everyone
- Americans for Responsible Innovation launching and calls for regulation
- Open letter from researchers for a "safe harbor" for independent AI evaluation
- Trust in AI companies dropping according to Edelman report
- Bad bots: Issues with Amazon's Rufus shopping bot and H&R Block's tax advice bot
- Amazon's $1 billion industrial innovation fund for AI and robotics
- Humanoid robot developments from companies like Figure and Magic Lab
This is AI Inside, episode seven, recorded Wednesday, March 6th, 2024. The AI Chicken and Egg Problem.
This episode of AI Inside is made possible by our wonderful patrons at patreon.com/aiinsideshow. If you like what you hear, head on over and support us directly and thank you for making independent podcasting possible.
And hello to everyone. Welcome to another episode of AI Inside. I'm one of your hosts, Jason Howell. Trying out the downstairs studio for a change so I don't look like I'm like a hostage in the corner of a bedroom, which is basically what I am in my normal podcast studio upstairs.
Joining me as always, Jeff Jarvis. How you doing, Jeff? There you are, in your nerd cave.
That's right. This is a much more pleasant cave to be stuck in. And I wish that I could do all my podcasts down here, but it is totally open to the entire house.
And at least the recording of this show isn't when the house is filled with kids and dogs and dinner-making and everything. You know what I mean? So I have to pick and choose. But anyway, it's good to see you. We're going to fly solo today, just the two of us, talking about some of the big news this week, of which there has been a lot in the world of artificial intelligence. Turns out every single week is a big news week in AI right now. That's just how fast things are changing and moving in this industry and in this field. Just real quick, I'm continuing to see the reviews. I guess I'll continue mentioning this until things slow down.
But we love those reviews because it's really helping people to discover AI inside. So thank you for doing that. And then of course, if you're supporting us on Patreon, we really, really, yeah, we're so appreciative of that.
Patreon.com/aiinsideshow. That is how you can support us directly, like Martin. Martin was, I think, possibly the fourth or fifth person to support us right out of the gate.
When we launched the show, like a month and a half ago. So Martin, thank you. And thanks to everyone who supports us each and every month. Patreon.com/aiinsideshow, all one word.
Put that all together. All right, with that, let's start talking about some news, because right away when I saw the news about Elon Musk suing Altman and OpenAI for breach of contract, I was like, well, I obviously know we're going to be talking about this to some degree this week. Musk filed this suit saying that OpenAI's stated mission was one of openness and building responsible AI, and it puts crosshairs on the company at least in part because of its deal with Microsoft, which Musk says puts it in direct opposition to those goals. It's obvious, he says, that OpenAI is no longer in this for the good of humanity.
They're in it to make money. Yeah, I mean, there are more details here. But when I heard about this, the more I looked into it, the more I felt two things. One, in some ways I hate to say it, but I think Elon Musk has a point to a degree here. And two, I don't think anything's going to come of it. I don't think there's going to be any sort of win on Musk's side, if a win means OpenAI is found guilty of something. Maybe there's a win on Elon Musk's side if there's a bunch of discovery that reveals a lot of secrets about how OpenAI does business.
So OpenAI has released a lot of emails from Musk, which, guess what, can be embarrassing for his case and hurt his case. That's true. Yeah. So they're saying that they realized at some point that they just needed to have more resources.
That's what they're saying in the responses: that's why we changed course. Companies change course. They say that Musk wanted to merge OpenAI with Tesla, which doesn't exactly sound like staying open. They say that Musk wanted to be completely and utterly in control of OpenAI.
So they put up actual emails from Musk to Greg Brockman and Sam Altman. I'm trying to read here: "Sounds good, assuming the adjustments. I'd favor positioning the blog to appeal to a bit more general public. There's a lot of value in having the public root for us to succeed. We need to go with a much bigger number than 100 million to avoid sounding hopeless relative to what Google and Facebook are spending. I think we should say we're starting with a 1 billion funding commitment.
This is real. I will cover whatever anyone else doesn't provide." Well, if you need that kind of money, is that for a nonprofit?
Is that for a research project? Or were there in fact ambitions there to be bigger? You know, I was saying — I watch MSNBC a lot, because that's how I am — and I'm thinking, they have all these legal correspondents these days.
It wasn't that long ago that you hardly ever saw a legal commentator on the news, and now every day they have three, four, five on all the time. And who would have thought that in AI, a lot of what we're going to be covering is suits. Yeah.
You know, we're going to see a lot of suits that way. There's more news later with The New York Times versus OpenAI. There's Elon Musk versus OpenAI.
OpenAI is the darling and also the target, so everybody's going to go after them. Yeah. Exactly. So, you know, the weird thing — we've talked about this before, Jason, with some of these stories — it's really hard to guess who to root for.
Well, yeah. And especially with something like this. At this point, over the past year and a half since Musk took over Twitter, there's been this drumbeat of Musk bad, Musk bad, and I will completely agree — I feel like he's made a lot of decisions that leave me scratching my head. I'm a Tesla Model Y owner, and there were times, not too long ago, when I'm driving around in it thinking, God, I feel weird driving this, because there's a lot I don't support there.
Yet at the same time, at least to a certain degree, I guess he's made some points that make sense as far as OpenAI and its open-source kind of charter. But I think these other details really point out the hypocrisy of this. Like, who does it serve? Who does the information serve, at what point in this? Everybody's guilty.
You know, at the same time we have Elon Musk meeting with Donald Trump and maybe propping him up, and it just makes the head explode that right now we're not talking about the technology at all. We're talking about personal politics between all these rich boys. And I'd rather get back to the machines and what they can do for us. But this is the news we have, and it's the news we've got to talk about, because it's going to have an impact.
And I wonder whether — I also see stories out there saying, oh, OpenAI has a lot of legal problems, who knows what's going to happen, is their honeymoon already over? Who knows? That's a beauty. Yeah.
No, well, there's always a great big target to take down. There's always a, oh, you're a little too successful, so now we've got to come after you and knock you down a couple of notches. What about Musk saying — because I thought this was kind of interesting, an out-there statement to make — that OpenAI has reached AGI, artificial general intelligence, with GPT-4, I think was the point he was making. I just thought that was kind of thrown in there because, in so much media, AGI is used as this thing to be frightened of. It's a hot-button keyword to attach to the case so that people become even more attached to the outcome. Yeah, and panic about it, I guess.
It goes to what you talked about in previous shows about TESCREAL and long-termism and all that. It's part of their belief; both Anthropic and OpenAI say their goal is AGI. I still think that AGI is complete BS. And you know, here's the weird thing. Why are we making us and our brains the standard for computer intelligence?
We are the most intelligent beings that ever existed or ever will exist.
They say the machine will be smarter than us, but of course that makes them even smarter than the machine, because they made the machine that's smarter than us, so they're all-powerful. But this idea that it's going to be able to do what we do is an odd standard. You don't say that you're going to make a car that has to walk. You make a car that goes somewhere, and it does it the way it does it. And of course — we're going to get to the robots later, and those do walk, but still...
Yes, we will have walking humanoids. So the whole notion of AGI is this weird anthropomorphic egotistical goal. And I think AGI is complete BS. What is different, and we talked about this last week, I think, is that it is a general machine. It is not purpose built to do one thing. It is not a gun that shoots a bullet.
It is not a car that gets you from here to there. It can do a lot of different things and that's what freaks people out and that's where the power is. But that's not... And even if it gets to the point where it can add, which it can't do now, even if it gets to the point where it can do facts, which it can't do now, it's still going to be limited by what it's told to do and how it's told to do it. So it's just odd to me that that's what drives the geeks as their primary goal. So that both Anthropic and OpenAI say that they are building AGI, damn it.
Well, that's their ultimate goal, because it's out there and it's something, I suppose, to work toward. And it is, in itself, proof of something that a lot of people feel has been unachievable up until now.
And what does that do? If we can achieve AGI, then that means that we are supreme. We are the ones that deserve all of these things as a result. Well, you mentioned Anthropic, and I happen to be a pretty solid user of the Claude LLM. I started using it when I was at TWiT — and actually, when I think about it now, Claude is really the LLM, the generative AI, that I learned on. It was the one I first used when we were experimenting with AI for things like show-notes summaries while I was at TWiT. And so it's just the first one that I really used with any kind of depth and amount of focus, really trying to get in there and understand it.
So I suppose I have a soft spot in my heart for it just because it's familiar, but Anthropic rolled out three new versions of its Claude LLM. Claude 3 Opus, which is its most powerful, then there's Sonnet, which is the one that if you're using the free version on the website, that's what you're now using. And then Haiku, which is the least powerful.
I didn't look up to see where Haiku appears or shows up, but you can upgrade, of course, to the pro version to get access to Opus. And the company is basically saying the new version is two times more likely to present correct answers. I think that's funny. It's probably the truth, but whatever — we'll never get to 100% correct answers; over time we're just going to keep perfecting the thing, and it'll never be perfect.
If it gives correct answers only three out of 100 times, then two times more right answers is six.
Correct. Yeah, we need some more context than that, don't we? Absolute numbers would be helpful. That would be great. And maybe it was in there and I completely overlooked it, but there you go. It can also analyze images, so now it's capable of doing that. It cannot do what a lot of these other systems are now able to do, which is generating images. They say at this time there's no demand for it.
Their strengths lie in other places, and generating images isn't one of them. So yeah, I don't know — of all the systems out there, is Claude one that you've spent any time with?
No, I haven't. And I'm derelict, being that I'm on an AI show. I've got...
There's too many of them. I mean, at the end of the day, there's so much of this stuff out there. Like, unless we lived and breathed it, we couldn't possibly work with every single thing, you know, to any deep extent.
Yeah. But I think it is important to start to compare and contrast across them. Do you notice any difference in them? Because the interface — there's no difference in interface, right? You ask, it answers. It's a box.
So how could you tell if you were blindfolded? Yeah. If you asked the same question of GPT and Gemini and Claude, is there anything that would signal the difference among them? Which is also to say, is there anything that's special to the brand and the capabilities? Yeah.
I feel like I could determine it, even though they probably would all produce similar things. The real sharp example that I have is: take a transcript of a podcast, export that to a text file, import that into the LLM — you know, attach it as a file — and basically say, you are a podcast producer and you want to create detailed show notes that chronologically list all of the major topics of this episode, in order, with bullet points. And then also you have to come up with some really broad kind of topic keywords, because I hate keywords.
It's like the bane of my existence, even though they're very important — tags, keywords, whatever you want to call them. And so I would put that into ChatGPT, and I would put that into Claude. I haven't really used Gemini that much, to be honest, so I should test that out. But in general, my experience has been that Claude has been more comprehensive. ChatGPT has been more likely to give me kind of a shortened output — like, okay, broad strokes, I guess it's all there — but I want something that's detailed, that really goes into it without going so far that I'm creating a manual or something.
But, you know, something that's still a summary, but keeps it succinct yet thorough. And I feel like Claude did that almost all the time as expected. Maybe it was just that I got so used to how to work with Claude that I was able to get better results.
I'm sure someone who is really versed in ChatGPT could probably get something similar, if not better, than what I was able to come up with. But that was my experience.
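For anyone who wants to try the workflow Jason just described, here is a minimal sketch using the Anthropic Python SDK. The file path, prompt wording, and model choice are illustrative assumptions, not Jason's exact setup.

```python
# Minimal sketch of the show-notes workflow described above: load a
# podcast transcript and ask Claude for chronological show notes plus
# broad topic keywords. Assumes `pip install anthropic` and an
# ANTHROPIC_API_KEY set in the environment.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("episode_transcript.txt", encoding="utf-8") as f:
    transcript = f.read()

prompt = (
    "You are a podcast producer. Create detailed show notes that "
    "chronologically list all of the major topics of this episode, "
    "in order, as bullet points. Keep them succinct yet thorough. "
    "Then suggest a broad set of topic keywords/tags.\n\n"
    f"Transcript:\n{transcript}"
)

message = client.messages.create(
    model="claude-3-opus-20240229",  # Opus: the most powerful Claude 3 tier
    max_tokens=2000,
    messages=[{"role": "user", "content": prompt}],
)

print(message.content[0].text)  # the generated show notes
```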
What's interesting to me in this conversation — I'm going to go back to the question of brand. If you read enough of The New York Times versus the Washington Post or the Wall Street Journal, you can tell, if they cut off the bylines and the headlines, which is which. There's a brand identity that comes from the voice and the coverage and the judgment and that kind of stuff. And there's Coke versus Pepsi, right?
There's a taste. We could go on and on about what makes a brand. I wonder how brandable generative AI is going to be. They're all going to try to act like a human being answering your question credibly, which they're not there yet. And as much as this becomes basically OEM, white-labeled, behind the scenes — we have stories later about companies using chatbots, and you don't really know which model is behind it.
And there's nothing to tell you that. And so I wonder what makes one company special versus another among the geeks. It has more compute. It does more of this. It's three times more of that. Okay, fine.
Data set, you know, all that kind of stuff. Right.
But to us mortals, I wonder whether it's all going to become kind of mushed together. Yeah.
Well, and it's interesting that you mention that, because I've talked on previous episodes about having a subscription to — I can hardly say it — Perplexity Pro. And Perplexity is, in a certain sense, kind of like your Trillian of AI. It's got a little Claude. It's got a little ChatGPT.
It's got Stable Diffusion XL; DALL-E 3, I think, is in there. And so you can use it as a conduit for all these different things. And because it's a premium service that I'm paying for — for example, with Claude 3 Opus inside of Perplexity Pro, I think I get something like five answers per day using Opus — I get access to some of these premium things. But in using it — this is where my mind is at right now — I get better results using Claude on the site than I do using Claude through Perplexity. And what is the difference there? I'm setting it to that model.
And yet I get better output when I go to the site versus getting it through this thing. Like, what is that identity? I think you've nailed a big question here: what is the identity that I'm looking for, and why is it or is it not delivering on it?
Well, the loyalty that will probably come at some point is you'll sign into Claude and Claude knows you and it learned what you liked the last time and the last time. Yeah. Right.
And so you're back on a Wednesday afternoon. Do you want me to run those show notes for AI inside?
Exactly. That would actually be really cool. Right. Yes, here it is. Do it again. You know, so I think there'll be an effort to create switching costs. Yeah. I'm surprised that hasn't happened much. We had a story a few weeks ago that OpenAI was going to start remembering you and what you did, which also could be freaky.
Like, oh, I forgot I asked that embarrassing question. But these companies right now are not really consumer companies, and at some point it's going to matter how we as consumers react to them. So anyway, I find this all interesting. That's what I love about this show: we pull up from the ground a bit and look at the implications in interesting ways.
Yeah, indeed, indeed. Now, this next story — I saw that you put it in there, and I was really happy that you did, because I had seen it kind of in passing and meant to put it in there and completely forgot.
So when you put it in there, I was like, okay, spot on. So NVIDIA CEO Jensen Huang — there's a video making the rounds right now from an interview, I think last month, at the World Governments Summit. And he delivers a statement that I would say is, for a lot of people, pretty controversial. I think to a certain degree he's kind of spot on. So I'm going to attempt to play this and hope that the system works for me here.
I want to say something and it's going to sound completely opposite of what people feel over the course of the last 10 years, 15 years, almost everybody who sits on the stage like this would tell you it is vital that your children learn computer science. Everybody should learn how to program.
And in fact, it's almost exactly the opposite. It is our job to create computing technology such that nobody has to program and that the programming language is human. Everybody in the world is now a programmer. This is the miracle of artificial intelligence. The countries, the people that understand how to solve a domain problem in digital biology or in education of young people or in manufacturing or in farming, those people who understand domain expertise now can utilize technology that is readily available to you. You now have a computer that will do what you tell it to do. It is vital that we upskill everyone and the upskilling process I believe will be delightful, surprising.
Now, he obviously has a lot to benefit from this, being the CEO of NVIDIA, right? So we know where his interests lie, I think, to a certain degree. But yeah, what it reminded me of was the interview that we always reference — Alex Babin, right? Alex Babin. There we go. Yes, that's Babin, that's right. From the old version of AI Inside, within the Club TWiT walls, where he said English is the programming language.
The hottest programming language on planet Earth right now is English. Yeah.
Yeah. It really does. And as I've said on the show a couple of times, I'm working on developing a new degree in the internet, AI, and the humanities. And so I sent this around to all the people I'm working with, because it's kind of a beautiful quote. And it makes us rethink what skills we need as students, as people in jobs — what skills and what context should we be giving people? I've argued that in the world of the web and social media, those are not technologies but human networks, and we need the human skills. In AI — well, it's a technology, obviously, but it's one that speaks our language. And so we can command it with our language, and it can respond in our language. And that certainly starts with communication and articulateness: the ability to say what you want becomes an important skill. But also knowing when it's wrong, when it has no context, knowing where the bias is. Things like history and anthropology are useful; ethics, and how you use it, is important. So this really becomes a jumping-off point, I think, for rethinking our relationship with the machines.
Yeah. And I mean, I can understand the controversial nature of making a statement like this, because essentially the long-term view of it is that coding, for people who code, is a skill that eventually becomes unnecessary — that's, I think to a certain degree, what Jensen is saying here. Because we will have a system built out strong enough to do the things that coders can do, given that the person on the other side of the microphone or the glass understands how to ask the right questions. And that becomes the way we inform the creation of these things. And it really seems — I don't know, maybe it's hyperbolic to assume that that's the destination.
No one knows 100%. These things are really good at coding, and they're not perfect. They're not perfect with anything.
But they've shown a lot of promise. It really seems to point to the possibility that this could be the way coding happens in the future. And there's a whole world built around the fact that we as humans are the ones who do this particular job — many, many people do this for a living; this is their livelihood. And this has the potential to threaten that, or it just encourages new skills. But someone's lost along the way, and I think that's the real challenge.
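As a concrete illustration of what Huang and Jason are describing — the English sentence as the program — here is a minimal sketch, assuming the OpenAI Python SDK; the model name and the prompt are illustrative, not anything shown on the show.

```python
# A sketch of "English as the programming language": the spec is a plain
# English sentence, and the model writes the code. Assumes `pip install
# openai` and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

spec = (
    "Write a Python function that takes a list of (product, price) pairs "
    "and returns the three cheapest products. Include a short docstring."
)

response = client.chat.completions.create(
    model="gpt-4",  # any capable code-generating model would do
    messages=[
        {"role": "system", "content": "You are a careful Python programmer."},
        {"role": "user", "content": spec},
    ],
)

print(response.choices[0].message.content)  # the generated function
```

The domain knowledge — knowing what to ask for, and whether the answer is right — stays with the person, which is Huang's upskilling point.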
Yeah, it's what I heard at the World Economic Forum event that I've mentioned multiple times: we have to recognize that the impact is going to be uneven. Some will benefit more than others, and we have to account for that. It's not as if we can say the machine is all good or the machine is all bad. Of course it's neither.
And it's going to affect some jobs and some senses of self-worth more than others. But with the general skills here, I just celebrate the idea that it speaks our language. There's another story that I put in here, kind of along the same line: Vinod Khosla, the renowned venture capitalist, wrote in The Information about how AI will change our relationship with computers.
So, same thread here. And what he argues is that not only can we speak to it in our language, but the revolution is in how apps will adapt to us. Quoting here: "No longer will we need to learn to navigate through apps like Uber or complex systems like SAP or Oracle." Thus far, we've always adapted to software, learning its intricacies — which reminds me of anything you have to do with Microsoft; I always think I'm getting into Bill Gates' brain, remembering layered menus so as to communicate with the machine. So it flips the other way, he says: not only are there new types of hardware, like the Rabbit, that are designed for different kinds of interaction, but the machine adapts to us. Now, there's a risk in there, because as we've seen when people make the machine do bad things, the machine is told to do what we want, and so it tries to give us what we want. That's the only thing it knows.
Unless there's a guardrail in there where it says, I can't do that. So it really mirrors our desires and our own egos. There's something I quoted in my next book about the reverse Turing test: it's not so much that the machine fools us into thinking it's human, but that the machine makes us reveal our humanity. And so I think this is all really interesting: we had this notion of a set machine that was designed to do something, and we had to figure out how to use it.
Whether that was a steam engine or an automobile or a telegraph key or whatever, right? It had its rules. Now it turns around and it'll do what we want to a fault. That's really interesting to think about how we fundamentally reimagine our relationship with technology.
...that they don't otherwise have, they're going to use it. Whether that's an individual doing a task more easily or, more likely, a company saving money and maybe getting rid of jobs. It is going to happen. The story that I'm going to tell in the next book I'm writing, after "The Web We Weave," is a history of the Linotype, which most people don't know what that is. It was a machine that replaced setting type one letter at a time with a line at a time. And I'm going to bore you all in the future with this machine, because I love the machine and I love the story. One element of the story was the typesetters, the people who for four centuries had set type one letter at a time. Did I say this on the show before?
I know I've heard you talk about it before. I can't remember if it was on the show. You should always stop me—
I always tell my students that. It's all right. I hear you — I'm fascinated. Don't just think it's old Jeff going on. All right, so I'll repeat it to this extent, just to tell it: the typesetters, when the machine came in, realized it was inevitable. There was no stopping it. So they said, we have to be in charge of it. And they lost jobs for a decade or two, and then they saw that printing would explode — and with it, they got far more jobs than they'd ever had or ever dreamed of before, because the industry got over the hump. Yeah. And they were wildly powerful and prosperous for half a century.
That held until the 1960s, when cold type came in. This time, they forgot their lesson. This time, they tried to fight it, and they lost. They killed six newspapers in New York, and lots of jobs. Rupert Murdoch, in Wapping, moved secretly to a whole new production structure in London and got rid of all of the old jobs — thousands of jobs — along the way, because there was that fight. So the question for people today is: how can I be in charge of this technology? How can I take the domain knowledge I have, which is what Huang said, and make that the value, and use the tools well to do that? Yeah.
Yeah. And we have enough clues pointing to this. How do we react and respond to the clues? Like, I just pulled up a Coursera course that's all about prompt engineering — kind of an introduction to prompt engineering.
And I'm thinking, you know, at this point it would probably behoove me to start doing some formalized educational kind of exercises around this stuff, to go from a passing knowledge slash fascination with it to really getting a better sense of how these things work and how the field is advancing around it. I don't know. You seem to smirk at that.
No, no, no. I don't know. Did I talk on the show last week about having gone to a meeting with a bunch of professors, and an English professor who was talking about prompts? I don't think I did.
I don't think so. Okay. So I went to a meeting with a bunch of professors at the university where I'm going to be working — which I haven't announced yet — on this new program. And we were talking about all of this, and this one English professor was just wonderful, because she said she had a lesson plan for that day with her class. It was just when Sora came out, and she was fascinated by the prompts, because OpenAI released the videos plus the prompts.
So she told the class, we're going to write prompts now. And they all got into it. They just loved it, and they were recognizing it as a creative activity. They were recognizing that they had power and agency over the machine. They weren't used to some of it yet, obviously, but they saw what's possible. They now know this kind of grammar of machines, and they recognized the skills they needed to do it.
And so the English teacher tells them: it's English, and you're in an English class, and it's for good reason now — for even better reason now. So what I'm thinking, Jason, is I don't know that you really need education in prompting.
You're an articulate man. You know what you want. I think this is exactly part of what Khosla was saying. You don't need to adjust to the machine anymore. You have the power to make the machine adjust to you. And that's kind of fascinating. Obviously, there are tricks and there are ways to do it.
There's ways to get around guardrails and all of that as well. Yeah. But if you know what you want well enough.
Yeah. And if you're not being completely unrealistic — like expecting the machine to know facts, and who would expect that? — I think you can do most everything. I would bet that you will be showing us all how to do it soon enough, rather than taking courses in it.
Well, I appreciate that. Yeah. Cool. I mean, hey, that would save me a lot of work. I will say, before we move on here, that there are some times when I see people's prompts — how they do these really complex, complicated things — and it is a language in and of itself.
Yes, it's English, but it's like a programming language within the programming language, where they've realized certain shortcuts or certain shorthand words that they're privy to because they've run a million tests on these things to try and really get the sharpest output possible. And, you know, as with everything, it goes as deep as you want to go. And there are people who take it real deep.
And again, if I said this on the show before, stop me, because I've been talking to friends about it. Gina Chua, who is number two in editorial at Semafor, was telling me a few weeks ago at an event that she used — I think it was ChatGPT — to make a mechanical Turk. She was getting rid of the people of Mechanical Turk, having it do a task of looking at something and making a judgment about it. And it was about identifying hate crimes, because there's no standard language; there's a lot of variance in the language, there are subtleties in it. And she spent, you know, a day or two going again and again and again, testing it against the data she had, until it got good at some real subtleties — about, you know, a circumstance where if someone is of a certain race and these circumstances happen, it's a hate crime.
But if it's a different race and these circumstances happen, those would be irrelevant — and it recognized that, right? And that came from her ability to prompt, and she's gotten really good at it. She's doing lots of stuff.
We should have Gina on the show in the future. So you're right. In a sense, if you use the tricks, you're adapting the machine. But in the long run, I still think it'll adapt to you.
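A rough sketch of the iterate-and-test loop Jeff describes — using an LLM as a classifier and scoring it against examples you already have human labels for. The prompt, the data, and the model choice here are hypothetical placeholders; the real work Gina Chua did was refining the prompt round after round.

```python
# Sketch of prompt-and-test iteration: ask a model for a judgment on each
# labeled case, measure accuracy, then refine the prompt and repeat.
# Assumes `pip install openai` and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Human-labeled examples you trust (toy placeholders here).
labeled_cases = [
    ("Incident report text A ...", "hate crime"),
    ("Incident report text B ...", "not a hate crime"),
]

PROMPT = (
    "You are reviewing incident reports. Answer with exactly "
    "'hate crime' or 'not a hate crime'.\n\nReport: {report}"
)

correct = 0
for report, label in labeled_cases:
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[{"role": "user", "content": PROMPT.format(report=report)}],
    )
    answer = response.choices[0].message.content.strip().lower()
    correct += int(answer == label)

# If accuracy is low, rewrite PROMPT — add definitions, edge cases, and
# examples of the subtleties — and run the loop again.
print(f"accuracy: {correct}/{len(labeled_cases)}")
```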
And that will be the end game: that you don't have to speak a syntax. Yeah. Right. Right. It's not, as in the earlier discussion, that the goal is an AGI to replace us. The goal is to have it do what we want, when we say we want it. Yeah.
To work the best with us that it possibly can. Yes. Yeah. Agreed. Agreed. I am trimming things down just a little bit for time, but we have Americans for Responsible Innovation, which launched on Wednesday.
You put this in here, and I'm curious to know your take on it: a bipartisan group looking for ways to regulate AI, informed by how other industries with similar potential safety risks have done this. Which actually directly reminds me of a conversation we had on the show a few weeks ago, where I recognized, I think in that episode, that there is a need for scrutiny around AI — that maybe it's not being done appropriately or whatever, but the need does exist. We shouldn't just 100% be like, all right, AI, do whatever.
We don't care what happens. There should be some sort of scrutiny about this. But are there other industries against which we can compare that scrutiny of AI, to make it more responsible or more effective? I think the comparison I made was with the security industry. And this kind of reminded me of that. What are your thoughts? No — so, this is a bunch of people.
I think we see this kind of knee-jerk reflex toward regulation: surely we must regulate this, be careful. Well, everything we all do is regulated already. Let's start there. We're all under laws. True.
And it's when the laws are insufficient to the task that we consider adjusting them, and we can only do that on the basis of actual harm, actual data, and research and experience. So I think it's a little early, in my view. I'm not a libertarian — I may sound like one for a minute.
I think it's a little early to think we have the regulatory regime to keep us safe in AI. So this is a bunch of people, I'm sure very smart, who came together.
Erik Brynjolfsson is one of them, among the advisors, and a former staffer for Senator Klobuchar is here. So I think we'll see some legislation coming out of this thinking. They say they have the path for policy recommendations, and they want to start an AI auditing oversight board, modeled after the public company accounting oversight board — which sounds okay, but oversight of what? Under what standards?
Under what rules? They want to establish an AI suppliers group, like the nuclear suppliers group. They want to establish federal acquisition regulations. They want to increase funding for the Commerce Department. So it all sounds — Nutrition labels. Nutrition labels, right? What? It contains sugar and lye. So, you know.
Yes — 3% of your daily lye quota is contained here.
So it has all of that in there. And then it has these lines that, again, may sound okay to some people, like this one, which drives me nuts. One of the recommendations: allow content creators to quickly and easily opt out of having their work product included in data sets used to train AI models. Hello, First Amendment. Hello, right to read, right to learn. I can read and learn.
You can read and learn, but these machines can't? I think that's wrong. Saying that there must be disclosure when humans are being imitated — well, actors do that every day. You know, what are the standards we're operating against here? So, I think it's well intentioned, but I think it's a form of moral entrepreneurship, in a way. It's: hey, we see an opportunity, let's make an organization, let's raise a lot of money and keep busy with it.
I think it's kind of the wrong way to go. The other thing that I put in here, which to me is related — we had it somewhere, where did it go? You'll find it. A bunch of researchers put out an open letter. There it is.
Right, the next one: an open letter for a safe harbor for independent AI evaluation. What they're saying to the AI companies is: if we go in and we try to red-team, or we try to do research, you're going to sue us. We may be liable for things. We may be caught making it do bad things, because we're trying to make it do bad things to see what bad it will do. We need data.
This, to me, is a much better path right now: let's enable and empower the researchers to dig into these things and test them in all kinds of ways, and see what they can and can't do, should and shouldn't do, and what they can be made to do. And there are people in here I disagree with. Julia Angwin, whom I criticized last week,
I think is overboard on regulation. Renée DiResta is smart about this stuff. And Gary Marcus is in here. Brendan Nyhan, who is a brilliant researcher.
Justin Hendrix, who we should have on. These are smart people. And what they're saying is: empower the research. And to me, you can't do the regulation — you can't create the statutes and the terms of the regulation and the auditing terms and all of that — unless you have this research first, unless you know what's possible.
So I think this is a good and right letter and the right way to go. But we're going to see a lot of this debate right now, where people are going to try to grab turf and say, I'm going to save you from this horrible thing.
Yeah, yeah, indeed. And there was also another story in here that I added, I think at the very last minute, that kind of ties into this: trust in AI and the companies behind the tech is dropping, according to Edelman. Axios had an exclusive on the report, which says that trust in AI companies dropped to 53%, down from 61% five years ago.
In the US, Axios says, trust has dropped 15 percentage points, from 50% to 35%. And obviously that's what's driving so much of this behind the scenes. But here's the question.
Where's the chicken and where's the egg? Media all the time put out an agenda saying AI is dangerous; they've been hammering on it. First, AI was wonderful and amazing. Then, pretty quickly, the techlash comes: AI is dangerous, dangerous, dangerous, dangerous. Then they do a poll and they say, oh, people think it's dangerous — as if that's not a reaction to the media coverage.
Yeah, right. Yeah, that's true.
I don't think people were necessarily thinking their whole days about AI. But when asked now, what are they hearing about it? It's dangerous. Yeah. I don't trust it. Yeah. They shouldn't trust it to do certain things. Good point. Do you trust it?
Do I trust it? As a blanket? No, I don't. I don't trust anything as a blanket. Do I trust using AI for certain tasks that I believe it's effective for, within the confines of the space that I allow it? Then yes, I trust it enough for that. But I trust it only on top of the steps that I'm willing to take to be sure that it's right. Which I guess isn't total and complete trust. Yeah, exactly. At the end of the day, I do not absolve myself of any of the agency or responsibility when using anything that has to do with AI.
At the end of the day, it is incumbent upon me as the user of those products to be sure. That's why we're about to talk about bad bots. That's why when you implement, as TurboTax and H&R Block did, an AI kind of knowledge base for tax planning on your site, and it provides information that, according to Geoffrey Fowler, included incorrect filing statuses and erroneously described IRS guidance on cryptocurrency — that is the responsibility of the site that hosts the AI, right? We are the ones who can intervene to say: this is still a system that can get things wrong, that probably will get things wrong. So at the end of the day, we are responsible for making sure we make it right. Amen.
We've talked before on the show about what I will now call a matrix of responsibility: model maker, application, user. And in this case with H&R Block, as in the case of Air Canada that we talked about two weeks ago, it is at the application layer. It's the company that used the chatbot, and it's stupid — especially when it's your taxes. H&R Block, what are you thinking? By the way, you can get a subscription to the Washington Post, Jason. Yeah, apparently. It just popped up the box.
It gave me five seconds on the screenshot. It's like, yeah, that's it. That's it. If you want to read this headline, you need to pay more.
And this is a case, too, where it's just simple ethics. You don't need regulators to tell you that you should reveal it's a bot, that it can make mistakes. Totally. That you shouldn't take its advice unchecked — or maybe you shouldn't do it in the first place, because you know it can't do the job appropriately.
Yeah, and it's silly for any company to think that it can do something like this without disclosing those things, and then, when it runs into issues, to throw the blame on the technology it's using and not on itself. I'm sorry — at the end of the day, you're the one that's responsible. And then the other Bad Bot story, which I like. You put that in here as a subheading, and I was like, oh man, I feel like we need a bumper or something for Bad Bots. But that takes the show down a whole... yeah: Bad Bots, Bad Bots, whatcha gonna do? Amazon has a chatbot named Rufus that was announced in February — basically a shopping bot. Yes, still in development.
So at least there's that. But Shira — is it Ovide, or how do you say it? — Shira Ovide from the Washington Post got some time with it. And, understandably, it's in development; this is not a fully public-facing product, to my knowledge. But she said it just was not good at all. It conflated a lot of actual needs — like confusing kitchen composting materials with backyard composting materials. Overall, it offered very confusing answers to what a shopper might actually be looking for.
Some of it felt like it was just pulled off of a list. So if you're Amazon and you're providing a bot — I realize you've got to go through the bad before you get to the good, the really useful stuff, and Amazon has this amazingly vast catalog of products through which it has to do its best to guide its customers to the right thing. But hopefully the bot isn't just some cheap little thing where you could open up a browser, do a search, and find the same answers. Hopefully it actually gets some sort of understanding of what you're shopping for. Yeah.
Yeah. That's the level where you can test. Amazon knows itself. H&R Block does taxes, and they should test. The model maker doesn't know that people are going to use it for taxes and doesn't know everything about taxes. At the application layer, there's a lot of responsibility.
And finally, speaking of bots — real bots — before we close out the show: three different examples. I mean, we were just talking about Amazon. Amazon has a $1 billion industrial innovation fund that it's been using to work with AI and robotics companies, with the goal, at least from their perspective, of making, quote, their workplace "more efficient and safer for our associates" and increasing "the speed of delivery to our customers."
And fewer associates, probably. Yeah. Likely. I mean, one could imagine, right? Obviously. And then a couple of humanoid robot stories. Figure — funded by Microsoft, NVIDIA, and Jeff Bezos — now has a deal with OpenAI to deliver next-gen AI models for humanoid robots.
And I think if I can pull up the video here, let's see here. Oh, nope. That's an ad. Yeah. Okay. There we go. Here we go. I'll go ahead and mute it.
Well, I don't think you could hear it anyway. But anyway: a bipedal robot approaching a stack of cartons and reaching into it — getting some Boston Dynamics vibes here. Mm-hmm. They're not the only ones in the race here. I will say things move a little slow, but it's pretty impressive, pretty remarkable, to watch the mechanics of something like this happen.
If it can work 24 hours a day, who cares if it's a little slow?
That's true. That's a great point.
Yeah, it kind of doesn't matter. But all the panic about AI and regulation and all that — it just occurred to me, as I saw these stories in the rundown, that we're going to get a lot of techlash about AI via the robots. The AI on its own can be made scary enough, but robots in science fiction are scarier, and when the two combine, when they start showing that ability — I bet we're going to hear a lot of backlash.
Well, we were kind of talking about this a little bit in the pre-show, which is that really, you know, it's easy to look at like the march of the robots as like this funny kind of science fiction future thing that, you know, we all want. And it's just going to be, you know, dorky and never live up to our expectations and everything. But honestly, I think at the end of the day, this is the destination. The destination is can we create systems, first of all, that do the things that we want for them to do in the ways that we expect? And then it's, OK, well, then how do we take that from a glass, from a screen that can do these things into the real world? And I think that, you know, the humanoid robot that's informed by a large language model, you know, the processing power of artificial intelligence as that gets built out, AGI and all the other keywords and buzzwords you want to throw in there. You know, that I feel like is really the destination, whether people like it or not.
Yeah. In terms of accountability, it's not trying to come up with a history of Germany through the years; it's trying to do a task. Can it roast marshmallows — which is what we're seeing on the screen right now; that's Magic Lab. It does or it doesn't. Yeah. Yeah.
Magic Lab has its own humanoid robot, called the MagicBot, which, if you're watching the video version, is what you see here. It can roast marshmallows. It can fold clothes. It can even, groan, dance. Which just reminds me of the Tesla feature where the car does the song and light show — just the pinnacle of cheese. But anyway, I guess, hey, we've got to make our robots dance too, so that they hate us.
I bet we'll come up with more robot stories as the months proceed.
I think so. I really think that's going to be a large amount of where this is headed. Maybe not in the coming months — well, maybe so. I mean, that was three, right?
In this particular week. But I think this is the ultimate destination. Three is a trend, so all you need is three. That's all we need.
Mark it. It's set in stone. Jeff, thank you so much. Always fun getting to talk with you about anything, but also we get to talk about artificial intelligence every week, and I'm super grateful for that. So thank you for hopping on today. Appreciate it.
Gutenbergparenthesis.com. Anything else you want to point people to? That's plenty for now.
That works. Right on. For me: yellowgoldstudios.com. That's my YouTube channel, where you can find the video version of this show if you prefer to watch, as well as other things — I'm putting up some product reviews, thinking about doing some sort of AI product review here in the next week or two. So, yellowgoldstudios.com. We do this show every Wednesday; normally we record live every Wednesday, 11 a.m. Pacific, 2 p.m. Eastern. If you go to yellowgoldstudios.com, that'll take you to the YouTube channel, where you can actually watch it live as well — we're broadcasting this live to the channel right now. But I'd say the majority of you probably subscribe to the podcast, and that is perfectly fine.
We'll take it any way we can. AIinside.show has all the links that you need to subscribe to the show. And then finally, of course, if you want to support us directly, you can.
We really appreciate it. Patreon.com/AIinsideshow. That supports this show directly. I mean, from day one, we had people who were willing to throw us some cash on a monthly basis to support the production of this show and to build it up and make it bigger, better over time. We can't thank you enough and we hope that more of you will join us over on Patreon for some extra perks and that sort of stuff.
Patreon.com/AIinsideshow. And really, just search for @AIinsideshow anywhere across social media and you will probably find us, Jeff Jarvis and I. I'm Jason Howell. Thank you so much for watching this episode of AI Inside, and we will see you next time. Bye, everybody.