In this episode of AI Inside, Jeff Jarvis and Jason Howell explore the tempering of expectations around generative AI, the promise and concerns of open-sourcing models, and the unique ways AI is being leveraged in journalism, elderly care, and even digital recreations of deceased celebrities.
NEWS
- OpenAI's internal investigation into Sam Altman's ouster as he returns to the board
- Google implementing safeguards restricting election-related information on its AI
- Expectations around generative AI capabilities and impact being tamped down
- Open-sourcing AI models and the debate around it
- Elon Musk's vague promise to open-source "Grok"
- The use of AI in journalism for analysis and data processing
- Midjourney's new character reference feature for consistent image generation
- AI-powered companion dolls for elderly companionship
- AI application for bereavement assistance
- Digital recreation of Marilyn Monroe's voice using AI
This is AI Inside Episode 8, recorded Wednesday, March 13, 2024: The Value of Open Source AI. This episode of AI Inside is made possible by our wonderful patrons at patreon.com. Hey everybody, welcome to another episode of AI Inside, your only weekly source for artificial intelligence news. The top news is always about the obvious companies.
I think the really interesting stuff is going to start happening with open source use of models. And the media pay attention to the big companies and they quote the same old white boys all the time. And I think there's going to be interesting stuff that bubbles up. So yeah, we have to search for that more.
Yeah, agreed. And we're going to be talking about open sourcing of AI models here a little bit later in the show, so that's a little bit of a tease. Just real quick, thank you again for leaving your reviews, downloading the show, subscribing. It's all super important. Please continue to do that. Of course, you can find the show at aiinside.show, and you can support us on Patreon. If you can throw us a couple of bucks, that actually powers the show from behind the scenes directly, not relying on things like ad insertions and all that kind of stuff. And in fact, if you're a patron at patreon.com/aiinsideshow, you get access to an ad-free feed. So if you've noticed ads starting to appear in the downloaded free podcast version, hey, I've got to pay the bills somehow, but you can get rid of them.
Just go to patreon.com slash AI inside show like Mark Biggy. I don't know if that's how he pronounces his last name, but he put it in parentheses, spelled it out Biggy, even though his last name isn't spelled that way. I'm not going to spell it out for you, but Mark Biggy, thank you so much for your support and everyone else who supports us directly inside of our Patreon. Couldn't do it without you, like literally, we could not. So thank you.
Okay, with that said, it's time to talk a little bit about the news of the week. And yes, now I'm kind of wishing I had taken the open source AI stuff and maybe shuffled things around, but my brain is kind of locked into the order that it's already in. We'll start that next week. OpenAI wrapped up its internal investigation into the events that led to Sam Altman's ouster last year, which was just a crazy, chaotic sequence of events. And so they, as a company, had to spend the time to really look and see what happened. It was a very short amount of time that Sam Altman was out, and then he was brought back in. And they have the summary, kind of the breakdown, of what they saw happened that led to the board removing Altman. They said it was a breakdown in trust, but also that his conduct, quote, "did not mandate removal." So upon review, they're saying, okay, that probably should not have happened. I can't help that there's a part of me that's like, yeah, but he's in there now; how much influence does that have over the findings?
You know what I mean? It's kind of like when a company hires someone to do a report about how they're being responsible. It's like, well, you're paying them, so isn't there something going on there? There's got to be some sort of influence.
In the immortal words of Gomer Pyle, and this reference probably won't mean anything to 90% of the audience: surprise, surprise, surprise!
He is back in full power and on the board. And they've added three other people to the board, which is interesting, since it was the board that got rid of him before.
And so the board is reconfigured now. I know one of them, Fidji Simo, who was for many years in charge of lots of things, including video, at Facebook/Meta, and is now the CEO of Instacart. I think she's doing a really good job. I trust her. Every time I've seen her, I've thought she's a smart, decent, ethical, and creative person. So that made me feel good to see her there. The other two board members are Dr. Sue Desmond-Hellmann, who's on the board of Pfizer and the President's Council of Advisors on Science and Technology.
So that sounds pretty good. And Nicole Seligman, who's a civic leader and a lawyer on various company boards: Paramount Global, Intuitive Machines, and so on. So, you know, boards are funny, because if you're a smart CEO, you get your own people on the board. And after Altman learned his lesson last time, you can bet that he got sympathetic ears on this board. So we'll see. The same odd structure exists. Altman is still TESCREAL'd up. The company still says it's going to do AGI.
So God knows where it goes. But this is the end, at least for now, of that saga. And he did express a little bit of remorse when he posted on Twix.
That's what I'm calling it these days. He posted regarding an event that happened prior to his removal. And I don't think he named it directly, but from what I read, there was an attempt on his part to remove Helen Toner from the board. She had published a research paper that was critical of OpenAI's product launch velocity, if you want to call it that: kind of their speed to market with some of this stuff.
And Altman seemed to reference this in his Twix post, but didn't specifically call out Helen in the response. So it just kind of leaves you kind of interpreting, I suppose. But he said, I think I could have handled that situation with more grace and care.
I apologize for that. So probably part of it. Yeah. Yeah, right. Oh, that's true. That's a good point. Because I mean, it was right there in his primary post. And
they're still negotiating whether some of the rebels are going to continue with the company or not. So things aren't fully settled down yet. But Altman's in charge. No question about that. Yeah.
Yeah. No question about that. And the March to AGI continues. Legally.
I was on the Le Batard podcast earlier this morning. I used bad words we don't use here, but we can use there. And they were asking me, this is a sports podcast. I'm the last person to appear on a sports podcast ever, but it's always a lot of fun. Yeah, interesting. When he said we're going to talk about journalism, Stugatz, one of the co-hosts, walked off because he didn't want to talk about journalism because it's boring.
So that's fine. And then they started off, to my surprise, asking about AI and artificial general intelligence. And I just said it was all BS and crap and macho organ-swinging, and other words to that effect. And they were laughing their heads off because they didn't expect that kind of view, but that's the kind of blunt talk that we like here about AI. Yeah.
Yeah. Indeed. How did you get invited on a sports podcast? Oh, I don't know.
I've been on a couple of times. It's really fun to go on. I wish I were on it more often, even if I do force co-hosts off the chair because they can't stand journalism.
It's all part of the fun of these things. Absolutely. That's so interesting. I'll have to check that out. Another week, another Google Gemini safeguard, this time restricting election-related information. And this is not just here in the US, obviously, where we've got a pretty significant, to put it lightly, election coming up in a handful of months. It's also in India during its peak election moment; I think that happens next month, in April. Yeah, we're not in April yet.
God, I lose track of time so easily. Anyways, they say, out of an abundance of caution on such an important topic, we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses. So these are questions related to politicians, candidates, political parties. If you do a search on any of that stuff, you're going to get a response that says, "I'm still learning how to answer this question."
"In the meantime, try Google Search." So yeah, they're just going full nuclear, removing anything related to elections. Is this a good idea? I mean, we talk about guardrails all the time. Yeah, I'm becoming a bit of a broken record on it.
But I found another post, which I added into our rundown, from the AI Snake Oil newsletter, which I think is very good. Yes. From Arvind Narayanan. Yeah, thank you for alerting me to that.
And Sayash Kapoor. The way they put it, I think, is very succinct. They said that AI safety is not a property of AI models. And what they mean by that is that the model on its own is general and can do anything it's told.
And so the idea that you can build safety into that is a bit of a fool's errand. Where it matters is at the application level, which we've talked about on the show before: what's done with the model. It's a mistake to think that the model itself can do this.
So they gave examples. One is spam, or not spam, phishing emails. If you try to get the AI not to write a phishing email, well, the AI is not going to know that it's phishing, because the phishing occurs outside the email, in what you link to, and the AI is not going to know it. And if you tell it it can't make phishing emails, then it won't make any emails. And we want it to write emails. Right. Right. So that's a good example. Very good example.
Yeah, it really is. Yeah, that's spot on.
And then there's knowing whether there's malicious use. You really can't know that, they argue, until afterwards. And they also argue that red teaming fools you into thinking you can anticipate everything bad that could possibly be done with this. And again, you can't. So where does the responsibility lie? You do need to try to figure out what's done with it. You do need to try to understand. But we've got to recognize, just realistically, that this is a general tool, like a printing press.
Sorry for my Gutenberg moment; author of The Gutenberg Parenthesis, on sale now. You can all drink now. But it's a general tool. Well, it's like a computer.
I mean, a computer is a general tool. Exactly. You can do countless, unlimited things with computers. Many good things and many not-good things.
It would be as if we expected Microsoft Word to stop you in the middle of typing: no, you can't type that. Nope. Not allowed. You can't type that. Or as if we held Microsoft responsible for what all of us type with it. It is that. Right.
So, right now, that's not a comfortable thought, but it's reality. And, you know, what are we going to do, outlaw AI? Well, define AI. What's AI versus so much of the computerization that's out there now? I think this comes out of stupid media coverage and reflexive regulation. And we're not having the intelligent conversation we want, which is: yes, there's risk; yes, there are issues; and we've got to recognize that head-on rather than fooling ourselves that we can be safe behind guardrails. I think that's what Ben Thompson was trying to say in what you posted. The best use of AI for me to date is to take a full-blown Ben Thompson column, honest to God, put it into Gemini, and have it summarize it for me.
I don't fault you one bit for that, Jeff. I've done that in a number of different instances.
I got a life. I got things to do.
So I love his work, and he is amazing and brilliant. He's a really interesting thinker and very, very comprehensive. Oh, yeah. And yet, I came across this really just kind of connecting the dots of where Google is at right now when it comes to this moment in AI, and how they are as a company. You, of course, do This Week in Google every Wednesday for Twit.tv, actually a few hours after we do AI Inside. And when I was producing that show and more actively involved behind the scenes on it, a topic that would commonly come up, and probably still does, is the performance of CEO Sundar Pichai. I guess I'm kind of curious, in this moment where so much seems to be riding on the major tech companies' contributions and projects that have to do with AI.
And when we're talking about Google, this is a huge, huge part of its business right now, and it stands to be an even bigger part of the business going forward. Yet we continue to see these kinds of political backlashes. We see the user backlash of, oh, why is the AI producing these images that are historically inaccurate or offensive, and, you know, shame on you, Google. Does this come down to an ineffective leadership ability on the part of Sundar Pichai? Is this the kind of thing, as Ben was illustrating in his writing, that potentially leads to a change in leadership at Google? Do you think this is big enough to encourage that?
I read his column very quickly, so tell me if I got the wrong bits here. Is Ben also saying that the "don't be evil" promise leads to a certain timidity at Google? Is that kind of what he says, or did I get that wrong?
You think? Yeah, I mean, based on what I have read, because I have not read every single word. But yeah, I think in general... I think we're honest here. Quit laughing at me. I'm just being honest. Yeah, exactly. But I do think that if Ben's not saying that specifically, 100%, a lot of people are, which is that the ethos of the company has shifted. It's so obviously not the company it used to be.
And now it seems to be directed by a different kind of guiding light. How is that impacting the health of the company going forward? Does the CEO subscribe to that? Is the CEO effective in that role? Or does it need to, yeah, kind of return to its roots?
I think the blame might end up on the wrong foot there. Because the company is made timid, and I think that is true; I think they are timid these days. And when something happens, like, oh my God, these pictures won't show white people, then they take things down; they get scared. I think that was an overreaction on Google's part.
I think it was a sign of chronic timidity to do that. But, as Eric Schmidt always said, when you are the biggest, you're going to get the most attacks. You're going to get blamed for everything about the internet, and now about AI. So does the blame for the timidity lie with Sundar and the company?
Or does it lie outside, with media and politicians who want targets? And would we rather Google act like Google, or would we rather it act like OpenAI and say, we're building AGI, pull out all the stops? Or like Marc Andreessen's effective accelerationism: do nothing to stop us. I don't think they could do that. I don't think Google thinks it can get away with that. They're under such pressure and such regulatory reflex. Google is in a really hard place, between two rocks, which is where I think he finds himself. He's got to be daring and bold in innovation, but he's got to be timid politically.
And it's hell. I just tried to get somebody to appear on a panel for an event I'm doing at the end of April on AI and journalism, and the person I desperately wanted said, oh, I can't, my comms department won't let me. Right. So the comms department's power in these companies is surprising, because that's about risk management. And now we deal with a machine that has nothing but risk. AI, in one view, is pure risk, because you cannot control what it will do. You cannot predict everything it will do. You cannot know what everyone's going to try to make it do. So it is a pure risk machine in an ever more risk-averse environment at the biggest public company around. So I'm not sure what we should expect from Google at this point.
I don't know if they're screwed by their own success, which is what I guess I'm saying. And I've debated this. In my next book, I'm plugging my books again, The Web We Weave, I argue that it's time for the geeks to be demoted. And I ask: if you wanted to come up with a company that would organize all the world's information, should it be run by a coder, a programmer, or should it be a librarian or a journalist? If you want to start a company that gets everybody together with their friendships, should it be a geek with a little social awareness, or should it be a party planner? So I think at some point the geeks recede, and we understand these tools differently, and we understand their leadership differently.
But I think we're a ways from that. So basically, I don't want to say that the president of Google is in an unenviable position, because I would take his salary. Is it worth the trouble? Yeah, he's probably paid well enough. But it's a hard job. It's a hard job.
Yeah. And you can't please everybody in that scenario. Well, he's obviously proving that. No matter what you do, on either side, you're going to upset someone. You're too reactionary, or you're not doing enough. You're destroying the world, or you could be doing so much better. Why aren't you at the top? You just can't win. I'm still pissed
they're not making Pixelbooks, but that's another issue.
You will eternally be pissed about that, because the Pixelbooks were pretty good, and they're still not making them. So I totally agree. Totally agree. I thought this was a little interesting, and I'm super curious to hear your thoughts on it. We were talking a little bit about elections just a few minutes ago. Using AI, Jason Palmer just beat President Biden in American Samoa's Democratic caucus.
Now, why are we talking about that right now? Well, he is a long-shot candidate. He entered the race last November and never actually set foot in American Samoa, yet he used AI technology to interact with voters in the territory.
And ultimately, that led to him securing an 11-vote victory. What this came down to was communicating with voters via SMS and email using an AI chatbot that they called PalmerAI: answering questions from voters, replying in Palmer's voice and likeness. It had an avatar that they trained on his image. And, rightfully so, the emails began with, "My name is Jason Palmer AI..."
"...an AI working for Palmer for President." And they would end with, "This AI-powered system can respond to you." So they were upfront about the fact that this is an AI-driven chatbot. They say they put very strict guardrails into this particular implementation of AI, which I imagine you kind of have to if you're running for president. Talk about high stakes, when you're working with a technology that can be so unpredictable. They built it on a tailored corpus of information that was pulled from things like his public statements, his policies, his professional history, and then, of course, other election-related and politically related topics. All in all, he spent about $25,000 to build this avatar, spent $5,000 on his American Samoa campaign, and used this technology to secure the victory, at least in that territory.
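To make the "tailored corpus" idea concrete, here's a minimal sketch of how a bot like this might ground its replies and lead with a disclosure. The corpus entries, the naive word-overlap retrieval, and the answer stub are hypothetical illustrations, not details from the actual PalmerAI implementation.

```python
# Hypothetical sketch of a corpus-grounded campaign bot.
# Nothing here reflects the real PalmerAI system.

DISCLOSURE = "My name is Jason Palmer AI, an AI working for Palmer for President."

CORPUS = [
    # In a real system: public statements, policy positions, professional history.
    "Palmer has spent two decades working in education technology and investing.",
    "Palmer's platform emphasizes broadband access for the Pacific territories.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank corpus passages by naive word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(CORPUS, key=lambda p: -len(q_words & set(p.lower().split())))
    return ranked[:k]

def answer(question: str) -> str:
    context = " ".join(retrieve(question))
    # A production bot would hand `context` plus guardrail instructions to an
    # LLM; this stub just shows the disclose-then-ground shape of the reply.
    return f"{DISCLOSURE}\n\nBased on the campaign record: {context}"

print(answer("What is Palmer's professional background?"))
```

The shape is the point: every reply opens with the disclosure, and the bot speaks only from the curated corpus, which is where the "very strict guardrails" would live.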
And I'm curious to hear your thoughts on this. I mean, it kind of sounds like it was done more or less responsibly, and he was forthcoming with the fact that it was AI. So I'm guessing if there were any people who were fooled, it's because they didn't actually read what was there in front of them. But what are your thoughts?
Underdogs and porn are going to make the first and most creative uses of new technologies. Right.
So if you go back, I've heard the porn part.
I haven't heard the underdogs one. You know, Howard Dean back in the day was renowned for having made some of the first uses of the Internet in his campaign. Barack Obama was very smart about it as well.
And so this guy comes along: A, he sees a tool and he uses it. B, I don't think that the Biden campaign was spending penny one in American Samoa, so he was the one person giving them attention, all 300 of them or however many voters there are. And then C, again, I think that it's just a tool. He used it to write emails, he used it to answer questions, he used it to get some PR and attention. And that's all fine. But it was just a tool. I don't think it made it a special campaign; it's just that he made smart use of it, and good for him. But sorry, Jason, you've already lost. So nice knowing you. Not you, Jason. Yeah.
I was like, oh, that's right, his name is Jason too. Yeah, but I think you're probably right that the underdogs are the ones to recognize that, hey, maybe there is a gain to be made here by doing the thing that the major candidate is less inclined to do because of, like you were talking about earlier, the risk factor. Talk about high stakes; it doesn't get much higher than this. You don't want your avatar to make a horrible gaffe that turns an insane number of voters off to you, because ultimately, at the end of the day, you are responsible for what your avatar says in your likeness and in your voice.
So, hey, I give props for seeing an opportunity and trying it out. I highly doubt that Mr. Palmer like expected anywhere along the line that he was going to, you know, suddenly make this wave in the United States and become the next president of the United States. Maybe he did. Maybe that's what every candidate needs in order to actually do what they do and put themselves on the stage like that. But that's a pretty, you know, pretty impressive outcome considering, you know, all things considered. It's figuring out the technology and using it to your advantage.
I like it. Okay, you put in a story here, and actually we're going to get to it in a second, about generative AI expectations and how things might be changing. That's coming up in a moment. All right, you have a story from The Information about Amazon, Google, and I think other companies. I've seen similar sentiments being shared in random articles here and there: maybe a shift in attitudes inside the companies that just a year ago were 100% all in on generative AI.
The next big thing: we've got to throw tons of money and tons of people at it, and we've got to make these systems as good as they possibly can be, because if we aren't there, then we're losing out. And now, maybe the proof isn't in the pudding as much, and so some of those expectations are shifting. Tell us a little bit about this.
Yeah, I think The Information piece says that they're trying to tamp down these expectations. And I think that happens in a few ways. One is about the capabilities: is it really going to be AGI? Is it going to do all these things? Is it going to replace everybody?
Two is the impact: is everybody going to lose their job? And there's all of this motivation, as a result, for companies to use it to try to save money. And three is the financial impact on these AI companies. So I think they're trying to bring down expectations so that they can over-perform, not underperform.
And the level of performance was stratospheric. Every job is going to be replaced. We're going to change the whole world. Nobody's going to need to work anymore.
It was ridiculous, absolutely ridiculous. And what I really sense in this, it doesn't say it in the story straight out, but I think this is all about trying to avoid the crash of the next bubble. Oh, yeah. I think the AI bubble is out there. And it gets too big in people's minds: you said it was going to do all this stuff, we bought stock on that basis, and Klarna got rid of a bunch of its employees because of this. And then they find it doesn't work so well.
What's going to happen to the stock prices of these companies? So I think they far overdid it with the expectations before. It's that macho hubris of Silicon Valley: we are masters of the universe, look what we're going to do. But they should have learned this before. They should have learned this in the year 2000. They should have learned it in 2008. And they should learn it now: they don't know all it could do, and they don't know what the impact is going to be.
And I think they have to hope that they can get this new message across. I do wonder how this story came out. Did it come from the companies saying, hey, we're trying to quiet things down now? Did it come from the reporters hearing people say this? I'd be curious to hear the behind-the-scenes of the story, and how this message is getting changed out there in the Valley and with media.
Yeah. Yeah, it is. It is really interesting. I think what occurs to me is it's a beast of their own creation. Because when a technology like AI comes on the scene or say maybe it's the metaverse or maybe it's any, insert your current technology fad from the past 10 years here, it's like in order for these things to build up steam and build up momentum, you need really important people talking extremely highly about them to start pushing that snowball down the hill. So it gets bigger and bigger. There's got to be the investment. There's got to be the excitement, the potential.
All those things have to be there. And so, and these companies also don't want to not be where everyone else is. So they're all incentivized to play by the same rule book and really pump up these things when they recognize a potential opportunity.
Because if they sleep on it, then they stand to lose a lot in more ways than one. And so it's like they've created this beast. They've pumped it up. They're super excited. And now they realize they need to shrink the beast a little bit to cover their assets.
Yeah, they can go overboard there as well. The story ends with Khan Academy and Sal Khan. Do you want to tease it? Yeah, sure. Go for it. So Sal Khan will be appearing here on AI Inside in an upcoming episode, which we're delighted by. That's right.
In a couple of weeks. But he tells The Information that more than 100,000 teachers are using his chatbot to draft lesson plans and guide students through activities. Over the past year, he reduced the price of the chatbot from $20 to $4 a month, because prices are coming down and because there are improvements in what OpenAI is delivering to him.
And he hopes to have the number of users up to a million by the end of this year. So it's still a success story there. It's still doing well, but you've got to present it accurately and take out the bluster. That's real hard for Silicon Valley to do. It's an industry built on bluster.
Yeah, they get excited about things. It's hard for them to not show that excitement in every way possible. And that's why you end up seeing them falling on their face when things don't quite go the way they had planned.
And speaking of bluster, the next story in our rundown.
Excellent. Excellent setup.
This was just earlier this week, I think. Elon Musk twixt, if we want to call it that, quote: "This week, xAI will open source Grok." And that was it. That was the entirety of his Twix post. This is, of course, following up on his lawsuit that we talked about the last couple of weeks, alleging that OpenAI didn't stick to its not-for-profit mission. So there's no real word he's sharing here on what aspects of Grok might be open sourced, if and when it happens, I might add; we haven't seen it happen yet. Musk did reply to a comment on that post to say, "OpenAI is a lie." So obviously a grudge that he holds has something to do with this announcement. Part of the grudge is to say: hey, they didn't stick to their plan the way I expected them to; I'm the good guy here; I'm open sourcing mine; and, I guess by extension, implying that they are not, and maybe they should. I don't know.
Yeah, and it's not as if Musk is very good at irony or understanding hypocrisy, because he speaks out of both sides of his mouth so often. But here, if he's screaming at OpenAI for not being open, then he's kind of got to be open himself. And as you point out, in something you put up from his Tesla blog ages ago, he was also promising to open up all his patents. Not just open source software, but that as well. And I have no idea whether he's done that, or how much he's done that.
Yes, that's right, that was 2014. He moved to open source a number of Tesla patents at the time. Yeah, that's a good question: what is the status of those open sourced patents now? My understanding is that he did do it, because he says in this post from 2014: "Yesterday, there was a wall of Tesla patents in the lobby of our Palo Alto headquarters. That is no longer the case. They have been removed, in the spirit of the open source movement, for the advancement of electric vehicle technology." This is 2014, so this is about the time he was also getting involved with OpenAI, if I'm not mistaken. About then, yes. About then.
But here's one question: do you think Grok really exists?
I don't know. It kind of seems to be that elusive thing where, okay, I keep hearing about it, but show me the money. Maybe not the money, but show me what you got. Yeah, it's a good question. I'm sure that some form of Grok exists, but when do we get access to it in the way that we do everything else, its capabilities and so on? What's it capable of? Yeah, exactly. What does it mean to be a truth-seeking chatbot? We know that's the lie. It can't be. Truth-seeking. Yeah, that's true. Well, it's not truth-speaking, Jeff. It's just seeking the truth. But it never finds it. Never finds it. Looking for an honest bot. Yes. Yes, indeed.
But I think at the core of this is the question: is open sourcing AI models a desirable destination for this technology, to democratize it so it isn't locked up in the hands of the billionaires who own the companies and are able to direct what the models are capable of and what they're not? Obviously, that has the benefit of giving everyone access to these tools, but also the disadvantage of giving bad actors access to these tools. But like we were saying earlier, it's like a computer. You can do a million horrible things with a computer.
And yet we all have them. There is no real control automatically built into my computer to prevent me from sending a phishing email, because it's just impossible to control a tool like that. Yeah. It goes back to the piece I referred to earlier, the AI Snake Oil newsletter. Part of the argument around guardrails, which we keep coming back to, is that open source is the enemy of guardrails, because people can get around them. Well, they can anyway. Right. And it fools you. And I think the arguments in favor of open source are many and good: there's more accountability, you can know what the software is doing, and it opens it up to use by more players who otherwise couldn't afford it, academics and small companies and startups and countries that don't have the huge amount of resources that we have. So I think open source is critically important to trust. But this area is so filled with paradoxes that there are those who say open source is the enemy of trust. Yeah, yeah. So I still don't trust Musk, but I hope he open sources it.
Yeah, I think that's a move I would be happy to see if it happens. He alluded to this happening right this week. Right. What does that mean? Does it mean, we're going to release this open source code to you this week? Or, we're open sourcing it, but it's still going to be an eternity? I don't know. I guess we'll find out. We'll see if he even follows through on this. He says a lot of things, and then they don't happen. So we'll see.
Jeff, before I looked in the rundown and saw that you added the link to this, I spotted some FOMO in your Twitter feed. You were sad that you weren't at South by Southwest, and in particular you called out a talk by Zach Seward, who's the new editorial director of AI initiatives at the New York Times. I think he's only been there a couple of months. And I'm curious to hear your thoughts on this.
I haven't had South by Southwest FOMO in many years. I haven't gone for a long time. Neither have I. And I really didn't care. The funny thing is, by the way, just parenthetically, my social feed had virtually none of it, even though my German friends tend to go. I had to double-check and be like, oh, South by Southwest is happening?
Yeah, I think it was. So Zach Seward, who's brilliant. Zach was formerly the head of Quartz, and before that at the Wall Street Journal. He's a really, really wonderful journalist and editor. So he went to South by Southwest to do a talk, "AI news that's fit to print," a joke on the New York Times there. And he does the necessary caveats at the beginning, going through the mistakes: CNET, Gizmodo, Sports Illustrated, and such.
The Sports Illustrated one, I just got to say real quick, I don't know where I was. Like, I must have been under a rock or whatever. But that one got past me.
That's crazy. That was a bad one. They created entire avatars, like fake people, and it seems like they passed them off as real, right? Which is even more appalling when you make up fake diversity. You don't really do it for real.
But you try to hide behind the virtue of diversity by making it up, which means it's not diversity. But anyway, again, don't blame the machine. Blame the idiots who did it. Yeah, right. Exactly.
Totally. So then he does what's important: he tries to recognize things that are going on with AI that are smart and good in journalism. And there are a bunch of examples. Quartz uses it for pattern recognition. Grist, which is an environmental not-for-profit newsroom, and the Texas Observer have used it for big data. BuzzFeed News identified texts and patterns in numerical data.
So it's not really about generation; it's about analysis, which I think is so important. The Wall Street Journal used it for investigative journalism. I didn't even know about this: how much lead cabling remains in the U.S.? I didn't know it was there and a danger, but it is. And the New York Times has done interesting things with the war in Gaza, recognizing things in satellite images. There's the Marshall Project creating sense with generative AI, watchdog reporting in the Philippines, and so on.
So you could look for Zach Seward, or just go to zachseward.com, and you will find it. It's Zach with an H, not a K. He's a brilliant journalist doing great work. And I think it's so important to look at this, again, as a tool. It is a tool in the hands of us humans, and we can use it in good ways and bad ways. And that's what comes across here.
So I think it's just the right attitude we need to have about AI in this. And at my event on the 30th, I'm going to have the editor of VG, one of the Schibsted papers. In our episode number two, we interviewed the CTO of Schibsted. I also hope to have Aimee Rinehart, who you and I interviewed in a prior life of this podcast, from the AP, and Gina Chua from Semafor.
And I'm looking for a fourth for that hand of bridge. But it's journalists who are doing smart and good things with AI. And we want to change the conversation away from "danger, danger, Will Robinson," and away from stupid, stupid Sports Illustrated, to: here's what it can do if we use it well. So I was just glad to see this, and that there was so much interest in it. And I think that's the conversation we want to have about these tools.
Yeah, I think what comes up for me around this is interesting, because in the past year and a half, when we look at generative AI, the thing that wowed people initially was: oh my goodness, this LLM. And we didn't even know what an LLM was; most people had never even heard that term.
But this system is able to write similar to my bad writing. That's amazing. I could convince myself that this was me writing this thing.
And I've never seen a machine do that so convincingly before. And through that experience, a lot of us jumped to: oh, well, what are the many ways in which we can use these tools to write for us? And what is a job, a career, an industry that is built on the foundation of writing that this could help with? Why, it's journalism. So we'll get the AI to write all of our journalism, and that'll make things easier. What I like about this report is that it points out: hey, maybe that's not the right approach when you're talking about journalism and AI.
Don't throw the baby out with the bathwater and say, hey, these things just aren't compatible because it's obvious it doesn't work. But rather, ask: what can this system be incredibly useful for, for journalists, that isn't necessarily the writing part? That is polling, researching, creating a foundation around which the human can write their piece, or a better understanding of the data they're researching. That really is the power of this technology. When I think of the medical industry, and what we're probably going to see in the not-too-distant future, of these AI systems trained on data sets that are so specific to particular types of cancer or whatever, suddenly these are tools that make the people who know how to use them incredibly effective at doing things that are important and impactful to the lives of humans.
Once again, that's the theme of the show. The big difference: don't oversell it. It's not going to save millions of lives right off the bat. It's not going to be better than doctors right off the bat. It's a tool to be used, and we have to explore it, and we don't know its full capabilities yet. Yeah.
But in those capacities, it's pretty darn promising. It is. It absolutely is. Yeah. Recognizing patterns, pattern recognition like that, anomaly detection: that's something that's going to be a big deal. It can take a human a long time of training to get to the level of being able to recognize those patterns. And if the AI systems can be trained appropriately, maybe that is a tool set that makes it easier or more effective for the human who's doing that analysis.
Yeah. If you think back, what made the personal computer was not word processing. It was spreadsheets.
It was the ability to say "what if" for businesses. Yeah. And that's what spread the personal computer, and that's the real power of it.
We might have thought that personal computers would all be about text and writing and Gutenberg and all that. Yeah. Yeah.
That too. But the power ended up being in this opportunity to ask "what if." The power of AI right now is in analysis: you know, what's there? What's weird? Super true.
What's weird is the imagery that Midjourney pushes out. I don't know, that was my attempt at a transition. Midjourney is rolling out a huge, much-requested feature. If you've worked with any image generation systems, you might notice that consistency of a particular character from generation to generation is not very good, right? These diffusion models are almost always working from a prompt to create something entirely new; there isn't a whole lot of iteration from one thing to the next that retains that character model along the way. And I think this is one of the signs of these technologies maturing, because now we create really interesting-looking things and really convincing-looking images. But how do we keep them consistent in the way that creators, film directors, people in the creative industry have just done by default, because they're the ones drawing the characters or using digital technology to create them? They have that in mind. How do we get these AI systems to have that in mind as well? Midjourney has announced that they have rolled out a new feature to facilitate this: character references, it's called.
It's --cref. If you're writing a Midjourney prompt, you add that to the end of the text prompt along with an image URL, and it matches character facial features and body type; you can even do clothing. So if you reference an image of a particular type of clothing, you can say, put this person in that clothing, and it can match those things up. And then, of course, as a user, you have control over the weight of these transformations as well. You can dial that weight down so it's not matching as much, or you can dial it way up, and it keeps and retains that consistency from image to image.
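For reference, a character reference prompt looks roughly like this; the image URL is just a placeholder, and --cw is the character weight parameter:

```
/imagine prompt: a woman in a red raincoat crossing a rainy city street --cref https://example.com/my-character.png --cw 100
```

At --cw 100 it tries to match face, hair, and clothing from the reference image; dialing it down toward 0 narrows the match to the face alone, so you can restyle the outfit or setting while keeping the same character.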
You can kind of see, if you're watching the video version, a similar character in all four of these. And I think this kind of progress is going to bring tools like this even further into the creator tool set, for things like motion pictures. Storyboarding will be a big use for something like this.
Yeah, I think the word you just used, progress, is important in this sense, because right now all of the progress and all of the learning occurs pretty much at the model level, not at the user level. Because every time you come in and do something, it forgets you.
It doesn't learn something for you. And I think that's going to be so critical to how people use this stuff, though, it's also going to bring more risk, because you could make it worse and worse and worse. You can make it dangerous. You can do all kinds of things. Yeah.
But right. You know, we talked before, I think, about the way I look at media: we go from presentation, which is a newspaper, to links, which is social and search, to generative, which is where we are with AI. And the next step is agentic: agents.
And so I think that to get to agents, it's got to remember you and know you and know what you're looking for, and have learned it. So I think this is all just a step along that kind of learning process.
Yeah. Yeah. That's super true. It's got to know us if we're going to get what we're looking for out of the agents in the future. And actually, this consistency challenge reminded me of a link that you had included, a Wall Street Journal article slash video. Joanna Stern, who's awesome, chatted with OpenAI CTO Mira Murati about the Sora video generation tool. And if you watch the video through, you'll see moments that I'm sure we've all recognized in some of these video generations: the foreground is really amazing, but if you focus on certain things, like one example where there's a person in view and a car passing behind that person, before it passes the person, the car is one color.
And when it comes out the other side, it's the other color. And, you know, Sora is obviously working on this consistency problem too. I think they all really are at this point. These things are just getting better and better.
It's a good interview. I watched the whole thing, and I found it informative. But if you put it back up for one second, Jason: what irritates me about this was the headline. "OpenAI made AI videos for us. These clips are good enough to freak us out." Freak us out? No, they didn't freak you out. Panic! They didn't freak you out. You were totally calm. Panic!
A: what you're trying to do is freak out the audience. Yeah. And B, this is part of my moral panic...
Take another drink.
...view of how media approaches this story. It's fascinating. It has its limits. The discussion includes the limits. It talks about the things to guard against. It talks about what needs to be done in the future. All of that is a very reasonable discussion. But it doesn't freak us out. That was a lie. Yeah.
Like, yeah, what is it about this that, quote, "freaks you out"? Is it: it's so powerful, it's interesting? Or: it's so powerful, it's going to take my job, and I'm freaked out about that? What is the freak-out, exactly? The headline.
Other than being kind of clickbaity. Exactly. Yeah. Meant to intrigue you, or confirm your own bias about this particular thing, and pull you in as a result, whatever that may be. Yep.
So now, speaking of freaking out, we come to the eerie corner of AI. Yeah. A bunch of eerie stories.
Totally. Well, I came across this and I saw a lot of reaction. This is a Wired story, and again, it's right there in the title:
"Welcome to the Valley of the Creepy AI Dolls." I came across it on social media, I think on Twitter or something, and so many of the responses kind of verified or doubled down on the, oh God, that's so creepy, that's so weird.
An AI doll. This is so strange. Yet without really understanding, or maybe they do understand and still find it creepy, the context around a doll like this.
So what am I talking about? This is an AI doll shown off at Mobile World Congress by a Korean company called Hyodol. It's an $1,800 AI-powered doll, and it's meant to be a companion for people who are lonely, people who are in long-term care facilities.
Of course, in Korea, what was it? They're expected to have this insane number of elderly people with no children or grandchildren to visit them, right? So you've got a lot of elderly people who are missing companionship and don't have it in their lives. And that, I think, is the idea behind why a doll like this even exists. It can converse, it can talk. Like I said, it's powered by AI, so it taps into those models to talk with you and to offer health reminders. You can do things like set it to a diabetes mode, so it pays attention to its owner and offers suggestions and reminders about what not to eat. It has sensors, so it can react and respond to touch, or ask you to hold its hand. It also reacts to movement, or lack of movement. That's important because if it's paired up with a human who doesn't have contact with a lot of other people, and say that person is not moving and it's been a set amount of time, there's some curiosity programmed into the doll to alert somebody to go and check on the person. I see something like this, and I see it as being very easy to make fun of: oh yeah, who would want companionship with a doll? But at the same time, these are people who have no companionship whatsoever. And I've seen some videos with elderly people who have been using these in Korea, and they are into it. I don't know how to explain it. They are getting the companionship that they crave, while also saying, you know, it's not human companionship, but it is some sort of companionship. And it's reduced incidents of self-harm in the people that have been using it. I just think it's a really interesting development that's easy to make fun of, I guess is my point.
Yeah, I think you have a good point, Jason. I think that if you abstract it, it's pretty weird that we invite wild animals into our homes. That's true.
And I'm not counting your kids. It's kind of weird if you think about it: we're humans, we're a species, and we have a whole other species come in.
And why? Because we get companionship. We believe we get and give love. And the machine is not going to give us any love. But unlike a cat, it won't run away and it actually talks to us. So I like it. Yeah. I like it.
But I'm allergic to cats. I like them. But I wish I wasn't allergic to them because I could never have one.
You also added a story here. But when the decision came with the kids about what pet to get, my wife, she's the cat person, went in on a really cold, miserable, rainy, horrible morning, woke the kids up, and said: if you had a dog, you'd have to walk it now and pick up its poop. So we have a cat. Oh, wow. Yeah.
Unfair. That's an effective strategy. Very, very. That probably would have worked with our kids who promised us, swore up and down, they'd help us walk the dogs. And it's like pulling teeth to get them to actually do it. I need a robot.
There we go. I put this next story in because I thought it was oddly related. And my first reaction, I will confess, was that I was a little creeped out.
A startup called Empathy raised $47 million, per an Axios story. It's an application that provides bereavement support, giving life insurance companies and employers some way to help employees deal with death. On the one hand, I thought, ehh. On the other hand, I thought, well, that's kind of copping out on the companies' part. But it provides things like help writing an obituary. Okay, I get that. Grief counseling.
Well, we go to search engines and ask questions. So maybe it's just a little better than that. So my first reaction to this was like the reaction of some people to the doll. And maybe it's still bad.
Maybe it's people trying to exploit pain for venture capital money. I can still make that judgment as well. But we'll see. We'll see how it goes. You and I both lost our fathers in the last year or so. What was your first reaction to this story?
Well, I will admit, you added it a little bit later. So I'm only just now kind of thinking about the service that this company provides. Is it manipulative or taking advantage of someone when they're in a bad position? I suppose I could understand to a degree why someone might feel that way. But at the same time, there are services that people who are in that position need help with. And someone's got to do it.
I mean, it's not like that need doesn't exist. And yes, I've been in the position to write an obituary. I wrote the obituary for my father.
And I feel like I wrote a pretty damn good one. But yes, that is a really intense emotional moment that I certainly needed help from people with. And so if I had known it was possible to turn to something like this that would help me through it? Absolutely. If I knew it was going to give me the answers I was looking for, yeah, I see nothing wrong with that.
All right. So, finally: AI from the grave.
Well, yeah, I suppose that's true. I guess that is the next step. Those two stories together feel a little weird. But digital Marilyn Monroe is apparently a thing. This debuted at South by Southwest, that event you didn't know was happening: giving Marilyn Monroe a digital voice by tapping into ChatGPT 3.5. There's a company, Soul Machines, that created the technology powering, quote, "biological AI-powered digital people," end quote, in collaboration with Authentic Brands Group, which represents many notable icons.
So Marilyn Monroe, Elvis Presley, Muhammad Ali, and more, which just tells me to expect more like this, which I kind of already figured was going to be done at some point. And it feels a little exploitative.
Yeah, it's like going to a Broadway tribute show for a band. Yes. Or, for that matter, Elvis impersonators. By the way, there are other Jeff Jarvises on earth. One is a doctor. One is a tour guide. There is also a Jeff Jarvis who was an Elvis impersonator. Yes. Yes. Not me. I swear it's not me.
So I've never seen you both in the same room at the same time.
That's right. So it's just tacky. And I don't think it's going to take off or anything. But yeah.
Yeah, I can't imagine. Like, what do you get out of talking to a Marilyn Monroe chatbot? You know what I mean? It seems like a tech demo. Exactly.
Like, okay, cool. I chatted with Marilyn Monroe chatbot. Yeah. Yeah.
It's just words on a screen that, I suppose, may or may not even be the kinds of words that would come out of her mouth. I think it's like ELIZA, but with Marilyn Monroe's name on it. You know, I don't know.
It is weird, but I do think we're going to see more of this sort of stuff. And who knows? Maybe at some point that technology merges with the amazing metaverse that we were promised, or with robotics that infuse this stuff, and we get to a point where you go into a wax museum and, instead of seeing a still wax figure, you see an actual Marilyn Monroe and can converse with her.
That might be an interesting experience. I feel like we're a ways from that. And I don't feel like that's any less tacky. Yes, I agree.
So who knows what we'll see down the line. See, these are all the great things you have to look forward to by watching and listening to AI Inside every single week: we will follow the development of chatbots in the form of deceased actors and actresses.
Jeff, thank you so much for hopping on today. On your marathon day of podcasting, you do This Week in Google, of course, for twit.tv. What else do you want to plug?
Oh, just gutenbergparenthesis.com. You can get discounts for both The Gutenberg Parenthesis and Magazine, my books out now. Thank you very much.
Excellent. Always fun getting the chance to talk AI with you, Jeff. Thank you. Appreciate you. Yeah. And you can find me at yellowgoldstudios.com; that just links you to the YouTube channel where this video podcast is hosted. I'm also doing different technology reviews and stuff there, really just playing around with formats and seeing what's effective.
So, yellowgoldstudios.com. I also do Android Faithful, a podcast with my friends from the All About Android crew back in the day.
So, androidfaithful.com. This show, AI Inside, publishes every Wednesday. We record it live every Wednesday, so if you are subscribed to the YouTube channel at yellowgoldstudios.com, you can actually watch us record live, in real time. But you're probably subscribed to the podcast, and that's what we hope at the end of the day: that you're subscribed in some way, shape, or form. So just go to aiinside.show and make sure you're subscribed to the podcast there. If you want to support us directly, of course, like I said earlier, you can do that at patreon.com/aiinsideshow, all one word. And finally, you can find us on all the major socials; just search for @AIInsideshow, all one word, and you will probably find us.
We're posting some shorter clips from the podcast, playing around with some formats and stuff to get more eyeballs and attention on the show. We could not do it without you. Thank you so much for watching and listening each and every week. We appreciate you. We'll see you next time on AI Inside. Bye everybody.