Jason Howell and Jeff Jarvis discuss Adobe's AI ethics initiatives with Grace Yee, Meta's new Frontier AI Framework, Google's removal of weapons restrictions from AI principles, and OpenAI's new Deep Research tool.
Support the show on Patreon! http://patreon.com/aiinsideshow
Subscribe to the new YouTube channel! http://www.youtube.com/@aiinsideshow
Note: Time codes subject to change depending on dynamic ad insertion by the distributor.
0:01:43 - INTERVIEW with Grace Yee, Senior Director of Ethical Innovation at Adobe
Adobe’s core AI ethics principles:Accountability, Responsibility, Transparency
Licensed Adobe Stock content and public domain material, filtered for harmful/copyrighted data
Rigorous harm testing, iterative improvements, and internal beta testing for AI features
Collaborative responsibility across AI layers
AI as a tool to reduce manual tasks while preserving human creativity
Importance of human oversight, verifying outputs, and using AI as an ideation tool, not a final product
Balancing guardrails with user context and licensed training data
Collaboration with policymakers, monitoring regulations (EU AI Act), and advocating harmonized standards
Adobe’s participation in EU AI Code of Practice and international regulatory harmonization efforts
0:26:13 - Meta says it may stop development of AI systems it deems too risky
0:31:09 - Google removes pledge to not use AI for weapons from website
0:34:50 - OpenAI Unveils A.I. Tool That Can Do Research Online
0:39:47 - OpenAI to release new artificial intelligence model for free
0:41:05 - Gemini 2.0 is now available to everyone
0:44:14 - Elon Musk Ally Tells Staff ‘AI-First’ Is the Future of Key Government Agency
0:46:33 - Josh Hawley: DeepSeek users in US could face million-dollar fine and prison time under bill
0:49:37 - Anthropic claims new AI security method blocks 95% of jailbreaks, invites red teamers to try
0:52:22 - AI systems could be ‘caused to suffer’ if consciousness achieved, says research
0:55:15 - AI ‘godfather’ predicts another revolution in the tech in next five years
Learn more about your ad choices. Visit megaphone.fm/adchoices
Well, hello, everybody, and welcome to another episode of AI Inside, the podcast where we take a look at the AI that is layered throughout so much of the world of technology. I am one of your hosts, Jason Howell, joined as always by my friend, Jeff Jarvis. Hey, boss. Good to see you. Good to see you, sir.
Yes. Very good to see you. I'm really excited for today's show. It's been a a while in the making and the crafting. Before we get there, just a real quick shout out, quick thank you to our patrons who enable us to do this show each and every week.
That's patreon.com/AIinsideshow. Svein Jonny is one of our patrons supporting our work here. So, Svein, thank you so much. And everyone else who contributes, we really do appreciate your efforts, behind the scenes. And, of course, if you wanna subscribe, aiinside.show.
Don't forget to do that, then you won't miss an episode like today. If you happen to miss it live, you'd still get it in podcast form. Today, I wanna welcome to the top of the show. We have, this guest for probably the next twenty five minutes or so, Grace Yee, who is the senior director of ethical innovation at Adobe. Grace, it is a pleasure and honor to have you on the show today.
Thank you for being with us. It's a pleasure and honor to be on the show with the both of you. I'm so excited to, have this conversation. Yeah. I mean, you know, we're we're AI Inside for Jeff and I, as we've talked about on the show many times, is, it's it's something that we do because we're constantly learning about artificial intelligence.
We aren't, you know, we aren't of the mind, or at least I'm not of the mind to know how to get into the models and pull the levers and and do all the really nerdy stuff, but I wanna know more about it. And the topic that really comes up time and time again on this show has to do with ethics in AI and the build out of that. As a creator, I use Adobe products all the time, so I'm very familiar with, you know, interacting with the tools with Photoshop and Premiere and how AI is integrated into the toolset. So that makes me particularly excited to, get the chance to talk to you about kind of the development of the AI ethics committee, which seems like a really great place to start. This happened, at least began, I believe, in 2019, and a lot has changed when it comes to artificial intelligence development over the last five or six years, when you started that.
So I'm I'm just kinda curious from that perspective. Like, you you you've been pretty integral integral to this. How how how has that developed, especially in recent years as things have really kind of, snowballed in AI? So Jason, you know, as you, you know, in said, like, we started, we kind of started from the from the foundation. Right?
So really having a set of AI ethics principles that we could kind of ground the development and deployment of our AI features. And that started, I think, it's six years ago now. Incredibly. And we put together an AI ethics committee, which, you know, is a group of cross functional, employees from Adobe. So we had people from our marketing organization, our legal organization, our development organization, because we really wanted to have that diverse perspective to come up with a set of principles that were simple, concise, practical, and the most importantly, like, stand the test of time.
So, you know, we started out when we put it together, it was just AI. And today, it's generative AI. And I am so proud to say that it is the same set of principles that we created, back then. And, you know, our principles are accountability, responsibility, and transparency. Accountability is really around making sure we have feedback mechanisms in our products to allow our customers to report concerns and for us to address those concerns.
Responsibility is really around a process in place that we can evaluate the AI before it's deployed. And then transparency is really allowing letting our customers know how we're using AI in our products. And so, you know, I think there's pieces of it that's changed, but kind of, like, those three principles that we, you know, hold our AI to has stayed the same. You know, obviously, with generative AI, it can there's a lot more possibilities. And so the you know, we have an ethics assessment that, or impact assessment that teams fill out before they kind of start our go through our, AI ethics review process.
And that assessment has changed, you know, between AI and generative AI. But the process of identifying harms, evaluating for harms, which is really around testing and measuring, and then mitigating those harms. And then in some cases, going back and retesting and remeasuring, that has stayed the same through this six year process that we've been in. Grace, one thing that fascinates me as we as as Jason and I learned on the show is there's kind of a developing matrix of responsibility. And having written about the history of print, I go back to how the printer was responsible or the publisher was responsible or the author was responsible.
And in the AI world, there's the foundation model level, there's the application layer, and then there's a user layer. And it seems to me that that Adobe is is very much in the middle there of enabling applications for users. So my question is this, what can you be responsible for and what can you possibly not be responsible for? That is to say that Adobe can't possibly anticipate every use that every user could ever put your tools to and thus prevent anything that anyone may ever consider to be bad. Right?
That's that's just impossible. However, in the kind of PR and policy world, you stand the risk of some reason. Well, they used Adobe for that. So how do you how do you separate out what you as Adobe can legitimately and properly be responsible for and what you just simply cannot control on either end from the from the foundation end or from the user end? I mean, it is it's really everyone's responsibility when we use generative AI.
So it starts for us, You know, we think about how we train our models. And for our generative AI, products and models, we use licensed content, from our Adobe stock collection. We also use public domain content whose copyright had expired, so we start there. You know, we think about before we take that content from Adobe Stock, and put it into training our models, it goes through a content moderation process to make sure that harmful content is filtered out or, you know, copyrighted material is also filtered out before we go and train that. And then when we actually take those models and we integrate them into our AI features that are part of our products.
Like, we really sit down and we think about, you know, the technology itself, how it's being used to enable a specific capability. And then we think through, like, who's the audience? Like, what are they doing with this? And we think through, like, you know, obviously, like, the intentional harms, which is, you know, a bad actor is trying to do something bad to the system, and we try to mitigate those. We also think about it from, you know, if someone's just using our models and they're trying to create something and they put in some kind of put in an innocuous prompt, like, we wanna make sure that our model doesn't unintentionally generate harm.
So it's all of it. And I think, you know, we've always said, like, we would never we'll never get to this zero percentage. Like, if all harm has been mitigated, pulled out of the system. And so I think it goes back to our accountability principle, making sure that we have a feedback mechanism in place so that when our customers are using these AI features within our products and they encounter something, they can easily report it to us. And then we look at that feedback, and then we take that and try to make the model better.
Because it really is this community of people that we need even though, you know, for all of our AI features, we do a lot of internal beta testing. So we ask our diverse, employee population to help test it to kind of, like, understand what harms they're seeing, and we we continue to mitigate that before it goes to production. But that's still not enough. Like, people can use AI in this these most imaginative ways that we've never so we need everybody to help. And that's why our feedback mechanism is really important.
And then on the other side, you know, for us, responsible innovation is not only about how we develop our products, but it's also about how our products, like, can impact the world. And so, you know, we back in again, in 02/2019, it's a pivotal year for us. We cofounded the content authenticity initiative, which is now today a global coalition across industries working together to try to bring more transparency to digital content, and really allowing, like, people our creators to show their work. And we do this through a technology called content credentials, which creators can go and choose the to put it, put content credentials against their content so people can see, like, you know, how the content came to be, what has happened to it along the way, and they can use that information to make their own decision about, you know, whether or not to trust it. And, you know, we think about it like it's like a nutrition label for content, which is similar to, like, a nutrition label for, you know, food.
Yeah. When you when you talk about, community, I think that's a real key component of what what I think of when I think of, Adobe's products because they're so I mean, you know, and and probably because I'm so close to it because I am a big time fan and user of the Creative Cloud, toolset. So I'm using Photoshop. I'm using Premiere. I'm using all these tools, interacting with, you know, the AI tools that are integrated in there.
And I think what occurs to me around that community is what we've seen, especially in the last handful of years, last couple of years, is that from a creativity perspective and Adobe's tools are very, you know, much appealing to the creative community, is that there's the people who are super creative, who have talent and ability of using these softwares and and services to do what they do, squared up against the fact that, the generative AI tools are coming into the tools they're already using. And I think certain parts of that community feel a little threatened by that. I'm I'm really curious to know how Adobe kinda squares up that that the inclusion of those kinds of features in the tools, against the very creators that sometimes feel like, oh, wait a minute. Those things are making my efforts obsolete, which by the way, I don't I don't agree with. I think that, you know, overall, it's just bringing more tools to my palette.
And, yay, I mean, I can do more things with that. But not everybody, you know, feels the way that I do about that. I'm curious to know what you think about that. Yeah. Hey.
Adobe has always been creator focused. So you think about our products like Photoshop and Premiere Pro and Illustrator. Those products have existed before AI, before gen before generative AI. And it's always been about, you know, allowing our creators to express themselves. And, you know, I think creativity is a very human, characteristic.
And so, you know, we had those tools. And then when AI, generative AI came on, what we're doing is we're integrating those technologies into the features into our products to make it easier for our creators to express themselves. So, you know, prior to generative AI, you know, creators in Photoshop would have to, like, painstakingly kind of, you know, draw a line around the object or person that they're trying to remove, and you'd be very precise. And today I I I know that pain very, very well. I do that every day, and the tools have made it so easy now.
Yeah. I mean, today, it's like you can just draw this, like, you know, circle and you it's a very rough circle. And then, you know, you can prompt it and it'll, like, remove it and or generate something else in place of it, you know, and that's our generative fill feature. And it's one of our most popular generative AI features. So it's really about making it easier.
So and it frees up their time to be more creative. Yeah. I I I agree. I've I've argued since the center of AI came out that that it potentially extends literacy in various definitions to people. I can't draw.
Now you can help me draw. Other people don't like to write. It can help them write. Right? So I'm I'm I have a course on the books for Stony Brook University to teach next fall in AI and creativity, an undergraduate entry level course.
And, it'll be interesting to for students to explore the definitions of creativity through this. But I also want to be able to teach them the ethics of creativity and the use of tools. And so you've talked about it from the Adobe end and from the acquisition of content end and now from the creator and in terms of the professionals and how you deal with them. What about the beginner? What about the student coming to this for the first time?
What are the ethical issues that I, as a teacher, should worry about in teaching them? Yeah. I think it's really around, you know, I think AI is an enabler. Right? And so I think when you use it, you really have to think about it's really that human oversight.
I think there's it's so important for humans to be involved in the loop. You know, we've seen those stories where, you know, people are just when they prompt the AI and it drain generates something and then they just take it and they publish it, they run into issues with it. So I think it's really using it as maybe inspiration or as a helper. And then the output that it generates, like, you need to look at it and say, okay. Is this, you know, is the output accurate?
Is it true? Is it reflecting what I want to say? So you really take the ownership and the responsibility of it versus just saying, this is what it generated, and this is why I'm gonna go and publish, before you know, you against my own name. Yeah. There's certainly different levels of people and and different approaches as far as that's concerned.
Some people do see these tools as a, well, I guess I don't need to do anything, and other people see it as you know, I think actually, the analogy that I remember that I always think of is friend of ours, Jeff, Mike Elgin, was on the show one time and talked about the analogy of, like, the moving sidewalk. Like, there's people that use artificial intelligence. Like, they stand on a moving sidewalk, and they say, I don't have to exert as much effort to get from point a to point b. And then there's the people that get on the standings on a moving sidewalk, and they walk on it because now I'm getting there faster, more efficiently to the other side. And, yeah, they really are different, different perspectives on how these tools are run.
Do you do you ever worry? Do you ever I I'm sure as a company, you're concerned about this to some degree, but I'm curious to know what what you have to say about kind of the idea of, like, pushback, from, you know, that kind of knee jerk reaction that some people have to artificial intelligence. They hear AI. They immediately tune out. They immediately say that's that's bad technology.
You know, that's that's harmful. It's hurting. Whatever. And how does that impact, that potential pushback impact kind of the the, I don't know, the brand equity that Adobe has built up, over such a long period of time? So I think it it goes back to we've always been creator focused, at Adobe, for in our products, and in our features.
And I think it's important to recognize that, like, AI isn't gonna go away. And so it you know, that's why it's important to kind of really approach this technology responsibly and make sure that it's developed and deployed responsibility and, you know, respecting our creators' rights. And so, you know, we like, I've you know, earlier, I was talking about, you know, we developed our AI features with that in mind. Right? We looked at making sure we're developing our generative AI features with licensed content, or from and content from Adobe stock collection so that, you know, creators can use this and feel good about it and not feel like, you know, we're their work is being infringed upon.
And so I think it could just goes back to, you know, really thinking about it ethically from the beginning, from how you retrain the AI to how we test it and thinking through the harms and then through the deployment of it and then, you know, going back to we we continually need to get that feedback to make it better. You know, an example that we had when we first launched, Adobe Firefly, our text image model, we were really conservative, and we were really concerned about people generating images of people shooting other people or people shooting, you know, or kids shooting each other. And so on the first thing we did was we blocked the word shooting. And and then we realized that actually, like, that was maybe too aggressive of a move. Like, because the word shooting can be used in the context of, like, I'm shooting a basketball or I'm shooting film.
A photographer. Right. Exactly. And so I think it's just like the continual learning and the context is important. And I can tell you that the feedback that we've received from all of our our features have really taught us a lot about how our customers are using it and how we really needed to sometimes revisit some of the guardrails that we've put into place.
Grace, I'm so glad you're here because there's just so much your field is so dynamic and changing constantly. In just this week, the news around ethics and AI, and policy. Meta says it put out a statement about the not developing certain models. Google took away its statement about not working with weapons. The copyright office said that creations, as long as humans are involved, can be copyrighted.
The EU has, just went into effect with its dangerous AI rules. What I'm curious about is how you follow and keep up on all of this. Is there much connection among the technology companies as you as you learn how to how to deal with these, issues that come up all anew? Are there other specific conferences for AI and ethics people to go to yet? Are there certain academics you think are really doing good work?
How do you stay in front of this? You know, it is a group effort. So we, you know, we have some amazing teams at Adobe that we work with. We have a policy team that kinda keeps an eye out on all of the different regulations that are happening, and forming around the world. There's conferences, as you stated.
There's academic papers. You know, a lot of it is news that, we read every day. So I think it's just information coming in from all these, different sources and then distilling them and thinking about, you know, which ones are make sense, are applicable for the type of AI that Adobe is creating. And so it it goes back to there's some of these amazing powerful models out there, but it's really around, like, how are we taking that model, and how we're using it to enable a specific capability within our products. So going back to that and because we really want to I think the important thing is we really wanna mitigate those practical real harms that our customers, could encounter and thinking about that, during the development and the, process and getting those mitigated so that when those features appear in our products and our customers are using them, they're it's, you know, it's safer for them.
And then, you know, again, just like continuing to learn, through our community, and making our models better from that perspective. So on LinkedIn, one of our our viewers, Wanda Jacobson, thank you, Wanda, asks whether, Adobe is participating in any AI global ethical standard conversations and discussions. Are those going on, or is it is it is it possible to imagine global standards, or is it so specific to each company and its applications? You know, we are participating in conversations all over the world, with governments around laws and, legislation. We are active in, the EU community around the AI code of practice.
And I think the important thing is really around the harmonization of all this. I think that will help, especially for global companies, selling, AI products all around the world. I think that'll just make it allow companies to be more innovative, but also being able to develop, like, safer products for everyone to use. Thank you. Jason, you're muted.
My apologies. The red button was not on. We have a lot of people watching the live stream, who are fans of AI, including And Adobe. Rosner design who says Adobe Photoshop AI is the best. It's amazing.
They go it off in the comments. And that's why that's part of the reason why we jumped at the chance to bring you on. I feel like when it comes to a lot of the artificial intelligence products that are out there, some of them, I feel like, have to really kind of jump through hoops to kinda prove what they're useful for to a wide range of people, and, some do better job at that than others. But I feel like Adobe really does, bring because when I think when I think about technology enabling people to do even more interesting and better things in what they do on a daily basis, you know, Photoshop minus all of the artificial intelligence has always been kind of a good example of that. Just got this new tool integrated into Photoshop, and now it makes that thing that used to take you twenty minutes.
It takes, like, five seconds. And no one saw a problem with that. But then so you know? And and now with AI, it's it's doing that on a wider scale. And, if you kind of give you know, buy into that toolset and those capabilities, it really does improve things.
I think it has the potential to really impact, you know, professionals' lives and how they work. And I think you guys are doing a great job. Thank you. I I I completely agree. I think it really is an enabler.
It helps, you know, I think and especially in our creative products, it's helping our creators be you know, really kind of think through, like, their their creativity and how to express it versus trying to worry about, you know, like we were saying earlier, trying to create that perfect outline around that person or object that you're trying to remove. So you can do it really quickly. It allows you to iterate. You're like, I'm gonna get rid of this particular object and replace it with something else. Oh, no.
It doesn't look that's not exactly what I want. So here's another idea. You know, I use it a lot to just ideate, and I'm not creative at all. And just, like, just seeing what other people have done, and I'm like, oh, this is something that I'm that I think is what I'm trying to do, but then I can go and use that as a foundation. And then, you know, and then just continue to build on it until I get to something I'm like, oh, I actually can do this.
And I Mhmm. And the amazing thing is, you know, I get to be I get to create art or, an image or a picture that I feel really good about. Mhmm. Mhmm. Yeah.
Yeah. It it's an enabler. I like the way you put that. Grace, it really a wonderful, pleasure to get you on today. Thank you for carving out some time for Jeff and I and and the AI Inside audience.
Grace, of course, Grace Yee, senior director of Ethical Innovation at Adobe. It's great to have you here. Thank you, Grace. Great to have you. It's great to have this.
Thank you so much. And, Jack. We'll look for we'll look for opportunities as Adobe kind of broadens the scope on what y'all are doing in AI. We'll, we'll reach back out and have you back. Alright.
Thank you again. Thanks, Scott. We'll talk to you soon. Bye bye. Fantastic.
And, yeah, people in, in the livestream chatter are kinda going off right now. Welcome, everybody. Good to see you all. Thank you. Daniel, I agree.
Great interview. After the quick break, we are going to talk a little bit about what you kind of you brought up, during the interview there. The, you know, Meta's moves around its frontier AI framework, Google's removal of pretty important text from its own public AI principles. We have a lot more coming up, so stay tuned. Alright.
When I was, when, you know, you had put in a lot of really wonderful stories, and so I had some great stuff to pick from. And we just have we have a little more than we would normally put in, but that was just because, oh my goodness, how do you even pick? This has been a big week in AI. It's a big news week. Yeah.
Yeah. Big news week. There's there's a lot happening in the world if you don't if you haven't been paying attention and much of it impacting directly the development of AI. So let's dive into it. Meta recently vowed to create open source AGI for the masses.
And a few short weeks later, Meta is developing the Frontier AI framework to tame risks within that development. It's a multi risk system. So, you know, high risk would be cybersecurity breaches, chemical or biological attacks, limited internal access, restricting release, that sort of stuff. Critical risk would be these possibly catastrophic events, which would lead to, you know, absolutely halted development, ensuring, control over leaks and everything. And, I think that's probably I don't know.
I'm curious to know what you think about this, but, you know, from the from the open source model perspective, which we know that meta operates, this is kind of the thing that some people point to and go, this is why you don't need open source because it enables all these things. And, you know, maybe it's enabled anyways without it, but I I don't know. What what are your thoughts? Yeah. It's it's I I've said a long time.
I think it's a fool's errand to think that you can anticipate every bad use, as I said a minute ago, and thus prevent it. And open source opens up more competition, more transparency. Granted, let's caveat again. It's open ish, not open source fully. Yeah.
But but open open development allows universities to work with it and small companies and start ups to work with it. And so I think that it's well worth it. And and the presumption that it negates the guardrails presumes that the guardrails could be effective anyway. And so Met is out here saying, well, well, we're gonna we're gonna anticipate and evaluate and mitigate and decide and get rid of things in their in their little charts. I'm fine.
They they should, but we shouldn't fool ourselves to think that this is foolproof, that there that any possible bad use, could be, eliminated. It can't be. It just simply can't be. And and I I think that's okay. That's what we have to deal with.
Well, the problems here are not gonna be made by the machines. The problems are gonna be made by, bad actors. And, yes, I know folks this gets to a guns don't kill people. People could kill people or people who all that whole argument. Right.
Yes. It's a tool. It's a tool that can be misused. And yes, you might wanna consider some limitations, but it's a general tool, not a not a tool used to cause damage in bodies. It's a general tool looking to all kinds of good things and a few bad things.
And and that means we're gonna have to deal with the possibility of the bad things to get all the good things. Yeah. And so I'm glad Matt is doing this. This is fine. But we shouldn't think that this is gonna be some, cure all or that they can be held to account for everything that is done with their especially open tools once they're out in the world and down the line because they can't control them anymore.
You just can't. Yeah. I think for me, that's that's the important takeaway that I have is is to ensure that like, I know for me, I appreciate that companies are doing this, and I have a measured kind of understanding Well said. Around what it actually means in the grand scheme of things. It doesn't you know, even if it does promise, it will not deliver on eradicating or eliminating any possible bad thing that happens.
But I want them to do that because I think it's better to have that than to not have that. Yeah. Yeah. Yeah. And the the the problem, Jason, I think is gonna be that, I want in in my book, The Web We Weave, I argue that we all should be making covenants of mutual obligation to be held to account for what we do.
And and I believe that strongly that from the user end up through the highest technology end and government as well and media. The problem is that once you make some warrant, you then become liable for it and you become, a liable when it comes to, let's say, the Federal Trade Commission. It doesn't tell you how to make toothpaste, but if you said your toothpaste would make your, teeth, shine brighter than a light bulb and attract, mates for you in every corner, then you could they can be taken to court for, not meeting that. Not delivering. Exactly.
And so I I can recognize the policy and and legal and these companies are cautious. In the one hand, they wanna say we're gonna try our best, but they're gonna be careful how they say it so that they don't get held to account for something they couldn't be held accountable for. Or, perhaps they should say more and should be more accountable, but all that fear holds them back from doing so. So it's a it's a dance. Yeah.
Indeed. Definitely a dance. Meanwhile, also part of the dance is Google, as you had mentioned earlier, removing important text, I would say, from its AI principles that had pledged to avoid using its AI tech for harmful applications, things like weapons. And, according to to Bloomberg, let's see here. It was a sec, it was a section called applications we will not pursue.
That was apparently there a week ago. Now it's disappeared. And, Google actually pointed TechCrunch to a new blog post on its site, Responsible AI, our 2024 report and ongoing work, where it notes Google's belief that, quote, companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security. But, yeah, still the removal of the text. I mean, the thing that that came to mind with me is, you know and it's the easy it's the easy punching bag response response to something like this when it comes to Google, which is the, you know, don't be evil.
It it's easy to see that and be like, well, that's Define evil. Yeah. Well, that's true. You're right. Well, it'll be interesting to see what the reaction is inside the company because this this came in great measure because a couple years ago, employees protested against project Nimbus, a cloud computing operation with the Israeli government that certainly is on the news right now.
And and as I remember, this was part of the reason I think that Google offloaded its robotics arm because of the fear that the robots can be used as as weaponized drones, grounded or otherwise. And and so the employees caused that to happen. Their protest was so strong. Whether that culture still exists in Google today and whether Google has a culture to listen to protest if it exists will be fascinating to watch. But yeah, I think this is a, an important, change to, to monitor.
Indeed. Yeah. And I, and I think what, what comes to mind as I hear you kind of set those examples that I definitely remember, you know, from not that long ago is has has the world changed? I mean, I know it's changed so quickly, so rapidly, and so, extremely. But have those changes changed certain people's feelings about what that meant then versus what that means now?
In other words, are are you know what I mean? Are are Yeah. People were so disrupted by that idea then. Are we in a different time in a very short period of time where people are less or or have kinda given up? They've either given up or there's so many other things to have to pay attention to right now.
Yes. Exactly. Right? Yeah. And and, you know, I think this is going political in a way that we don't on this show particularly because it's not relevant, though this is what we're talking about right now is political.
Mhmm. But, it's interesting just to look at general in The United States now. There is very little protest. There's very little loud talk. In Germany, hundreds of thousands of people, at a time are coming out weekend after weekend to protest, and they have an election going on.
That that has a fact factor too, but there's huge protests going on there and none here. So is that a change in are we are we are we numbed? Are we are we Yeah. Brought down? Are we have our attitudes changed?
Have our ideas changed? I don't know. Yeah. And how will that impact the the kind of development of of, you know, this this type of approach to AI? And certainly the politics at the top of the search engine.
Numbness, you know. Yes. Exactly. Yep. Yep.
Yeah. Yeah. Interesting. OpenAI released a new research agent tool called Deep Research, the deepest of research, which Which I think, by the way, just just one quick second. I think that's almost a direct answer to DeepSeek, and I think it's called DeepThink, or Deep Reasoning.
And so it's really interesting to see how parallel the two are becoming. Yeah. Indeed. Well and yes. And actually, I think even Altman has, you know, kind of commented in recent weeks needing to up the timetable for their releases and kinda speed things through, I think, as a Competition.
Yep. DeepSeek and all that. That's what competition will do for you. But deep research can essentially go out onto the open web. It can pull back information sources on a topic.
It can create reports on that information. It can also do recursive searching. So it'll search and it'll find kind of breadcrumb trail leading somewhere else, and so that it'll kinda lead down that road to the related searches, and then make sense of all of that information it finds and put it into a report depending on how you tell it to. And right now, this is a, a feature for pro users, pro members, so that's $200 a month. But they do have plans to release this, on the lower tiers.
So plus and teams don't know when, but, you know, they say sometime, down the line. OpenAI is touting this as a research process that might take a human thirty minutes to thirty days, takes deep research only five to thirty minutes depending on the complexity. There you go. Waiting thirty minutes for AI is interesting. Yeah.
Well, I know. And so when I heard that, you know, we're so used to like, oh, just a couple of seconds. But then I hear that, and I wonder if the, you know, a, maybe it actually does take that amount of time. But, b, does that tell me, curious AI minded individual, oh, then this must really be doing some work. Right.
You know what I mean? Right. Right. Right. Right.
This is like those other things that just, you know, funnel out garbage. This is really taking its time because it wants to get it right. You know, maybe that in in incurs more, trust along the way as a result. But, I don't know. I'm curious to use the tool.
I I didn't realize at first that it was $200 a month. It was only for the pro members at first. And And folks, folks, if you want Jason to learn how to use this, you're gonna need a lot more members coming in a lot more members. $300 a month. So okay.
So speaking of Of members. That, let's see here if I can open up my Discord here because Doctor. Do here. I'm I'm trying to vamp while I use the StreamYard interface to open up what I wanna show you. Okay.
So Doctor. Do in the, StreamYard, So if you are a member of Patriot of the Patreon, patreon.com/aiinsideshow, you get access to a discord. And, we're starting to kind of get some movement in, in the discord, which is awesome. Some conversations and stuff. And of course, Doctor.
Do is one of one of the, executive producers of the show, so thank you, doc. But DrDew used this tool because doctor has, access to the 200 a month, plan, I suppose, to do research on me and basically go through my Internet history to pull back, you know, my podcast career and appearance history and to complete an IMDB style table of all the things that I've done. Include as many guest appearances on other shows, not just one Don't forget all your Oscars and Grammys in there, doctor. Exactly. Exactly.
And came back with this, like, really nice like, I I was reading through it. It's multiple pages. You know, this shows Buzz Out Loud. This shows Tech News Today at Twit, all about Android. And there's a number of these pages, you know, all formatted into a nice table.
And I was like, I was actually pretty impressed. I was like, wow, this thing, I mean, yeah, mostly. I mean, I think I, yes, I think if I really scrutinize some of the details, I would find some differences. Some of the dates on these things, I don't think are are purely accurate. But, I mean, Tech News Weekly 2017 to 2023.
Yeah. That's that's spot on. Although Tech News Weekly is still going on, so I don't know why I put the 2023. I guess my departure in Maybe that's what it is. Yeah.
That's what it's referring to. So that would be so anyways, you know, so we don't have access to the tool, but some of our patrons do. And And by the way, folks, yeah, if you ever wanna show us the, the fruits of your AI experimentation, please do let us see it. Yeah. Love it.
Thank you, DrDew. Thank you, DrDew. Great to have you with us and supporting us as you do, as you doctor do.
Sorry, I couldn't help that. Open AI responding quickly to, as we mentioned earlier to the DeepSeek, descent by issuing its, o3-mini, which is its first reasoning model. They're doing that free of charge. So this is another example of how OpenAI is really trying to just kinda, like, push push things out a little bit faster. And, this is the smaller model.
The full o3 model hasn't yet been released. But, yeah. So this is out as well. I haven't I haven't really used it, so I don't really know a whole lot about it. But, is it OpenAI's first reasoning model?
Is that the I think so. I think that's what it is. It's, I think that this is all DeepSeek, like, causing a kind of panic. Yeah. It's deep slash meta causing a panic of having to get stuff out there because otherwise, if you're if they're too much behind paywalls, don't we know in media, they get lost.
Yep. And this is related. This is not on the rundown, but, who was it? Jesse? Jesse, what's your name?
Jesse Scott, told us in the, chat earlier on that the new Gemini two point o model is out and released for everybody today. It's at the bottom of the rundown, Jason. Oh. Yeah. Yeah.
So this is a little breaking news here. And so you have, leapfrog going on with all these models coming out. It's almost impossible and and and free and open, that we don't all have to be Doctor. Do, with the special access. We can try a lot of these things.
And, you know, when I think about teaching students, I've mentioned on the show a couple of times, to be able to have all this available for students for free is just amazing. Mhmm. Yeah. So we'll see what this looks like. It really is.
I I yeah. It makes me wonder what the, what the paid models of two to five years from now, like, what that looks like because so much is happening free. I mean, often with these things, like like with o three Mini, for example, it's not free and clear entirely. Right? Like, you get a certain amount free.
If you're on one of the paid, subscriptions, once the the full model rolls out, then you'll get full unfettered access, essentially. So there are some limitations. But but it is really cool how everybody has access to at least use it and and and start to kind of determine why they use this for that or which ones they have the strong preference for so that they can maybe eventually pay for the one that they're gonna use all the time, whatever that may be. Yeah. It's gonna be really hard to figure out what the market is.
Yeah. So today, by the way, is the day we switch, this week in Google to become intelligent machines over on the Twitch network. And the guests they have on today as a former go to market guy for open AI and he left like a year and a half ago. And I can't imagine what the changes in the sense of what a market is, who's our customer, what will they pay for what? What's the value?
It just changes constantly. Constantly. I mean, even being on the inside, it has to be hard to keep up, let alone on the outside trying to keep up. Well, that's cool. So today's the the day of the the re Of the switch.
The relaunch. I don't know if it's a relaunch, but it's That's a rename. Yeah. Basically, so they can get ads. So, folks, we're not really in competition.
We kinda do things differently in both shows, and this is my my main AI show, and we'll talk about anything over there. But, it'll be interesting to see, what I can learn across, across both. Yeah. Cool. Excellent.
Well, so if people wanna go, to subscribe, do they still go to twit.tv/twig,twig? I don't know. I am or intelligent machines. I don't know. Just do a search for Intelligent Machines.
Yeah. You'll find it. And Twit, and I'm sure But but but I don't wanna I'm not No. Be loyal here. Yes.
Hey. The great thing is that Jason and and and Leo and Lisa are such friends that there's there's really no competition. Leo has made absolutely clear he doesn't wanna compete with this show, but we'll all learn more about AI together. Yeah. I have I have no no qualms whatsoever.
I'm I'm honestly, because I produced Twig for as long as I did behind the scenes. For so long, that show wasn't just about this week. It wasn't just about Google. It was about all sorts of things. So I think the name change and kind of approach is very is kinda long overdue.
It's it's a bad time. It needed it. So, so cool. Congratulations. All sorts of stuff there.
Yep. Yep. Yeah. That'll be interesting. Let us know how that goes, and everybody should check that out this week.
AI coming to Washington, and Thomas Shed, who is the technology transformation services director, also a close ally to Elon Musk is making that happen with a quote AI first strategy according to wired sources. So interesting to me how it was not that long ago that AI was, you know, the, the thing to fear. And yet now suddenly it's AI first. Like, let's just go for it. Let's centralize our databases for AI analysis.
Let's develop AI coding agents for all agencies. Let's automate our in, internal government tasks with AI. Workforce reductions are imminent apparently. All the impacts coming at you, as everything else is right now. But, you know, it's okay if government is behind the times.
These are critical mission things. You're does grandma get her Social Security check? Do we know what products have been recalled? Are we analyzing, a tragic plane crash in in Washington properly? I don't trust ai with those tasks, and I don't think we need to at this point.
Government is behind the times. It should be behind the times. So I think to try to drag it into this future, for the sake of of of technological macho, is gonna be dangerous. Bad things are gonna happen. Yeah.
Well, I that's that's the feeling that I get too is that if you if you rush change everything on a dime the way it kinda seems like it is right now, that, especially with the technology that is still very, you know, kind of uncertain Unreliable. Predictable. Unreliable. Yeah. You don't truly know what you're gonna get.
It just kind of get the feeling that there's a certain contingency of people that are like, yeah, bring it on. Like, okay. We don't know. And we'll see what happens. You know?
It's like, oh my God. We'll see what happens indeed. Yeah. So so so you have on the one hand that that idea that that we've got to change everything and bring ai in and ai as you just said, Jason, it's true. It wasn't that long ago that ai was seen as as risky and dangerous.
Now it's the future. But then as our next story says, it's still seen as risky and dangerous. Yeah. Right. Yeah.
So it's it's like, one hand doesn't know what the other whatever that saying is. You know what I mean? Use DeepSeek in The United States, go to jail, or at least pay fines, one or the other. And, no, this is not law yet. DeepSeek is not named specifically, but it sure is alluded to.
It's a bill that's been filed by senator Josh Hawley to, quote, prohibit United States persons from advancing artificial intelligence capabilities within the People's Republic Of China and for other persons, essentially to prevent the import of technology or IP made in China is another aspect of it, and those violations would result or could result if this were to actually go through. I kinda be surprised if it goes through, but it stays go through. I think it's a Holly does a lot of show off legislation like this of, oh my god. This is terrible, and I'm gonna protect you from it, when sometimes I want protection from him. And, so no.
I think it's I think it's performative. No. What we've seen with deep seated courses, a lot of governments, including the Pentagon, have said that it's not not allowed within Not allowed. Government. I think Australia and New Zealand as TikTok.
Right? Yep. Yep. You know, okay. Fine.
Alright. Though I still think there's use for these things, but we shall see. For sure. It just seems like a similar playbook of, like, oh, well, we did that with TikTok. Now the new boogeyman is DeepSeek, so let's do that with DeepSeek too.
And, yeah, Yeah. Twenty years in prison, potentially. One million dollars per individual. $100,000,000 per business. I mean, that's that's no small yeah.
Those those I mean, you remember back in the day when when, taking MP threes was seen as criminal and people were, you know, your teenager was gonna get you, bankrupted because, he was downloading songs. Well, now your teenager's gonna get you bankrupted and in jail because, she's, downloading DeepSeek. Yeah. Right. Oh, man.
That just reminds me of of when I was a kid, my dad would rent a Betamax machine from the video store. No. He'd rent a VHS from the video store and movies on VHS and then bring it home and connect it to his Betamax that he had connected. And so every weekend, it was like, well, we're just recording new movies for our library. And every time he'd do that, I'd see the FBI warning at the beginning.
And I was too I was too young to know, like, how the heck is the FBI gonna know that you're doing that. But in my mind, I was like, dad's, like, really, like, putting us out on a limb here. I hope we don't get caught. Outlaw dad. I know.
Totally. But I tell you what, when friends came over, they knew we had all the best movies. So they knew that. All on recorded Betamax. Rest in may it rest in peace.
Alright. Let's, take a quick break and then round out the show with a few other news stories. So many great stories to talk about. We've got a few more coming up in a second. Alright.
Anthropic showed off a new defense system. Lots of defense kind of stories today, I suppose. This system is meant to target jailbreak attacks on LLMs. And I guess my understanding there is, you know, we talk about, guardrails all the time. There are always really and Grace actually alluded to this.
There are always really interesting, unique, and unexpected ways that people use these systems that when you're designing them, you don't know that people are gonna go in that direction. And if they do, sometimes these systems respond to them in ways that they are prohibited from doing just because you've worded it differently. You somehow got around it in one way, shape, or form, and I think that's probably always gonna be there to some degree. But Anthropic has created a filter on input and output. It scans for potential attempts to circumvent restrictions.
So people who are, you know, geared or directing them, you know, their efforts at getting around these restrictions. It tested it with a 83 users in, bug bounty program. 10,000 jailbreak prompts were analyzed in the process. The results I mean, you know, it got some results. It dropped, the successful attacks dropped from eighty six percent to 4.4.
So that's a big drop. You know? Yep. Can't expect one you know, 0%. You can't expect that, but that's still something.
We saw about this that, it's anthropic making the public and the users do their their work for them. Okay. That's that's a fairly standard complaint. But what strikes me about this one, Jason, is when I was thinking about it, is, okay, you know what your guard so called guardrails are, and so you can look and see whether or not they were, evaded. The problem is not that.
The problem is the guardrails you didn't anticipate. Yes. And so it's not so much a jailbreak. It's it's a an unanticipated use that's gonna get you in trouble, in PR or policy. And, there's no way to measure that because you don't know what it is.
You don't know what it is until it's happened coming from or what it will be or a Yep. Until it's too late. Until it's too late. Interesting. Yeah.
I mean, but there there are people out there that spend their time. You know? They're it's their funsies is is to try and get these things to do the worst of the worst just to kinda prove, just to kinda show, like, oh, and see, even Gemini is not safe or even, you know, Claude's not safe. You know? So Red teaming is a fine thing to do.
It's it's it's but it's just we can't wall ourselves into thinking that that's gonna solve every problem in humanity. That that's cool. Do it all. Yeah. And then finally, I thought we'd end the show with a few notable names in artificial intelligence and their vision of what the future holds.
We can start off with, Stephen Fry, Demis Hassabis. I still never know how to pronounce his last name. Hassabis, I think. And more than 100 other experts releasing five principles for, responsible research into AI consciousness. So, you know, understanding AI consciousness to prevent mistreatment and suffering, that's one.
Constraints and safeguards, make sure that those are in place, some way, shape, or form, that's two. Gradual progress, moving slowly with expert involvement, so not moving too fast, in other words. Balance transparency with safety, that's the fourth, and then careful communication. So avoiding overconfidence in claims about consciousness, which I feel like, wow, the train has left the station on that one. Acknowledging uncertainties, transparency about the risks, that sort of stuff.
So So, two things here. One, I'm not sure that Demis Hassabis or however you say it, signed this letter. I think the Guardian kind of screwed that up with the wrong photo on. Okay. Because I don't find his name on the on the signatories, just for the record.
Interesting. Okay. Now this was a big eye roll for me. Okay. Because the way the headline of the Guardian puts it is AI systems could be caused to suffer if consciousness is achieved.
You know, there's so much truly human suffering in the world. Hurting software's feelings is the last thing on my mind. Yeah. It's a machine. It's not even a full machine.
It's just a program. It's just an algorithm in there. It has no feelings. It has no consciousness. Yes.
It would be good not to have a culture in which we want to insult the thing because it can't be insulted, but, I think it's ridiculous and it's getting way ahead of ourselves to presume that it has consciousness. Ergo, you better not hurt its feelings. You better be nice to it. And where are you saying thank you to your LLM? You know, I actually do say please.
Yeah. I I because it's reflexive. Oh, I save a ton. But this is absurd. This is just absurd.
And, it's from an organization called Conseum, that was, started in great measure evidently by a guy at an ad agency. I I I don't I don't get this, WPP. So, yeah, I I I I rolled my eyes big time at this. I thought it was kind of ridiculous. But that's why I put it in there so I could roll my eyes here on the show.
You go. I feel like I heard the resistance of your eyeball against your eyelashes when you just rolled it. I could hear it through the microphone. These, these mics are really good. And then another that I thought was kind of unrelated, but, you know, also kinda good to throw in here just from the, you know, from the perspective of big notable names in AI saying things about the future of AI, Meta's Yann LeCun, one of the godfathers of modern AI as he is well known for, says the next major leap for the technology of artificial intelligence will happen by the end of the decade.
Says current systems are too limited. And I think, really, what he's talking about, seems to be about understanding of the world. Exactly. Can't do things like domesticated robots, can't do things like fully automated cars because it has AI currently has a very hard time interacting and understanding the physical world, which is something that us humans, we you know, it comes very naturally because from the day that we're born, we're learning that that language. And for AI, apparently, you know, building systems that understand the world is a big challenge.
It sounds like Lacun is saying, you know oh, and he says, quote, we're not talking about matching the level of humans yet. If we get a system that is as smart as a cat or a rat, that would be a victory. So that's where we're at right now. I'm a big fan of Yann LeCun because I think that he is the among the most realistic here. You don't hear him talking.
Yes. Yeah. He really is. You don't hear him talking about AGI and superintelligence and destroying the Earth and all this stuff. I I think he's tied to reality and what AI can and can't do.
And he he compares it to the intelligence of a cat often, then we can argue whether a cat, a dog, or a rat is the better comparison. But but he's absolutely right here that AI has no sense of reality. That's why you get six fingers. As as I think we said on the show the other day the other week, if a ball if a video shows a ball going off a table, AI doesn't know that it persists in reality. The ball is gone.
There is no ball. It never was. Right? And and so how do you teach AI what a a three year old? This is what John also talks about often.
How does a three year old learn the world? The amount of data that a three year old takes in to get the understanding the three year old has is beyond anything that is taught to an AI. It's a it's a it's a huge task to get AI to the level maybe of a toddler or a cat. Mhmm. And we're not there.
And so I think this is a this is a point of realism. But what I like about what he says here too is that this is talking about what's the next leap we're gonna need Mhmm. Here. And and I think that that's, you know, we have a lot of argument now that it's reasoning, but I think it's I think he's right. I think it's reality.
And unless it can get some sense of it has a has a good sense of our language, It has a good sense of Yeah. A very good sense of language. So so that worked. Can there be training such that it gets that kind of reality? Now we talked about this with, Jensen Wong about the, the the the the, shadow AI existence.
The digital twin. Thank you. Sorry. I forgot. The digital twins created and how they trained, small automobile AI on on thousands upon thousands upon thousands of hours of, video.
And then in turn had to make up more synthetic data to try to give it that sense. The task of training on reality is really, really hard. But, you know, we'll see. I think that that that's a it's a tangible goal, far more tangible than artificial general intelligence. Well and is is what he's alluding to here that, like, in order to get to AGI, this needs to happen.
And so they I don't think he's even putting that on on on his timeline. Yeah. I think that, that he's more realistic than that. I think he's trying to say that if we want because he I think he's put it the right way. If you want a robot or a car to operate properly in our physical world, then it has to have a better understanding of that physical world.
It's straightforward. Full stop. And, and and I think that's that's why I like him so much because that that kind of expression is tangible. Mhmm. Yep.
Totally agree. Totally agree. I like I like hearing that too. But and by the way, this happened because Lacun, was one of seven engineers who were awarded the Queen Elizabeth Prize for engineering, and so he was on stage talking. And he's off to a big conference, in Paris, I think, next week that I wish I could've got invited to because, hey, it's Paris.
There's there's a there's a big AI conference, happening. I think it's next week. Yes. Monday, Tuesday, the French AI Action Summit. Oh.
And so, there'll be a lot of important people there. Some people I know who got invited, and I didn't. And I'm Hopefully, they're listening right now so they could, like Well, let's let's ask Jeff over. So years ago, I went to a, E G 8, sarcosy, in the early days of the Internet. Had, had a whole bunch of people in a huge tent on the tweedlery about the Internet and what's happening, and I got to question sarcosy, and it was it was kind of a blast.
But I think these are the kinds of discussions that that we need to see at this high level. What are they are they being smart? Are they being stupid? We'll see what comes out. Is this gonna be AGI talk or is this gonna be cat talk?
Right. Right. So we shall see. Yeah. We shall.
Well, thank you again to our guest, Grace Yee from Adobe. It was fantastic conversation. Really enjoyed having Grace on and, yeah. Thank thank you, Jeff. Always appreciate Always.
Out with you an hour every week. I love it. Jeffjarvis.com for everything that you've got, book wise especially, The Web We Weave, Gutenberg Parenthesis, Magazine, everything's there. That's the place Thank you, Jason. Including a link to Blue Sky with a little little butterfly.
Just now saw that. Thank you for that. Everything that you all need to know about this show can be found at our web or or a place on the web. I was gonna say web page, and for some reason, that just sounded so articulate. I didn't know it's accurate.
Our web page on the on the worldwide web. Jason, do we have a home page? I've heard that one should have a home page. Page. Oh my goodness.
Yes. How It's w w w or no. It's worse it's worse than you said. It's h t t p, no less than colon slash slash And then is it slash / or backslash? Backslash backslash.
Yeah. Yeah. Right. And we're old enough to remember the backslash. Totally.
aiinside.show. That's really all you need to know. That's the the page, that has all of our information, has a way to subscribe and everything. Big button right here. Just subscribe to it.
Or you can go to, patreon.com/aiinsideshow. Become one of our, our patrons who support us doing the show each and every week, something that we, you know, just highly appreciate. You get ad free shows, you get the discord community that I mentioned earlier, that that is has some really great conversations happening. You get an AI Inside t shirt if you become an executive producer like DrDew, who had sent in that example via the Discord.
Also, Jeffrey Marraccini, WPVM 103.7 in Asheville, North Carolina, and Dante Saint James, our most recent executive producer. So thank you all for enabling us to do this show. We couldn't do it without you. And, yeah. As for everyone, thank you for being here.
We will see you next Wednesday on a I think so. Anyways, I'm looking at my whiteboard calendar. Yes. We are good. We will Jason Jason's had a little talent.
Jason and I both need AI to solve our calendar problems. Oh, jeez. No. I no. Because I figured it out.
It's a whiteboard calendar. That's what works for me. Period. End of story. Thank you again, everybody.
We'll see you next time on AI Inside. Bye, y'all.




