Sal Khan, founder and CEO of Khan Academy and author of the new book "Brave New Words: How AI Will Revolutionize Education (and Why That's a Good Thing)", joins Jason Howell and Jeff Jarvis to talk about the many challenges educators face as AI tools become pervasive and ever more powerful. Sal explores how teachers and administrations are embracing technology to improve the learning environment and make their own jobs easier and more effective.
Become a Patron of AI Inside and support our work directly
INTERVIEW with Sal Khan, founder of Khan Academy
- Sal's upcoming book "Brave New Words: How AI Will Revolutionize Education and Why That's a Good Thing"
- Challenges teachers face with AI pushing against traditional teaching methods
- Legitimate concerns about AI use in classrooms
- AI as a tool, not a replacement for teachers
- Potential of AI to mitigate cheating and provide process insights
- Impact of AI on education and the role of humanities
- Suggestions for educators to use AI in classrooms
- AI as a tutor and writing coach
- AI supporting student assignments and teacher productivity
- Managing teacher resistance and concerns about AI
- Sal's perspective on AI bias and representation
- Ethical considerations for AI in assessment and hiring
NEWS
- Stability CEO resigns from generative AI company
- United Nations adopts U.S.-led resolution to safely develop AI
- Ben Evans: The problem of AI ethics, and laws about AI
- Tennessee Adopts ELVIS Act, Protecting Artists’ Voices From AI Impersonation
- Financial Times tests an AI chatbot trained on decades of its own articles
- OpenAI is pitching Sora to Hollywood
- Sora: first impressions
Hosted on Acast. See acast.com/privacy for more information.
This is AI Inside Episode 10, recorded Wednesday, March 27, 2024, How AI will revolutionize the classroom with Sal Khan. This episode of AI Inside is made possible by our wonderful patrons at patreon.com/aiinsideshow. If you like what you hear, hop on over and support us directly, and thank you for making independent podcasting possible.
Hello, everybody, and welcome to yet another episode of AI Inside. I am one of your hosts, Jason Howell, and I'm super pumped for today's episode. This is an idea that kind of came up a couple of months ago for me, which we can talk about in a few minutes. I love the topic, and I absolutely love our guests.
Before we get there, let's bring onto the show, of course, Jeff Jarvis, the co-host. How are you doing, Jeff? Great.
I'm so eager to talk to Sal Khan. This is, what a great get. Thank you, Jason. Absolutely.
Hey, sometimes you just got to ask, and you shall receive. Real quick, before we get to the interview: thank you to our supporters on Patreon. Joecatskill is this week's famous patron at the top of the show.
If you want to support us, you can go to patreon.com/aiinsideshow, and you too can get your name called out at the top of the show. We could not do this show without you. We could not interview the people that we get the chance to interview without your support.
So thank you very much. Let's waste no more time. Let's get right to it. Let's bring onto the show Sal Khan. You may know Sal if you know anything about Khan Academy, which I think is just a fantastic online education resource platform. Sal is the founder, and also author of the upcoming book, Brave New Words: How AI Will Revolutionize Education (and Why That's a Good Thing). Sal, it is fantastic to have you here today. Thank you. Thanks for having me, Jason and Jeff.
Yeah, it's an absolute pleasure to get the chance to chat with you for a little bit about this. I alluded to it kind of in the setup, and I think maybe I've told the story on the show before, but I'm going to tell it again, because this is the reason why I even considered this in the first place. I help out in my daughter's school from time to time. I go in there helping out with homework and get the chance to kind of see everything going on behind the scenes. When I was there, like a month and a half ago, there was an art teacher who was working with a class, and she was giving an art assignment to the fifth grade class.
One of the boys, one of the fifth grade students, raises his hand. He's like, can we use AI for this assignment? She seemed kind of confounded, a little like, hmm. I don't know if she wasn't prepared for the question or if she was truly considering the request, but ultimately she said, no, you can't use AI in the classroom. But it just really got me thinking: this is a really big moment right now for the institutional norms of education and how they're being challenged in a number of different ways. I guess talk a little bit about that, this challenging position that teachers are really facing right now as AI is pushing up against their practiced methods of teaching course material.
Yeah. I mean, there's real tensions, and I write a lot about this in the book. There are legitimate reasons to be worried about artificial intelligence in the classroom. I mean, you gave an example from an art class just now, but yeah, if you're used to the students fully constructing it on their own and now there's this, it takes a little bit to process it. Obviously there's a lot that's been written about writing and students using these tools, using ChatGPT, and, as I write in the book, it completely makes sense for teachers to not allow that, because these tools were not made for education purposes.
And so if a student uses ChatGPT in a place and, hold on, my dog is. That's okay. We like to. We approve of dogs. Okay. I'm trying to hold my focus with her scratching the door like that.
I get it. But it's completely legitimate for an educator to have a sense of what the student is able to do completely on their own. And for that, you know, I could imagine doing more in-class assignments. Now, on the other side of the argument, we know that these tools, whether it's image creation or whether it's writing, they're going to be part of the future. I encourage all of the employees at Khan Academy: if you're not using these tools in some way, shape or form, you should be, you should be learning how to use them. And so there's also the argument that there should increasingly be, especially in, I would say, high school and college, more and more projects where the teacher, whether it's an art teacher or a writing teacher, says, no, I do want you to use any tool you can find, but I'm going to expect something more ambitious. And then I think there's going to be a middle ground, and there's a whole chapter about this in the book, where teachers are still going to want the traditional take-home project, which right now things like ChatGPT put a lot of questions around. I think there's going to be, and this is what we're working on at Khan Academy, new ways to assign assignments through AIs, but through AIs and applications that are actually built for education, so that the AI knows not to cheat.
The AI can act as a coach and give you feedback as a student, but not do the writing for you. And then maybe most importantly, well, I think the most important is actually the support for the student. But then when the student submits the assignment, the teacher no longer just gets the output, they're going to get the output plus the process. So the AI can tell the teacher, hey, I worked with Jason on this for four hours. He had a little trouble coming up with a thesis statement, or he iterated a little bit with me on this one image, whatever it might be. But I'm confident in Jason's work, it's consistent with the writing that he's done in class. Versus if you were to go to ChatGPT and just copy and paste, it would say, hey, this assignment looks shady. I don't know where it came from.
We didn't work on it together. So I actually think AI can help mitigate the cheating as opposed to just becoming a tool for it. So I'm so delighted to be talking to you because I'm an educator of sorts, only a journalism teacher, but that's, that's all I am. But I'm working on a program about bringing AI and the internet together with the humanities and the social sciences.
And I noticed in the table of contents of your book, which I can't wait to read next month when it comes out, that you talk about the social sciences even more than you talk about STEM. And I'm sure you know that Jensen Huang said about two weeks ago that universities should stop training computer scientists, that now the miracle of AI is that we can all code, we can all do the same thing at once. And I'm curious whether you see, you talk about AI as a tool of education, but also the impact of AI on education. Is this the revenge of the liberal arts major? Can it change how we look at the subject matter of education and higher education? I'd say yes and no. I think at the end of the day, it's always been a both.
I think if you are a humanities major, where you are learning all sorts of valuable skills, or at least practicing and displaying all sorts of valuable skills, if you want to be fully actualized, whether or not it officially says that you're an engineer on your diploma, it's valuable to have that type of thinking: to say, okay, I have some constraints, potentially in the real world; I need to design for those constraints. I need to test it. Something's wrong. How do I debug that?
How do I iterate quickly on it? I think those types of skills are going to be universally valuable. Now, the place that I would take a second look, and this isn't just because of AI, this has been going on for as long as education has been around. Early on, as many know, computer science as a field didn't even exist in the 60s. It was really applied math. But if you were essentially a computer-focused applied math major in, say, the 1960s, you had to learn a lot about hardware and writing in machine language and assembly language, these very low levels of abstraction. When I was a computer science major in the late 1990s, we were learning all these higher-level abstractions, at least at the 1990s level, but we still were learning a little bit of that low-level stuff, and all of us in the late 90s were like, yeah, do we need to? I don't think we're going to program in assembly anymore.
Most of us are not and most of us haven't. It was useful, I would say, to just know what is going on truly at like a processor level, like what is really going on. It does inform you as you're trying to optimize things later, but I think over time things have evolved towards more and more abstraction, more and more high level programming instruction. And I think AI is just the next wave of that. But we will see how good these AIs get. Obviously, there's rumors that GPT-5 is about to come out. But I suspect this engineering thinking and then the ability to debug, the ability to go on multiple layers of abstraction when something isn't working and really do the detective work there, that's going to be a very, very valuable skill for at least the next couple of decades.
But so will the humanities. So the ability for someone to be able to, for my own children, if they're interested, I would say, yeah, get these types of skills, whether or not it has to be in a formal engineering major, but also design skills. But you're also going to need to know how to communicate really well and think philosophically about the world, which is changing at a breakneck speed right now. In your book, what are some of the, you know, spoiler alert here, which I'm sure you get to toward the end, what are some of the suggestions you have for the best ways for educators to use AI in the classroom? Yeah, you know, there's a bunch of use cases. I write a lot about, you know, when I gave the TED talk last year, I talked a lot about the tutoring use case for a student. And we've been learning a lot as we've been putting it out into real classrooms, but I think that's going to continue to be really powerful. And to your point, not just in math, or math and science, I actually think large language models are better at doing this in the humanities.
So I think that's going to be interesting. And on the product side, we at Khan Academy were doing a lot of work on making the tutor more proactive and making sure we can get the math accuracy and other accuracy there. I think, so I think teachers using it in the context of an existing scope and sequence, say on Khan Academy, that is not AI generated, but AI there to support it, I think is going to be really interesting over the next couple of years. I think, you know, we're creating a whole writing coach on Khan Academy. If you asked me three years ago, when would Khan Academy have a writing coach?
I would say maybe in 20 years. Well, it's going to happen by back-to-school. And that's not just if you're a humanities teacher; I can imagine a science teacher asking students to do assignments that way and then getting a report back on not just the output of the assignment, but also on the process of the assignment. You know, I could go on, there's a whole suite of things that we're creating where teachers can now assign talking to, say, a simulation of a literary figure or a historical figure. So I would encourage teachers: have it support your students in traditional tutoring, and start experimenting with ways to use it in more of these avant-garde ways to engage your students in, say, a history lesson or science lesson, where they can talk to Marie Curie or design an experiment. If you're doing writing, I think by this coming back-to-school, for sure I can speak for us at Khan Academy, we're going to have tools where you can assign essays; it won't do it for the student, but it will work with the student, and you can get reports there. And then I would say, very meat and potatoes: teachers can use these tools for their own productivity, for creating lesson plans, for progress reports, for creating rubrics. You know, we've had this, let's call it teaching assistant functionality on Khan Academy, or on Khanmigo, since we launched it over a year ago. But in the next few months, we're going to make this much, much more accessible to a very broad segment of teachers.
Pretty much any teacher in the United States is going to be able to have access to these tools. Yeah, so what comes up for me around this is like this, this is all really fantastic. But I also know that, you know, a lot of teachers out there, they have the process, they have their approach, they know what works. And now this technology kind of comes into view. And I, and I'm sure some teachers are curious and a lot of teachers are resistant, like, I don't know, in your experience, what are you seeing as far as the teacher reaction to kind of the necessity around not just learning how to, how to teach in different ways, because they're probably very used to that. But, you know, technology and especially emerging technology can be unfortunately a really scary thing because it's uncertain, it's unknown. It's this like thing that, oh, well, I couldn't possibly do that.
I don't know enough about it to do it effectively. What, like, how are teachers responding to what you're talking about here as far as that's concerned? Yeah, I mean, on one level, the response has been more positive than I would have predicted, say, a year and a half ago when we started working on this.
There's definitely more openness to it. I think most educators have realized that they can't ignore this technology. At the same time, to your point, there's an early adopter crowd, as there always is, of five or 10% of teachers who have already latched onto it and are already using it in different ways, especially for their own productivity. But for the other 90%, what we're finding when we go into school districts is we're having to do more professional development than we had before.
Even though in some ways the AI is easier to use, you can interface with it more conversationally than a traditional web user interface, things like that. And the first conversation is to just really make sure teachers know that this is not going to replace you. And we say that completely sincerely. We do not believe, and I write about this extensively in Brave New Words, that this is going to replace teachers, but it is going to be a significant teacher's aid. And so when we make that very clear, and then we show very credibly how it can be their aid, we have a school district out here in Northern California that's estimating it could save their teachers about five hours per teacher per week. That's a very exciting narrative, especially coming from the EdTech world. EdTech in the last 10 years, and Khan Academy is guilty of what I'm about to say, we would go to educators and we'd say, look, we have all these efficacy studies. If you learn to use this new thing and modify your school day so that you get to use Khan Academy or whatever tool we're talking about for 20 minutes, three times a week, you're going to have all sorts of learning gains. And most educators, that's what they're in it for. They want to improve their students' learning outcomes.
And so when they hear about it, they're like, oh, yeah, I want to do that. But it is giving them one more thing to learn, one more thing to do, and they're already stretched thin. And what's exciting about these generative AI tools is the headline is, hey, before you even talk about using it with your students, you can use it directly. And yeah, there's going to be a little bit of a learning curve. But as soon as you get down that learning curve, you could be saving five, 10 hours a week. That's a pretty good value proposition for teachers. It's actually going to take stuff off their plate. And then I think they're going to feel more comfortable leveraging it with students. So I went to a meeting at a university talking about this new program I want to start. And it was very exciting that an English teacher in the room, undergraduate level, had come in right after, I'm forgetting, is it Sora? Oh, yeah, yeah. So they had just released the first videos.
The videos we just got are 10 times even more amazing. And she threw out her lesson plan for the day, and she said, okay, we're going to write prompts. And it was wonderful to see how excited she was about how excited the students were about thinking in a new way. And at the end of the day, she ended the lesson.
She could say, see, learning how to express yourself in English is a good thing. And to see this not as a replacement, which you've made very clear that people should not see it as, but to see it as a tool for creativity. AI and creativity is one of the subheads in your book. I wrote a book about the Gutenberg years called The Gutenberg Parenthesis, and it struck me how it took time for the technology and the technologies to fade into the background. And so I wonder when there comes a point with AI where we don't see it, and the internet itself, as this awesome technology, but instead really understand it as a tool, and a tool to put to use as we want. I was heartened when the MLA, the association of, basically, English teachers, issued its guidelines for the use of AI and said, don't be scared of it. And they said that print and autocorrect are just tools to help people write, and the same here. So often, I think, at the beginning of a technology, people think to use it is cheating.
And we heard that obviously at the very beginnings of ChatGPT and LLMs. When does the idea take hold that this is a useful tool to help you express yourself better? One thing I've speculated about is whether LLMs can help extend literacy and our definitions of literacy. What are some of the expansive ways that you see AI expanding our definition of education?
Yeah, and I'll start kind of in reverse order. I'll give a very tangible example that I have directly observed of AI support improving engagement with literacy and writing in ways that would have been hard before. My, well, now nine-year-old, but eight-year-old when he started playing with Khanmigo, we have an activity that is designed for essentially his age, which is write a story with me, where you get to riff with the AI and brainstorm a plot for a story. And then you start, you write a paragraph, and the AI gives you feedback for that.
And then it'll write a sentence or two or a paragraph then you go back and forth. And my eight year old, his stories, one, he didn't consider himself a story writer or a great reader before. It wasn't part of his identity. And the stories that he did write weren't particularly cogent. They were kind of all over the place. They didn't have a good plot line.
And obviously they were written at the writing level that he could write, which wasn't, it was one of those things where as a parent, you'd read the story and you're like, good job, you know, but it wasn't really that good of a story. But you won't show this to your son. Yeah, he can watch it later. But now when he's writing with Khanmigo, it's a real story. And actually it's giving him really powerful coaching on what makes a story, what makes a story arc, how do you introduce characters. And honestly, the story that he's producing is higher than his reading level, or what I thought was his reading level, higher than the level of the books that he will read on his own. And as a parent, I think that's great exposure, because not only is he writing his parts of the story, but the stuff that the AI is writing, he's reading. In fact, he's reading it like an editor. He's reading it very closely. Right, right. And he's getting it much deeper.
And there's no other way I could have gotten him to read at that level. So that's, I think we're going to see more and more of that. At the same time, you know, you mentioned about technologies just becoming seamless. I think that's going to happen with AI faster than actually almost any other technology that's come before it. Because at the end of the day, it is, you can interface with it more and more the way you can interface with another human being. In fact, some of the, some of the training we've really had to do for teachers and students is just making them feel comfortable talking to the AI the way that they would talk to anyone else. They feel like they have to talk to it the way they talk to a Google search or something.
They were like, no, you literally can just say what you want. So over time, you know, right now, Khanmigo is still sprinkled into kind of a traditional web interface and a traditional Khan Academy experience. But over time, the AI is going to take center stage, and it's going to bring things to you. And you're going to be able to interface with it similar to the way, obviously, Alexa and Siri, these are AIs, they're kind of the last generation of AIs.
And for the most part, they don't have a visual input, some do, but the next generation of, I think, all apps, including Khan Academy, it's going to be very much the way we're chatting: you're going to be able to chat with Khan Academy or Khanmigo. And then it's just going to feel, I think, very fluid. And where you do need a UI, let's say a teacher wants to make an assignment or plan a lesson plan, it will feel very much like talking to the Starship Enterprise computer, where you're like, hey, okay, take another stab at this paragraph that I've just highlighted.
And I don't think that's that far away. As I mentioned, some of these writing workflow tools are going to be out by back-to-school. I think in the two or three year timeframe, a lot of these writing teachers are going to do more in-class writing, but even there the AI can support, because it can give reports and narrative reports to the teacher on exactly what's going on. Then you have these workflows where the AI is supporting the student as a writing coach when they're working independently, but once again, it's giving the teachers a lot more fidelity on what's going on, and it can even give preliminary grading, which is going to be exciting and save teachers a lot of time. And then, you know, this teacher that you just mentioned, they're going to realize, hey, I can also encourage my students to do something more ambitious. You can imagine a high school teacher saying, I want you to write a novel.
I want you to write the next, I want you to start something really ambitious, and your class assignment is just: write a novel. Right. Honestly, with these tools, that's not too far out of a possibility for a student. Or make a movie, right?
A screenplay, and then make the movie of it. And you're absolutely right, it's not ridiculous to say that, but at the same time, it's not going to be easy to do. You know, I've been dying to write a science fiction book for a long time. I have ideas. And I've actually started a couple of times, saying, okay, maybe I can use AI to help me. And it's not an easy thing, especially with the tools as they are now. You've got to take parts of it and put it back into the prompt and then bring it back. And you have to really coach the AI, like, no, that character development is just not there, etc.
I'm sure the tools are going to get much better, but it is a really good project. Because, because I've tried to do it and I still haven't been able to pull it off. Yeah, that's one of the things that I've noticed about my own use of AI that really taps into what you're talking about here and potentially developing skills in the students that use them is that it's easy to look at these systems and think, well, I'm, you know, I'm cheating. I'm putting in words and it's creating this thing. But when I look at it through the lens of the skills that I'm learning in order to do that, like there is a certain syntax to everything, you know, that when it comes to computers, but with AI, the syntax really is, as we've talked about many times on this show, often the English language and just speaking clearly and organizing your thoughts and using your creativity and your imagination and your knowledge to essentially be a creative director to a certain degree. And that is a certain skill that is learned. And so I suppose for some people it's inherent, but for other people, like myself, like I have to practice to do that when I interact with some of these tools, I realize I'm flexing that muscle and that's got to be a huge, like a very advantageous skill for students to learn, especially in light of the fact that these tools are going to become more and more powerful and more pervasive in all the things that we're doing online.
100% agree. I mean, I think this is something that, you know, those of us who've had the privilege of, within an organization, either leading an organization or being at a senior level, we have to play that role. That's essentially most of our job, playing that creative director role: being able to put the pieces in place, being able to correct if something is not working the way that it should be working, and sometimes making hard calls on all of that.
Those are the people who get paid the most in most organizations. So that's already there. And a lot of, I write about this in the book, in Brave New Words, the skill of the future is you're going to elevate from being an entry-level coder to being a software architect very fast. You're going to elevate from being a staff-level writer to the editor-in-chief very, very fast. But no one wants an editor-in-chief who can't write as well as the staff writers. No one wants a software architect who can't code as well as the staff coders. And so it's just as important to learn these skills, but now you're going to have to learn this next-level skill. So yes, if you showed me a young person who has made a high-quality, polished app today using AI, because the high-quality polish part, that last 10%, takes a lot of know-how to use these somewhat raw tools to get that polish.
And I don't think even with GPT-5 or GPT-6 you're going to be able to, just with a simple prompt, create a truly polished artifact. I think that I would hire that person. I think that's fascinating. You're presenting a world where the AI helps, I think, people learn leadership and standards. I went to a World Economic Forum event online yesterday, and someone there talked about how the software used to QA us, and now we QA the software. We become the arbiters of quality.
One more question for me, if I may. I think when we talk about learning sets right now in my field of journalism, my colleagues are being jerks, frankly, in wanting to pull things out of the web and out of training sets, New York Times suing open AI and on and on and on and fights over copyright and so on. But what concerns me far more is what's missing in the corpus of digital human knowledge. That's built by the people who had the power to publish in the past.
It's a product of power. I think what we have to do is look at what's missing in this. As you look at these tools, I'm curious whether you've come across things where the bias inherent in the history of all of our text that comes through the models is a problem. What do we need to encourage these companies to do to augment them, to make them better in their essence?
Yeah. Well, the obvious bias that you see is the bias towards English, just because English is the preponderance of the internet and the texts these have been trained on. You see that in the large language models: we run benchmarks on how it's performing in a whole set of languages, and how close a language is to English, or another major language like Spanish, is essentially how well it's going to perform. There's a broader conversation. There's a little bit of tension and an irony where I think most critics of AI, especially of how AIs get trained, and it's usually the same person, will make two statements that are in some ways contradictory. They will say, hey, this thing has scraped the internet, it's trained on the internet, so it's kind of extractive.
It's taking advantage of things that have been written already. Then they'll also say, oh, but it's not representative. It doesn't have a diverse input set. But then you say, well, okay, well, let's get more of that diverse input set. They're like, no, no, no, we're not going to do it unless you pay us a lot of money. Exactly.
Those two statements are in tension with each other. If you really want it to be better at representing certain points of view or certain cultures or dialects, you have to make that available to the AIs to train on. I haven't personally observed it.
I know that the people who make these frontier models, the OpenAIs, Anthropic, Google, Microsoft, they are, as you can imagine, Gemini Ultra had that snafu a couple of weeks ago around some of their image generation. As you can imagine, they try to really stress-test and red-team these things to make sure that there's no apparent bias in them. It's hard to be completely bias-free. I would argue, and I write about this in the book, you actually don't want to be bias-free. There are positive biases you want. You just don't want biases that correlate with things that aren't consistent with our values. I make the argument in the book that I actually think there's a lot of sensitivity about even using AI for things like assessment or for screening resumes. It's legitimate to be thoughtful about that. But I will also say that you shouldn't compare it to perfection. You should compare it to the status quo.
What's the status quo? It's very biased human beings sifting through hundreds of resumes, spending four seconds a pop on them. I'm sure if you audit it, yeah, there's going to be a huge correlation with things that you don't want it to be correlated with. With an AI, you can run 500 test cases on it that are essentially the same resume, but with different names, different genders, different ethnicities, and see if there's a statistical difference in whether it gives a thumbs up or thumbs down to interview that person.
I actually think there's a higher degree of auditability in an AI world than in the humans being the arbiters. That's great. Thank you.
Yeah, absolutely fascinating. Sal, I know we're running up against the time that we have you for. I really want to thank you for taking the time to talk with us today about this topic. I'm thrilled, because when I had the idea a few months ago, I was like, man, that would be an awesome conversation. And it absolutely has been. You are absolutely the best person to bring on for this.
Sal, of course, founder of Khan Academy, author of the upcoming book we mentioned a few times, which is not out yet: Brave New Words, How AI Will Revolutionize Education and Why That's a Good Thing. Looks like the release date is May 14th. Is that right? That's right.
2024. Excellent. Well, thank you so very much, Sal. Thanks, Jason. Thanks, Jeff. Indeed.
What a pleasure. We'll talk with you soon. Best of luck. Thank you. Bye, Sal.
All right. Excellent stuff. Like I've got goose bumps. Yeah, goose bumps because I really appreciate Sal supporting our new little podcast here. Yes. And I'm very, very grateful.
Indeed. And also, as another personal anecdote, I have to say, and I meant to mention this to Sal when he was on, my little fanboy moment: during the pandemic, with my daughters, we had to get creative about finding ways to fill their minds with good information. And my older daughter expressed interest in coding. So I went to Khan Academy, and she worked with it for a while to start learning things. And she really, really enjoyed it. She hasn't stuck with coding much since, but she enjoyed it then.
And that's what it was. No, your daughter at this age should be a dilettante. She'll be trying lots and lots of things, and the fact that she tried coding means she might come back to it. And that's great.
I was thinking as he was talking, sorry to plug my upcoming book, that in it I write a thank-you note to the internet, because imagine what life in the pandemic would have been without it. Totally. And we now see testing coming out about education: students are behind. Well, of course they are. We weathered a storm, and they came through better than they would have otherwise. So let's be grateful for that. Yeah. It makes me wonder what if the tools Sal has up now at Khan Academy, and the tools he anticipates, had been up at the beginning of the pandemic.
How much of a difference would that or would that not have made in the outcomes for students to have had another helpmate? We won't know, but I think the potential is great. Yeah. You know, for the next pandemic we're prepared.
Sorry, I said that out loud. The great thing about Sal, too, is this attitude from the beginning of Khan Academy of education as generosity. I think that's so much the ethic of what he has always done and proven in what he does: education is interesting and fun and possible, and we share it. And to use technology, the internet, and now AI to do that is just wonderful to watch.
So a great get, my partner, my friend, getting Sal on. I'm just honored we got to talk to him. Yeah, absolutely. Me too. Me too. We've got interesting news, of course. The news world with AI does not slow down for Sal Khan as much as we might think it possibly could.
That's coming up in a second. All right, let's take a look at some of the big news items from the week. Emad Mostaque, the CEO of Stability AI, stepped down last week and departed the board entirely as well, which, in combination with Inflection AI CEO Mustafa Suleyman leaving for Microsoft, makes last week a pretty consequential one for leadership in AI. Yeah, I hadn't followed his career very much. The stories after the fact were that people were leaving; it was difficult management.
I think that's often the case: geniuses who take charge of these companies may be good at being geniuses, but not at managing. So I don't know the details here. I also saw a story today, which I shouldn't even mention because it was so anonymous and probably from embittered ex-employees. But I saw a story from Business Insider that said that VCs don't like Sam Altman's attitude. Well, Sam Altman kind of won every power game there was.
So he's in charge of the mountain right now. I think all of these guys who are in charge of these companies who were building smart machines kind of get full of themselves. Yeah. Well, yeah, and especially the rate at which in the last couple of years, the rate at which it kind of accelerated to this moment of, oh, my God, you know, the second coming is artificial intelligence.
It's going to do all these things. And yeah, I often think about this with incredibly followed and popular, for lack of a better word, people right now, like actors and singers. How do you do that and keep a sane view of the world and not get swept up in that public energy? I don't know how you do that.
I don't know that I'd be able to do that. I think that's true. I think some have.
I mean, I think some have for sure. I'd like to think that Matt Mullenweg, who now serves what 35% of all the web has managed to keep his head about him for sure. There's a few out there, but as everybody always points out this at this moment, Steve Jobs was a jerk.
He was a brilliant jerk and a wonderful jerk and we love what he created, but he was a jerk. Yeah. You know, so is that is that an inevitability of genius? I hope not. No, I don't think it is.
I don't think it is. But I could see how that force can be very influential on someone. But as far as Mostaque is concerned, he said the concentration of power in AI is bad for us all.
"I decided to step down to fix this at Stability and elsewhere." He also tweeted, "Not going to beat centralized AI with more centralized AI." So he's definitely pounding the open-source aspects of the future of AI in this move. Yeah, there was a story I came across before we went on the air, but I've lost it now because I looked at something else on my phone.
So I won't find it out. Yeah, an effort to start a coalition of some who want to assure that there is a counterweight to the power of these huge companies. And I think that this is after we talked last week about NVIDIA and the scale at which some of these companies will operate.
Sure. It means that we've got to be very direct and purposeful in trying to counteract that from the open-source world. I think that's going to be vital. Well, this next story follows a little bit of that pathway. The United Nations General Assembly last Thursday unanimously adopted a global resolution on AI.
A non-binding agreement, backed by more than 120 nations, quote, to govern artificial intelligence "rather than let it govern us." That tracks with what Sal was just saying. Yeah. The point of this is that we are now governing the machine, because it will speak our language; we are in a position to govern it more. And I think we've got to recognize that agency that we have and demand it.
Yeah. And I think another thing that's interesting about this as with a lot of kind of the regulation efforts, you know, building up around AI is that they often they say a lot, but they don't seem to set out a whole lot of action. It's like everybody's got to get their PR moment of like, oh, well, we're doing something. And what we're doing is we're talking about it.
Yeah. And I'm going to jump into the next story, because I'm going to mix it in here, Mixmaster-style. It's Benedict Evans, whom I often quote; I think he's a brilliant analyst. He stands back and abstracts moments like this, and he asks, does it really make sense to regulate AI? It would be as if, in the early days of electricity, we said, we've got to regulate electricity. Well, yes, to an extent that happened at the electricity level. But everything good and bad that could occur with it happened at different layers, at the machines where it was used. And we come back to this.
I come back to this all the time on the show: is it the technology, the application, or the user? And it's fine. I think the UN resolution is okay. The EU AI legislation is okay. But I think you're right, Jason. I'm not sure what it's actually going to do. Yeah.
What is the direct impact? Also, you mentioned the Benedict Evans piece, and reading through it, I appreciated his comparison near the end of the article. He compares it to the auto industry.
Yeah. Yes, we regulate different aspects of the auto industry, but it isn't one single government department touching on all aspects of all regulation about it, because it's more complicated, more complex than that. A single regulator is a kind of blunt tool, I suppose. And yet a lot of what we're seeing around AI is one group afraid of all things.
Therefore, you know, attempting to use the blunt tool. Benedict argues that it shouldn't be regulated entirely by one group. There should be multiple groups looking at the different facets that they understand. At the end of the day, do they understand what's important for their domain, and do they exercise their responsibility in it?
Right. I mean, to read from his piece, he wrote: "We don't have one government department, one omnibus law to cover how GM treats its dealers, collision and safety standards, congestion charging in cities, whether the tax code encourages low-density development, what to do about teenage boys drinking and driving too fast, and the security of national oil supplies." So all those things are regulated, all those things are researched and looked at, but from their own domains. And to think that you could have one über piece of legislation or one über resolution and say, okay, we've dealt with it now? It can't do everything.
It abrogates the responsibility at all levels. Yeah. Yeah, indeed. Because AI is going to touch everything, every sector is going to need to define responsible use. That, I think, is the key here: define responsible use and make promises. In my next book,
I call it Covenants of Mutual Obligation. This is what you can expect from me. This is what you hold me to account for as a technology maker, as an application maker, as a user, as a citizen, as a government, as researchers. We all need to be involved in this.
Yeah, 100%. Further down this line is the Tennessee law that... you put this one in here. I am eager to hear you explain it. What do you think of this? I'm sorry, I interrupted. Go ahead and explain it. No, no, no. Not at all.
I just think it's interesting. The Ensuring Likeness, Voice, and Image Security Act. Break that out into an acronym and you've got ELVIS, which I think is hilarious. They had to go there. Yeah. This is Tennessee. Oh, man. All those words spelling Elvis.
That just cracked me up when I read it. Anyway, it was signed into law in the state of Tennessee. It focuses on how a person's likeness can be used, and it includes protections for artists' voices that could be cloned using AI and deepfake technology, as the spooky word goes. It prohibits the use of AI to mimic artists' voices without their permission, which I think is totally reasonable, right? If something is convincingly another person, and that something is saying things that person wouldn't say... I don't know.
I think I don't want my voice saying things I don't want it to say. I don't know. What do you think about that?
In my contrarian way, I was troubled by this. Okay. The Elvis act of 1984 was intended to extend protection past death so that you couldn't just... Got it. Yeah. ...take advantage of Elvis. But it was really about a right of publicity. You couldn't use Elvis for an ad unless the estate approved. It wasn't just about impersonating Elvis, because God knows there's an entire industry. There's an industry. Elvis impersonators.
You're absolutely right. And by the way, one of the Elvis impersonators out there is named Jeff Jarvis. He works in Vegas, and I'm sure he's very good. Not you. You're not saying that you moonlight as an Elvis impersonator.
We have not been in the same room at the same time, but there is an Elvis impersonator named Jeff Jarvis. That's what it is. What troubles me about this? Now what they're doing is going beyond just commercial use to say that any impersonation of someone, their voice and their image and a photograph, is barred. Well, that, once again, is the same problem I have with the New York Times suing OpenAI: it shrinks fair use. And fair use says that we can comment on things.
So around satire, around comment, around simply creating. What about, you know, garage bands that want to be the Rolling Stones? They ain't the Rolling Stones; you could hear that in two strums of a guitar. But shouldn't they have the right to try to be the Rolling Stones? Shouldn't they have the right to just try?
Or imitate them or make fun of them or pay tribute to them, any of those. And so people think, well, now AI is here, and deepfakes, so we've got to stop it. We have to stop thinking at the layer of the technology and start thinking at the layer of the principle. At the layer of the technology, people just say, ban it.
But at the layer of the principle, what else are you cutting off when you cut this off? Even a student... say Sal wants to write a sci-fi novel in the spirit of your favorite sci-fi author. I'm sure he won't do this; he'll do it in his own voice. But shouldn't he be able to do that? We come across this with fan fiction all the time: you're using characters that were created by somebody else.
So I do not want to see... and I have, and please don't tell me; I've not looked at him in years, but there's an impersonator of me on Twitter who's not funny, and I can't stand him, and I don't look at him. And he was using my name in a way that confused people. That's wrong. Right. If you use things in a way to defraud people, to impersonate them, to confuse people about their true nature, to confuse the audience, yes, that's wrong. And the social networks and others should have policies that deal with that. And jerks who do that should be shunned for doing it. But to pass a law saying, no, no, no, nobody can sound like Elvis or Dolly...
Nobody can do a Dolly tribute. I find that troubling in an extension of copyright and the shrinkage of fair use. So I said I was going to be contrarian. And there I am.
No, I love that, and I appreciate it, because as you were setting this scene, I was also reminded that there are a lot of people who sound like other people and who do voice impersonations, and they're incredibly convincing. Again, touching back on a recurring theme on this show: what is the difference between a person doing a really effective Bill Gates impression with their voice and being allowed to do it, versus an AI being able to mimic Bill Gates's voice and not being allowed to do it?
What actually is the difference? The technology is the difference. But in the end, and this is what Benedict Evans would say too, it is the use. Did you put it to good use or bad use? If you did it to pay tribute to Bill Gates, or to have fun at his expense in a parody, that's fair, and that's freedom of expression. If you do it to convince people that they should give fake Bill Gates money for fraudulent purposes...
No. And there are already laws against that. Yeah, 100%. I totally agree with this. Okay, I appreciate you talking about this, because you've kind of reversed me. It's not that I looked at this and thought, yes, heck yeah, I love this thing.
But I often read these things, and the thought that comes up for me is: okay, if I put myself in that position, and there were a Jason Howell voice out there saying incredibly inflammatory things, or telling people to send money to this Bitcoin fund or whatever with my name on it, I would absolutely not want that to be the case. But what you're arguing here is that it isn't necessarily the AI system that creates the duplicate voice that is the problem.
It is the act of using that voice to do something illegal, unethical, whatever the case may be. That is the ultimate issue. So does an act or a bill like this actually address the core issue?
No, it is more of a reaction to the technology. And yeah, I get it. Thank you. Principle and behavior over technology and regulation. Yeah, yeah, yeah. You helped me work through that. Okay, there we go.
This next story. Oh, by the way, just so you know, just so everybody knows, if you're watching the video version on the YouTube channel, YellowgoldStudios, there is your alter ego, Jeff Jarvis, the Elvis impersonator. As far as I can tell, it's not you.
But you know, there's a lot of sideburn going on. Does it say how much he costs? Yeah, $150 per event. $150 per event, starting. Oh, you could get more than that; you look just like him. There you go. I'm so happy I was able to find that. I was like, oh, it's got to be out there. Got to be out there. Next story here. We've just got a few more, and then we'll wrap things up.
And this actually reminds me of the conversation we had with Sven Størmer Thaulow from Schibsted. You've gotten better at saying that. Yes.
It comes off the tongue now. Yes. Yes, indeed.
Yeah, me too. About how Schibsted was creating a utility out of the journalism they had produced and really leaning into that. The Financial Times is putting its archive to work with an AI chatbot, called Ask FT, that allows subscribers to chat about the topics, the events, and the coverage that exist within the Financial Times corpus of data. It's powered behind the scenes by Claude. It reaches into those archives and summarizes what it finds.
So it gives you an ability, as a Financial Times superfan, to interact with their knowledge base and their information. And, you know, again, asterisk: it's not 100%, so you do have to check your information. They even call that out, I think, in this Verge article that I'm showing on the video version. But it is beta for now. Only a few hundred of its FT Professional tier members have access, but later in the year, eventually, more people are going to get access. Yeah, I agree.
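The pattern behind a product like Ask FT, retrieve the most relevant archive pieces and then have a model summarize them, can be sketched very simply. This is a toy illustration, not FT's actual implementation: the two-article `archive`, the keyword-overlap ranking, and the `answer` function are all assumptions standing in for a real search index and a call to a model like Claude.

```python
# Toy sketch of retrieve-then-summarize over a news archive.
# A real system would use a proper search index or embeddings, and
# would hand the retrieved articles to an LLM for summarization.

archive = [
    {"title": "Markets rally on rate cut hopes",
     "body": "Stocks rose as traders bet on rate cuts."},
    {"title": "AI chatbots in newsrooms",
     "body": "Publishers are testing chatbots trained on their archives."},
]

def retrieve(query: str, docs, k: int = 1):
    """Rank archive documents by naive keyword overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: -len(q & set(d["body"].lower().split())))
    return scored[:k]

def answer(query: str) -> str:
    """Answer a question by citing the best-matching archived article."""
    hits = retrieve(query, archive)
    # Placeholder for the summarization step a real LLM would perform.
    return f"Based on '{hits[0]['title']}': {hits[0]['body']}"

print(answer("chatbots trained on archives"))
```

Grounding the answer in retrieved articles is also why the "asterisk, it's not 100%" caveat applies: the model can still misread or over-summarize what it retrieves, so the sources need checking.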
I think this fits the open-minded attitude that we saw from Schibsted and Sven. Yeah. And it's using the AI to create another interface for users against this wealth of information they have. It's another service to their subscribers, which is all smart. But if every newspaper had one of these, we're in the same boat we're in now, where everybody has their site and everybody has their paywall. And what we're going to want is agents, to say, you know, answer me this question about the Donald J. Trump public offering this week.
And I've got a weird question about it and go find the policy experts who can explain this or that. Well, I'm going to want my agent to go query multiple databases or multiple chatbots to see who has the best answers. And I'm not going to subscribe to 100 chatbots. So, and I understand, again, there's a business model necessity here as the FT wants to get paid for the FT stuff.
Stipulated, Your Honor. But we're going to have to figure out the reverse, which is that the FT can also create an API to its chatbot, so it can be called on by some higher-level general-interest chatbot or agent. And we've got to figure out that business model.
It's a little early to do that. But if we think everything's behind a paywall, if we go to AI destinations, we're not learning, I think, from the experience of the last few years. One story that has nothing to do with AI whatsoever, which I just saw this week: The Forward, the Jewish publication in New York (it used to be in Yiddish, obviously now in English; it used to be in print, now online), had a paywall up. They took it down, and their revenue went up 37% because people wanted it to be free. Time magazine took down its paywall because it wasn't working. I think the paywallization of everything has hit its limit when it comes to content. Now we're replicating that with AI, even worse, where the publishers are saying you can't quote us unless you pay millions of dollars to license it, as we discussed with Sal.
And I get the reflex, but it's not going to get us very far. And once again, I will ask: why can't we be more like Norway, where all the publishers collaborated? I'm giving a talk; unfortunately, I'm not able to go in person. I was invited to Copenhagen for the Norway AI and media conference, which I was dying to go to.
But now that I don't have an expense account, it's a lot harder. So I'm going to give them a talk, and, inspired by our show, the title of my talk is Why Can't We Be More Like Norway?
Like Norway, if I could be like Norway. Yeah, interesting stuff. Finally, I'm really happy that you put in the link to the Sora video examples. I had come across a small post on The Verge saying that sources are pointing to OpenAI actively pitching its Sora video-generation technology to Hollywood. Unsurprisingly, of course: meeting with studios, talent agencies, and media executives in LA this week, according to Bloomberg's sources, and some A-list directors and actors have already been granted access. Public release, of course, is expected this year; we'll see it when we see it. But you ended up posting a link to an OpenAI post that has a number of videos that act as little showcases. One of these is a short film by Shy Kids.
You aren't going to hear audio on this, because I'm not sharing it appropriately. But it's all these examples of people and companies and creative agencies taking the video capabilities of Sora and piecing together really cohesive creative works. And what I think is really fascinating about this, right now I'm showing this balloon-head person and all the challenges in life.
That's one of the examples, called Air Head by Shy Kids. But it's the really unique, almost psychedelic perspective, which is not uncommon for anything in the realm of AI. So often the examples that are shared have a kind of psychedelic twinge to them, or a dreamlike state, where the only other place in my life I could have visualized anything close to this would be in my dreams. And now these systems let you expand on those two- or three-second clips to the point where we can do some really unique storytelling. I'm sure someone could have done this with the tools they had before, but I think these tools are going to fuel a whole new art style and a whole new approach to storytelling, a complement to the many other ways of visual storytelling we already have. It's just a new style, a new approach, and I'm excited to see where it goes based on some of these videos you shared.

A few things excite me about this. One is that OpenAI is being collaborative with creators. They went to talented people, who in turn used this as a tool for creativity, which is what I think this stuff should be from the beginning. It shouldn't be writing news stories. It shouldn't be used with search. It should be used for creativity. And it's pretty amazing what they came out with. The Air Head story is about a person whose head is a balloon. What I don't know is how this was made. What were the prompts? Did they use any specified real video? Because it's very, very lifelike, except for having a balloon for a head.
The way a bicycle looks going by, or the way people walk, or the way people look on a subway: is this all made up or not? I don't know.
I'd love to see under the hood here. But what it says is: imagine these tools, first in the hands of a movie studio, okay, fine, and they'll make money from that, fine. But imagine these tools in the hands of your kids and Sal Khan's kids when they want to create something.
And it's going to be amazing. One of the jokes in Air Head is that he's walking through a store with cacti, and yeah, that's scary for a guy whose head is a balloon. Well, if a kid imagines that as a joke, now they can visualize it and share it in a high-quality way. That's pretty amazing.
That's been wonderful, I think. And yet people will make the same arguments: should they still learn to spell? Should they still learn to diagram sentences?
Should they still learn to draw? Okay. And as Sal said, you know, it helped him to do a little bit of assembly language. Okay. But what these tools can do with creativity, and how they can bring out the creativity of so many more people to say what they want to say...
The machine isn't talking; they are. So that's exciting. But I can't wait to hear more about how these were done. And I worry, of course, that at the beginning this is extremely expensive processing. It's not going to be in the hands of everybody at first. It's going to be, as tends to happen, in the hands of, you know, Steven Spielberg and people who have tons of money.
But I hope sooner than later, the again, open source world mimics this and brings this power to every school kid there is. Because it's amazing. It really is.
Yeah. What a powerful tool set to put imagery and motion to creativity. And there's also, I'm sure, a certain layer of unexpected outcome that can spawn other types of creativity when you're using these tools and you're as imaginative as a kid is. As long as it's a safe enough environment that a kid isn't presented with something truly awful on the other side, I think these tools could be incredible, and I would love to see my kids create some really unique stuff with this.
And you know what, they probably will in the coming years. Somebody I hope we're going to have on another show coming up is Lev Manovich from the City University of New York Graduate Center. He has one book out about AI aesthetics, and he's collaborating on another. And I think it's exciting, because it's happening right before his eyes; it's just changing week by week. So we'll see what comes next. I'll have to reach out to him and see if he wants to join Sal in the ranks of AI Inside guests.
That's the great thing about Sal coming on, too: we can say now, well, we had Sal Khan. Oh, okay, then I'll be on. You know, it's great. So, his generosity and his time, and being able to brag about having him on. Absolutely. Yeah, no, super grateful. Go buy his book.
Yes, indeed. Go buy his book. Again, the title is Brave New Words: How AI Will Revolutionize Education and Why That's a Good Thing. It's not out yet, of course, but you can go to Amazon, or wherever you like, and do a preorder, which all authors appreciate. Yes, yes, indeed. Get that preorder. And once again, thank you to Sal Khan for spending a little bit of time with us talking about AI and education.
So happy about that. What do you want to plug, Jeff? Where do you want to send people? Just my usual: GutenbergParenthesis.com, where there are discount codes for The Gutenberg Parenthesis and my other book, Magazine. Excellent.
GutenbergParenthesis.com. Jeff, what a pleasure. Thank you so much for being here today. Thank you, partner. Appreciate it.
Always. And then if you want to find kind of what we're up to here with this show, you can go to aiinside.show. That is the webpage for all things AI Inside. There you go.
You can also support us on Patreon, patreon.com/aiinsideshow. At a certain tier, the super AI level, you get things like hangs. Today, for the post-production of this episode, I figured, hey, you know what? I'm going to open a Discord stream, and anyone who wants to see how I do post-production of this show can join, and we can talk about it.
So that's on offer. I did an exclusive video last week about ideogram.ai, one of my favorite AI generation sites, and how you can use it to iterate ideas for logos and things like that. All of this is exclusive to patrons at patreon.com/aiinsideshow.
Thank you so much for watching and listening to this episode of AI Inside.
We really do appreciate all of you for your support. We'll see you next time on AI Inside. Bye, everybody.