Ben Goertzel: What AGI Means Now, Open Source AI, and Compute Constraints
December 24, 2025 | 01:04:25

This episode is sponsored by Your360 AI. Get 10% off through January 2026 at Your360.ai with code: INSIDE.

Ben Goertzel, founder and CEO of SingularityNET, joins us to argue for an open path to AGI. We unpack what “open source” should include beyond code, how access to training data, pipelines, and compute shapes who can actually participate, and why “open weights” can still fall short. We also ask who needs convincing, what incentives could move the market, and how decentralized infrastructure fits into his vision for building advanced AI outside a handful of mega labs.

Note: Time codes subject to change depending on dynamic ad insertion by the distributor.

CHAPTERS:
0:00 - Start
2:13 - How the definition of AGI has shifted over time, and why the term has become distorted
7:58 - The “thousand narrow AIs” future, and who builds the next specialized system
12:40 - Revisiting Goertzel’s past AGI timeline predictions, what changed by late 2025, and is 2029 still feasible for AGI?
18:42 - Whether today’s AI investment is flowing to the right bets (LLMs vs world models and beyond)
25:43 - Why Goertzel argues LLMs cannot reach AGI, and what evidence could change his mind
31:44 - What it would take to build a truly open-source path to AGI
37:18 - Open weights vs real openness: Goertzel’s take on what’s missing from current releases
43:56 - Who needs to be convinced for open AI to succeed, and how that persuasion happens
50:20 - Advice for college students preparing for an AI-shaped career landscape

Learn more about your ad choices. Visit megaphone.fm/adchoices

00:00:00:04 - 00:00:11:25
Jason Howell
This episode of the AI Inside podcast is sponsored by Your360 AI. Get 10% off through the month of January 2026 when you use the code INSIDE.

00:00:11:27 - 00:00:49:12
Jason Howell
Ben Goertzel joins us to explain how he coined the term AGI two decades ago and what it meant then versus now, what he thinks happens in the period when AI can replace a lot of jobs but society hasn't adapted yet, and why he believes the leap from human-level AI to something radically superhuman could come surprisingly fast. That's coming up right now on AI Inside.

00:00:49:15 - 00:01:06:21
Jason Howell
Hello, everybody. Welcome to another episode of the AI Inside Podcast, the show where we take a look at the AI that is layered throughout the world of technology. I'm one of your hosts, Jason Howell. My co-host, Jeff Jarvis, is here; he's going to join us in a moment for the interview. Before we get there: this is the Christmas holiday

00:01:06:21 - 00:01:30:26
Jason Howell
for those of you who celebrate, and a lot of people are taking time off. As such, instead of a typical newsy Wednesday episode, Jeff and I decided that we wanted to share this amazing interview we recorded a little more than a week ago with a fascinating luminary in the AI industry that you've probably heard of. Excited to welcome today Ben Goertzel, who's an AI researcher and entrepreneur.

00:01:30:26 - 00:02:01:01
Jason Howell
He's best known as the founder and CEO of SingularityNET and a leading developer of the OpenCog framework for artificial general intelligence. Now, Ben has been a long-running voice in AGI research; he actually coined the term artificial general intelligence back in the early 2000s. He's also led applied AGI work in robotics as chief scientist at Hanson Robotics, where he led the software team behind Sophia the Robot.

00:02:01:02 - 00:02:06:28
Jason Howell
Tons to dig into. Let's dive right into our conversation with Ben Goertzel.

00:02:07:01 - 00:02:11:15
Jason Howell
Super excited to welcome to the show Ben Goertzel. Ben, it is great to have you here.

00:02:11:15 - 00:03:07:00
Ben Goertzel
Yeah. Great to be here.

00:03:07:02 - 00:03:43:22
Ben Goertzel
Mostly, yeah. I mean, I think so. The origin story for the meme of AGI had to do with the book I was editing, which was more of an academic volume of research papers dealing with building real thinking machines that can think as well as people and ultimately even better, right? And my original title for the book was going to be Real AI, but that was only sort of a working title.

00:03:43:22 - 00:04:12:25
Ben Goertzel
And I thought it was a bit disparaging to AI applications that were doing highly valuable things but not aiming at the sort of endgame of human-level general intelligence. And then there's the contrast Ray Kurzweil drew in his books. His book The Singularity Is Near ended up coming out in 2005, the same year as my initial book titled Artificial General Intelligence.

00:04:12:28 - 00:04:58:17
Ben Goertzel
Ray, in his prior books, had contrasted narrow AI with strong AI, and that seemed weird, because narrow and strong are not antonyms, right? And strong AI had a meaning about consciousness in the cognitive science field. So we were just thinking, how do we elegantly articulate the difference between narrow AIs that do highly particular things, like play chess or help with your taxes or predict the market, versus AI that tries to do a whole bunch of things across the board like people do, and continually pivot to doing all sorts of new and different things?

00:04:58:17 - 00:05:30:23
Ben Goertzel
Right. And so we came up with artificial general intelligence after some discussion. I think one of the coauthors of a book chapter, a guy named Pei Wang, a Chinese guy, suggested GAI, general artificial intelligence. And then Shane Legg, who went on later to cofound DeepMind (he had worked for me before that), and Peter Voss, who now runs AGI Inc., they both, as I remember, suggested, let's not do GAI, let's do AGI.

00:05:30:23 - 00:06:01:19
Ben Goertzel
Right. And so we were pretty much looking for the title of a book, and we put the book out. And in the book, among other things, were some chapters describing mathematical approaches to defining what is AGI, the crux of which is: it's a system that can constantly leap beyond its knowledge and pivot to deal with new things and generalize beyond its experience.

00:06:01:20 - 00:06:44:23
Ben Goertzel
Right. So one way to boil this down mathematically is: over all computable environments and all computable reward functions, how smart would you be? And that has to include all sorts of things that were not in your life history or in your experience, right? So I would say the conceptual meaning we had in mind when we titled that book Artificial General Intelligence, and the mathematical meaning we had in mind in defining what is AGI in the chapters of that book, those are still totally relevant.

00:06:44:25 - 00:07:12:18
Ben Goertzel
What's changed is that now various people, from a commercial perspective, have started introducing new meanings for the term AGI. Like, no, AGI does not mean something that can generalize beyond its history and programming as well as a human; AGI means my company hits a $1 trillion market cap, right? Or AGI means we can do 95% of human jobs or something.

00:07:12:18 - 00:07:43:00
Ben Goertzel
And all of those things are probably related to AGI, but they're not the same thing as being able to generalize beyond your experience with the agility of a human or even more. But I would say the core meaning of AGI as understood by the research community hasn't changed. As always happens, though, once the marketeers get ahold of something, it accumulates a bunch of other ad hoc meanings.

00:07:43:02 - 00:08:38:13
Ben Goertzel
And that's just like life extension. Like, everyone really knows what life extension means, but you still have, you know, Doctor Jim Bob's life extension cream that makes your skin look shiny or something.

00:08:38:16 - 00:09:14:13
Ben Goertzel
So, yeah, there's a very clear and obvious appeal, but I think it's worth being clear on this: it didn't become the goal, it was the original goal. I mean, the original goal of the AI field, when the name AI was coined in the late 1950s, and even before that, when Norbert Wiener's book Cybernetics came out in the 40s or whenever that was, the goal was to make digital or analog engineered versions of human-like general intelligence.

00:09:14:13 - 00:09:37:14
Ben Goertzel
Right? So that was the original goal of the AI field. The divergence to narrow AI happened later, more in the 70s, when people started to realize how hard it was going to be to fulfill the original goal of building human-like thinking machines. I think the other key point I want to make also goes back to the origins of the AI field.

00:09:37:21 - 00:10:06:27
Ben Goertzel
So I. J. Good, who, like me, was a mathematician, was writing in 1965, a year before I was born. He introduced the notion of the intelligence explosion, and what he wrote in '65 was that the first truly intelligent machine will be the last invention that humanity needs to make. So there's the notion here of self-improvement, self-modification, ongoing expansion.

00:10:06:27 - 00:10:42:28
Ben Goertzel
I mean, if we have a thousand different narrow AIs doing a thousand different specific things, well, who makes the thousand and first? Who makes the next one after that? You're still constrained by human inventiveness in creating each of these specialized machines. If you have an AGI that can build its own narrow AIs for different applications, then that is an AGI which is dealing with: how do you customize your own mind, or create new artificial minds, to deal with unforeseen use cases that may pop up in the future?

00:10:42:28 - 00:11:06:12
Ben Goertzel
And that's obviously of much greater value than any fixed collection of narrow AIs that you could create. So I think the original pioneers in the AI field, operating before I was born and when I was a little kid, were correct in fixating on the fundamental ability to generalize and take a leap into the future.

00:11:06:13 - 00:11:33:17
Ben Goertzel
Now, being able to generalize exactly as well as a human being is sort of arbitrary. That's like trying to make, you know, a motor vehicle that can run faster than a human or something, right? I mean, human intelligence is a sort of arbitrary benchmark in the grand scheme of things. It's important economically, right? And it's important to us for obvious reasons.

00:11:33:17 - 00:12:08:09
Ben Goertzel
But one of the things that I. J. Good saw correctly back in the 60s also was that once your AI can generalize with human-level capability, it's probably not too long before it can generalize with radically superhuman capability, right? And then in Kurzweil's The Singularity Is Near, in 2005, the same year I published the book on AGI, Kurzweil said we'll get to basically human-level AGI by 2029, and superhuman AGI, superintelligence, the Singularity, by 2045.

00:12:08:09 - 00:12:31:03
Ben Goertzel
And I think now the idea of a 16-year gap between human-level AGI and superintelligence seems probably too long to a lot of people, because, as we can see now, hold on, LLMs are already so good at math and science. Like, if you have a human-level AGI that adds some agency and creativity

00:12:31:03 - 00:13:07:28
Ben Goertzel
and groundedness on top of what LLMs can already do, I mean, why would it take that thing 16 years to really launch the intelligence explosion?

00:13:08:00 - 00:13:11:11
Ben Goertzel
That's true, that was Ray's prediction. I mean, Ray,

00:13:11:14 - 00:13:31:25
Ben Goertzel
Ray Kurzweil, in his 2005 book The Singularity Is Near, predicted we would get human-level AGI by 2029. And in 2005, that seemed pretty far out there to a lot of people, right? And now you have some major company CEOs saying, no, that's too long.

00:13:31:25 - 00:13:56:01
Ben Goertzel
It's going to be 2027, right? And you have others saying, no, we're at least ten years away from AGI. And now that brands you an AGI pessimist, if you say it may be ten years rather than ten months, right? So I think, in the big picture, if you were writing in 2005, pinpointing 2029 was pretty good.

00:13:56:04 - 00:14:32:27
Ben Goertzel
But of course, none of us can predict the exact rate of development of these frontier technologies, because they depend not just on the technology; they depend on the mood of the market and the venture community and government grant funding and so forth, right? Like, if you understand the science and tech and the conceptual aspects, it's pretty straightforward to see the sequence of what's possible, but how fast that sequence is progressed through depends on the human world as much as on the technology.

00:14:32:27 - 00:15:04:14
Ben Goertzel
Right. And I think that's certainly the case with baby AGI. I mean, we could have had a baby AGI by now, but it's not what the world has wanted to fund. We have Math Olympiad winning AI right now, right? And so we're progressing very fast in that regard. In terms of making artificial babies,

00:15:04:17 - 00:15:36:11
Ben Goertzel
it's not something the world has wanted to fund too much. So the rate of development of different aspects of general intelligence depends on money and memetics as much as anything else. I think, on the whole, progress toward AGI is quite dramatic, right? I mean, I don't think LLMs are AGI, and I don't think a bigger LLM, in itself, will become AGI.

00:15:36:13 - 00:15:53:02
Ben Goertzel
But I think if you look at what LLMs, when integrated with other kinds of software, can do, that functionality is getting closer and closer, and that certainly tells you something.

00:15:53:04 - 00:16:10:20
Jason Howell
Sorry, I'm going to button in here real quick to throw a thank-you to those of you who support us directly via Patreon. Thank you so much for supporting us, because you allow us to do this show: Patreon.com/aiinsideshow. You could support us, and I might read your name out on the show.

00:16:10:20 - 00:16:37:05
Jason Howell
We have a couple of new patrons, in fact: Tom Callahan, one of our newest patrons, and then another one, a familiar name to me because he's my cousin Vince. Welcome, Vince. Welcome to the AI Inside family. All right, so Patreon.com/aiinsideshow, and thank you to everyone who supports us. You literally drive this show forward. Also driving this show forward is our sponsor today, Your360 AI.

00:16:37:05 - 00:17:08:12
Jason Howell
Now, at this point, about two and a half, three weeks ago, I invited Jared Guralnick, who's CEO of Your360 AI, onto the AI Inside YouTube channel to show me his product, and I found it super compelling, enough so that I'm actually doing the review process myself. I have a couple of reviews in and I'm waiting on a few more, so I'm knee-deep in the process. Jeff is actually one of my feedback providers, so I'm going to hear from him, and also a few other friends: Ron Richards, Tom Merritt, names you might recognize.

00:17:08:14 - 00:17:34:04
Jason Howell
Anyways, I'm psyched for that. Here's what's really interesting, though: lack of career development is the number one reason people quit a job, and yet most of us have never gotten feedback that's good enough to actually act upon. That's certainly been true for me. Well, Your360 does something that wasn't possible, you know, a half year ago; with the rapid pace of development in AI, now it is totally possible.

00:17:34:09 - 00:17:56:29
Jason Howell
It uses voice AI to conduct real conversations with you and your colleagues, just 15 to 20 minutes each. I did one; it was super easy to do, drilling in on specifics, your wins and your growth areas and everything in between. Then it synthesizes all of that and walks you through the findings step by step in a coaching conversation.

00:17:57:01 - 00:18:18:03
Jason Howell
If you happen to be a manager, it surfaces all of those patterns that exist across your team that you might not normally see in a standard survey. A PM at Dropbox called this the most helpful, actionable career advice they've ever gotten. Big words there, and so far, I totally get it. Start the year with real clarity.

00:18:18:03 - 00:19:19:07
Jason Howell
At Your360.ai, use code INSIDE and you'll get 10% off through January 2026. That's Your360.ai, code INSIDE, for 10% off through January 2026, and we thank them for their support of the AI Inside podcast. Okay, we're going to take a few-minute break and then come back and get back to our interview with Ben Goertzel.

00:19:19:10 - 00:19:47:18
Ben Goertzel
Yeah. So first of all, I haven't analyzed this from an economic standpoint, but I would guess that LLMs and their rise have probably increased funding for competing AI paradigms as well, beyond what it was before LLMs rose. So, I mean, I think, yes, LLMs suck up more of the AI R&D pie than they should.

00:19:47:20 - 00:20:12:06
Ben Goertzel
On the other hand, I think their success has expanded the pie very considerably. So I don't think it's true that research dollars for neural-symbolic AI are less now than they were in 2021 or so, before ChatGPT came out. I think it's probably more; it may be less proportionally, but probably more in absolute dollar terms.

00:20:12:06 - 00:20:38:05
Ben Goertzel
But that's a feeling I have; I haven't done the analysis there. I can say, for my own work, which is not LLM-centric: on the one hand, we're badly underfunded compared to the major efforts; on the other hand, we're significantly better funded than we were in 2021, right? And I think that's probably true across the board.

00:20:38:08 - 00:21:09:01
Ben Goertzel
That said, yeah, I do think the business world has a tendency to take whatever sort of works and just pile on, right? Which is because of an ability to accept market risk more so than technology risk. And that's extremely true in Asia and most parts of the world. In the US, there's more willingness to take on technology risk within the business world, but still not much.

00:21:09:01 - 00:21:39:29
Ben Goertzel
I mean, still, almost all investors would rather pile on to copying something else that's out there and putting a different marketing spin on it than gamble on building something quite different. So I think LLMs got to a very impressive and useful level of functionality before other AI paradigms did, and by the natural dynamics of modern capitalism, they get piled onto like mad, to a degree that's probably not rational.

00:21:39:29 - 00:22:06:15
Ben Goertzel
And that's what you see across all areas of technology. Like, why did internal combustion engines get all the dollars for, you know, making cars move, right? Until governments intervened and said, hold on, wait, there's electric, there's hydrogen, and so forth. Internal combustion works, you've got an infrastructure around it, and no one wants to take a risk on something weird.

00:22:06:17 - 00:22:10:00
Ben Goertzel
So yeah, I think...

00:22:10:02 - 00:22:38:24
Ben Goertzel
There's more than enough money spent on people copying each other's LLMs. Like, you don't need that many LLMs with slight variations of the same functionality. I understand that's how capitalism works and that's how competition works, but there are loads of other very promising AI ideas and concepts that have been demonstrated at a smaller scale.

00:22:38:26 - 00:23:09:01
Ben Goertzel
Right. And there should be more money going into scaling those things up also. I mean, there's logical reasoning, and I think neural-symbolic AI, where you mix neural nets with logical reasoning, is about to get its interval in the sun, right? I saw a big blog post article from Amazon this morning about all the ways they're using neural-symbolic methods behind the scenes in AWS infrastructure.

00:23:09:01 - 00:23:36:19
Ben Goertzel
IBM, of course, has been working on it for a long time. But there are other things beyond logic and neural nets also. There's evolutionary programming: genetic algorithms, genetic programming, where you simulate the process of evolution by natural selection inside the computer. And, you know, John Koza was producing new circuit designs using genetic programming in the early aughts, maybe back to the late 90s.

00:23:36:26 - 00:24:11:06
Ben Goertzel
And for creativity, there's concept blending: there are algorithms for putting new ideas together inside the computer. There's a long list of interesting AI techniques that have been shown to do cool things at the small scale, but there's not money going into scaling those up and seeing what they can do at the large scale, even though we've seen what happened by taking deep neural networks trained by backpropagation and running those at the large scale: they did amazing things beyond what they did at the smaller scale.

00:24:11:08 - 00:24:36:15
Ben Goertzel
But that's a little too abstract for the business world, right? What typical investors want to do is take an exact thing that worked, copy it to a different brand, and market it. They don't want to say, okay, well, the methodology that worked was taking an AI technique that did something kind of interesting at the small scale and massively scaling it.

00:24:36:15 - 00:25:17:18
Ben Goertzel
So what other AI technologies do kind of interesting things at the small scale that we could massively scale, right? That's a very simple act of abstraction relative to string theory or the theory of automated reasoning, but it's apparently too big a leap of abstraction for the investment community to take, which creates a tremendous opportunity for anyone who's willing to do something besides copy other LLMs or build apps on top of other LLMs, because there's a huge scope of interesting possibilities out there in the AI world just waiting to be scaled.

00:25:17:18 - 00:26:19:13
Ben Goertzel
And what I've spent much of the last three years on is building up an open-source software infrastructure for scaling up a wider variety of AI techniques, because I can see that it's what needed to be done, and the venture-funded corporate mainstream just isn't doing it.

00:26:19:16 - 00:26:51:13
Ben Goertzel
No, not remotely, no. I mean, I would say the LLMs have shown a lot of capabilities that were more than I expected. On the other hand, not in a way that makes me think they're going to be AGI-capable. It's been hard throughout the whole history of AI to predict which things are AGI-hard, meaning you need human-level AGI to do them,

00:26:51:16 - 00:27:16:04
Ben Goertzel
and which things can be done by clever tricks. The field has never been good at that. Nobody thought in the 70s that chess or Go would be susceptible to simple tricks, even though checkers was, right? Checkers was at world-champion level back in the late 60s. I mean, people thought chess and Go were too hard for simple tricks.

00:27:16:04 - 00:27:45:08
Ben Goertzel
And then when they succumbed to them, people were like, oh, yes, all we did was alpha-beta pruning with faster hardware, right? Similarly, people didn't see until the 70s that computer algorithms could outperform human traders, because people thought, well, there's an intuition you have about the mind of the market. But no, it turned out relatively straightforward statistical algorithms can outperform almost all human traders.

00:27:45:08 - 00:28:08:14
Ben Goertzel
So none of us has been good at figuring out which of the things we do with a lot of deep thinking and creative general intelligence actually need general intelligence when you do them in a computer, right?

00:28:08:14 - 00:29:03:06
Ben Goertzel
So, I mean, I think if anything, I was surprised by the ability of LLMs to intuit human thinking in fuzzy domains. Like, you can ask an LLM, you know, what would a 25-year-old, not-that-religious Muslim from Tehran think about this love triangle situation in Cairo, or something, and it's so good at spelling out what ethical judgment this or that kind of human would make about this or that situation. That, I would have guessed, would require more embodied thinking or more nuance.

00:29:03:06 - 00:29:43:12
Ben Goertzel
But on the other hand, in hindsight, you can see, okay, the internet has loads and loads of relevant examples to extrapolate from. I'm a little less surprised by the ability of LLMs at programming and math, just because it's clear that, in the end, those are linguistic domains. You have a huge corpus of code and of math online, and unless you're making really radical innovations and leaping way beyond what's known, extrapolating from all the examples of code and math online should be possible.

00:29:43:15 - 00:30:12:04
Ben Goertzel
But to me, one of the big takeaways is: okay, yes, LLMs cannot leap very far beyond their training distribution. On the other hand, maybe 95% of human jobs don't require leaping very far beyond your training distribution. So you may not need AGI to do almost everything that's done in the economy. And that wasn't so obvious to me before

00:30:12:04 - 00:30:45:09
Ben Goertzel
seeing LLMs; it's fairly obvious now. Like, you can take vision-language-action models and put them behind robots, you can integrate LLMs with various specialized statistical and rule-based systems, and probably without even getting to AGI, you can do 90-plus percent of the jobs that people get paid for. But that really just means that most of our jobs don't need much general intelligence, because they just involve imitating stuff that was done before.

00:30:45:11 - 00:31:15:29
Ben Goertzel
Nevertheless, if you can't do that 1 or 2% or whatever of human activity that involves taking a big leap beyond your training data, then you don't have any advance, right? The fact that science, business, engineering, art, culture, the fact that these advance is because of that little bit of human activity that does involve taking creative leaps of generalization beyond history, and it's obvious LLMs will never do that.

00:31:15:29 - 00:33:10:18
Ben Goertzel
I mean, the way they're architected internally, it's like a huge, artfully weighted collection of special cases from all the data that they've seen. They're not abstracting very much, and the key to radical generalization is radical abstraction. That's not what LLMs are doing inside, and that's not a tweak; it requires a very different architecture.

00:33:10:20 - 00:33:42:15
Ben Goertzel
It's a little complicated. There are a lot of nuanced half-truths here, and again, the media tends to want to push to one extreme or the other. So, absolutely, scale is important, right? Like, the late Marvin Minsky, one of the pioneers of AI, said to me once that he thought if you had the right algorithm, you could get human-level AGI on an IBM 486 PC.

00:33:42:18 - 00:34:06:24
Ben Goertzel
And I think Marvin was just totally wrong about that, right? I discussed this with his son, Henry Minsky, last year, and with all respect for his dad, I don't think Henry agreed with that sentiment. Whether Marvin would agree now, were he still alive, I don't know either. So I think scale is important.

00:34:06:27 - 00:34:34:25
Ben Goertzel
And, I mean, you can have an algorithm that you run on a megabyte of RAM, and when you run that same algorithm on a petabyte of RAM, you have complex, self-organizing, emergent phenomena that cause the algorithm to do qualitatively different stuff, right? Few-shot in-context learning in large transformer neural nets, which we see behind our LLMs, is one example there, right?

00:34:34:25 - 00:35:04:22
Ben Goertzel
Like, you don't see a lot of that at GPT-2 scale, but you start to see it at GPT-3.5 scale, right? And at GPT-5 scale, Gemini scale, it's an even more impressive phenomenon. So, I mean, scale is important. On the other hand, that doesn't mean it's as important as Big Tech is now saying, right?

00:35:04:22 - 00:35:34:13
Ben Goertzel
So, yes, you're not going to run the first human-level AGI on a laptop. On the other hand, does that mean Google is overestimating the amount of hardware you need by a factor of ten? They probably are; they're probably overestimating it significantly, even if, at the level of philosophy, they're kind of right.

00:35:34:13 - 00:36:00:25
Ben Goertzel
And, I mean, Demis Hassabis and the DeepMind guys understand this nuance very thoroughly, right? To some extent they've been assimilated by Google, but I know those guys pretty well, and they have a quite deep understanding of these trade-offs. So, I mean, in terms of investment in data centers,

00:36:00:25 - 00:36:35:15
Ben Goertzel
I think it's pretty clear everyone is underinvesting in data centers, even while normies think they're overinvesting. But it's not because you're going to need, you know, $1 trillion of servers to run GPT-7 or something. It's that even though you can get AGI using fewer servers than the mainstream thinks, you're going to have a lot of AGIs doing a lot of different things.

00:36:35:15 - 00:36:56:21
Ben Goertzel
The AGIs are going to take over the whole economy, right? So yeah, we're going to have much, much more efficient use of compute resources, but nevertheless we're going to have much, much higher demand for compute resources, because these efficient systems, as they move toward AGI, are just going to be doing more and more useful things.

00:36:56:21 - 00:37:59:10
Ben Goertzel
And again, I think the Google founders understand this, Demis Hassabis and DeepMind understand this, Zuckerberg understands this. I think there's a more nuanced understanding among the leaders of Big Tech than what gets projected out in the media.

00:37:59:13 - 00:38:30:10
Ben Goertzel
To be fair, the terminology regarding open source has been a mess since before AI became so big, right? You have open source, FOSS, free and open-source software, with Richard Stallman and so on. And many open-source zealots believe that only the GPL is actually open, and Apache and the MIT license aren't actually open.

00:38:30:10 - 00:39:04:14
Ben Goertzel
So that's always been complicated, but I think it's gotten even more complicated now, because now we're at the point where, indeed, opening the software code behind your AI system doesn't necessarily do that much. Like, if you haven't opened your training data and your training-data cleaning pipeline, and then if it takes $1 billion of servers to run the open code once you've cobbled together the training data, in effect there's no way for anyone to leverage that open code before you release the next open code

00:39:04:14 - 00:39:40:25
Ben Goertzel
six months later, right? So when you have things that require so much data and so much compute, as well as source code, then just opening the source code doesn't necessarily fulfill the spirit of open source, even if it is literally open source. To achieve the sort of spiritual and philosophical and social goals of open source, what we need is more than just opening the code.

00:39:40:25 - 00:40:11:18
Ben Goertzel
You need to open the code, you need to explain what the data was, and ideally give an open way to cobble together that data. And you need to give tooling that lets people roll out their own fork of the code on infrastructure in as flexible a way as possible, right? And I mean, if something just needs a huge amount of compute to run, then okay, so it does.

00:40:11:18 - 00:40:46:14
Ben Goertzel
But you can build infrastructure that lets people flexibly run it on different compute infrastructures, right? It doesn't require the exact parallel distributed processing infrastructure you have on your server farms. And then there's the question of: are you trying to develop algorithms that could be run on a variety of compute infrastructures, or are you trying to develop algorithms that need, you know, monolithic, multi-billion-dollar server farms?

00:40:46:14 - 00:41:14:26
Ben Goertzel
Right. And I think there's a sort of economic and sociological point here, which is that big tech has every reason to promote AI methods that need massive amounts of data and massive monolithic server farms, because as long as the AI field is focused on methods that need this sort of big data and big compute, then no one but big tech can compete, right?

00:41:14:26 - 00:41:47:27
Ben Goertzel
So big tech has a big disincentive to develop AI methods that need less data or less compute, or that can run on more heterogeneous compute infrastructure. So what we need to be doing is developing AI methods that are designed to run on as little data as possible, on as widely available data as possible, and that are designed to run on as flexible a variety of compute infrastructures as possible.

00:41:47:27 - 00:42:26:29
Ben Goertzel
And you need that together with open source code, and then you have something that's actually free and open and fosters global participation in the development of AGI. And this is what we're developing in my own AI projects, of course: the OpenCog Hyperon project, which is building a neural-symbolic-evolutionary AI infrastructure for trying to build thinking machines, going beyond just deep neural nets, and then SingularityNET, where we've built the SingularityNET decentralized multi-agent platform.

00:42:26:29 - 00:43:11:11
Ben Goertzel
And we're now building the ASI Chain platform, which allows you to run Hyperon or other AI methods flexibly on, you know, heterogeneous decentralized processor networks, right? Now, you can't wave away compute requirements with good intentions entirely. So some aspects of AGI may just want a server farm where you have a bunch of multi-GPU, multi-CPU servers on the rack in one building, but there are clearly other aspects of AGI that could be run on, like, a mesh network of phones, or on a bunch of people's laptops working together in a decentralized way.

00:43:11:14 - 00:43:25:14
Ben Goertzel
And we should be seeing how much we can push into this radically decentralized infrastructure and which kernels really do need to be on the server farm. And how much can you minimize the size of that server farm?

00:43:25:16 - 00:43:46:26
Jason Howell
Hey everyone, pausing for just a quick moment to remind you, we do have a YouTube channel. Maybe you're watching this interview on YouTube on our channel; AI Inside Show is what you search for. If so, make sure you're subscribed, then you won't miss other interviews and our news episodes. And if not, if you're just listening to this in the podcast, hey, there's a whole video component that you're missing out on, and sometimes there's some really cool stuff that hits there.

00:43:46:26 - 00:44:33:02
Jason Howell
So go to YouTube, search for AI Inside Show, and subscribe so you don't miss them. All right, super quick break, then back to our interview with Dr. Goertzel.

00:44:33:05 - 00:45:14:26
Ben Goertzel
There are probably three things that we need, right? So one thing is you need governments not to pass adverse regulations that allow capture of the AI field by big tech. And ironically enough, Trump, whose politics I don't agree with across the board by any means... Trump has so far been pretty beneficial in this regard.

00:45:14:26 - 00:45:50:27
Ben Goertzel
Right, in avoiding the enactment of adverse regulations. And we saw attempts in this regard: like, California not that long ago was considering regulations against open source AI that would basically have made it so that only big tech could have developed large neural models, right? And Trump has been against that, which I think is quite beneficial. It's not clear what will happen in Europe.

00:45:50:27 - 00:46:18:16
Ben Goertzel
So you need government not to just hand over AI to a few big companies, and to allow open development to happen. That's one thing. You also need the development and research communities to jump on board, right? Like, you need folks interested in AI on the R&D side and on the software hacking side to not just buy into:

00:46:18:16 - 00:46:47:20
Ben Goertzel
"Okay, well, you know, this Python toolkit a big tech company gave me is really easy to use. It's fun. I can build a bunch of cool stuff with it. I will just join the ecosystem." Right? You need the R&D community to be willing to deal with tools that are a little more raw and less polished, but that, you know, let you do AI in different ways, in an open and decentralized way.

00:46:47:20 - 00:47:14:00
Ben Goertzel
And I mean, Linux gives a beautiful path there, right? The Linux development tools have always been a little bit more of a pain than big tech development tools. Certainly, like, Microsoft's Visual Studio debugger is beautiful, while the GDB debugger is amazingly capable but takes a little more of a learning curve to get into a mind meld with it.

00:47:14:00 - 00:47:50:03
Ben Goertzel
Right? But in spite of all these issues, there's a robust Linux development community, and this has led to a lot of amazing things, right? Linux dominates the internet. Linux dominates mobile globally. And this is on an open foundation. And Linux cuts across political divides. For better and worse, it's powering servers in North Korea and Iran and China, as well as Finland, where Linus Torvalds is from.

00:47:50:03 - 00:48:18:04
Ben Goertzel
Right. So we need that community. And then finally, the third thing is you do need money, right? I mean, you don't need as much money to do things open and decentralized as you need to do things closed and centralized, because you can get people to participate from a variety of different business models. You can leverage universities in a different way.

00:48:18:06 - 00:48:42:14
Ben Goertzel
You can work globally rather than just in high-cost tech hubs. But in the end, if we need only $200 or $300 million of hardware to run the first AGI, and it's okay if it's divided among 30 different locations and smaller server farms rather than just one, that's still a few hundred million dollars of hardware you need for your AGI, right?

00:48:42:14 - 00:49:16:27
Ben Goertzel
And so that comes down to my point that, yes, we can make it cheaper and less insanely bloated than what Big Tech is doing. On the other hand, it's still going to need a bunch of machines, right? There's a limit to that. So you do need funding, and my own AGI efforts since 2018 have been funded through the cryptocurrency world, through our AGI token.

00:49:16:27 - 00:49:57:17
Ben Goertzel
And now through the Fetch, now ASI, token. And that's been interesting. It's been distracting in some ways, but it's put my AI efforts in touch with a community that is in favor of openness and decentralization, as well as having some of the other weird characteristics you find in the crypto world. But I don't know if the crypto sphere is going to be enough, or going to be where all the funding of the open, decentralized AI revolution happens, right?

00:49:57:17 - 00:51:01:17
Ben Goertzel
And so there's some uncertainty there. And anyway, we need these three things to line up. We need the regulation not to stop us, we need the communities, and we need the funding for the hardware infrastructure, which is still considerable, even though not as big as Big Tech is trying to make you think it is.

00:51:01:20 - 00:51:29:13
Ben Goertzel
Okay. You know, my niece, who's 13, and her friends are chewing over this sort of question. They're like, yeah, is college going to be there when we get to college? Or, like, what should we be studying? What jobs will still exist when we get out of college? And on the personal level, the best advice I can give them is, like:

00:51:29:16 - 00:51:54:28
Ben Goertzel
First of all, you might as well do something that you enjoy and are passionate about, because then at least you have that fulfillment along the way. Secondly, you know, become good at learning new things and adapting and pivoting, right? Because clearly the world is going to require that of you. Like, don't get into a rut, right?

00:51:54:28 - 00:52:21:07
Ben Goertzel
No matter how appealing or lucrative it may seem in the moment, because any rut you get into is under serious risk of being disrupted and randomized before you know it. And I'm also a strong advocate of getting the foundations: learn mathematics if you like tech, right? Learn the laws of physics.

00:52:21:09 - 00:52:50:28
Ben Goertzel
You know, read the classics of literature and philosophy. The basics are still going to be the basics of nature, mathematics, and human experience, right? And so I think that's certainly enough to keep young people busy. But what you don't have now is: okay, if I major in business or major in computer science, my career is set. Like, that era...

00:52:51:00 - 00:52:54:02
Ben Goertzel
Yep, that era is gone. And it's also

00:52:54:09 - 00:52:54:17
Ben Goertzel
bullshit.

00:52:54:17 - 00:53:19:25
Ben Goertzel
Like, okay, "become a plumber or an electrician and you'll always make money"... because, I mean, the plumbing bots are not far off, right? So I think, on the whole, most young people I encounter are pretty open-minded and will be relieved that they don't have to get a job and earn a living.

00:53:19:25 - 00:53:33:02
Ben Goertzel
If the AGIs create abundance for all before they grow up, right? The people I see hand-wringing more about this are, like, middle-aged white men who are like, oh

00:53:33:02 - 00:53:33:13
Ben Goertzel
shit,

00:53:33:15 - 00:53:44:18
Ben Goertzel
what if I'm no longer in my power and money position? How will I feel important, right?

00:53:44:20 - 00:54:19:07
Ben Goertzel
The problem that really worries me is more like when the early-stage AGI has rolled out, but isn't yet a superintelligence that can give abundance for all. In that interim period between early-stage AGI and superintelligence, who gives you universal basic income in the Congo or Ethiopia or Afghanistan, right? I sort of think once the early-stage AGI takes most of the jobs, social welfare systems in the developed world will kick in.

00:54:19:07 - 00:54:44:29
Ben Goertzel
I mean, even if not exactly universal basic income, something in that spiritual direction. The thing is, no one wants to give an economic helping hand in the developing world. So what happens when the middle-class jobs in the developing world are taken? What happens to the factory on the outskirts of Addis Ababa, where Ethiopian guys assemble shoes for a Chinese company?

00:54:45:05 - 00:55:12:29
Ben Goertzel
What happens when that shuts down, because robots are cheaper at assembling shoes than young African guys? I mean, once you get superintelligence, which brings abundance on Earth, then, okay, drones will come in, airdrop molecular nano-assemblers, and everyone's fine, right? But in the interim, between "AI takes 90% of the jobs" and "super AI airdrops molecular nano-assemblers on everyone's farm,"

00:55:13:02 - 00:55:41:26
Ben Goertzel
what happens in that interim period in the developing world? Does everyone go back to subsistence farming? Then they have no way to buy antibiotics or power their cell phone, right? And then the ethical dilemma is: to minimize suffering between early-stage AGI and superintelligence, you would want superintelligence to come really fast. But to maximize the odds that the superintelligence comes out beneficial instead of otherwise,

00:55:41:28 - 00:56:05:28
Ben Goertzel
you don't want to hurry the path from early-stage AGI to superintelligence. And this is where things will get really, really interesting. We're now in the pre-phase, where we don't have early-stage AGI yet, but I do think Kurzweil's prediction of 2029 for human-level AGI is probably roughly right. Like, it could be 2027, it could be 2031.

00:56:05:28 - 00:56:30:28
Ben Goertzel
And I don't think that's off by decades. But I think neither he nor anyone else can predict what's going to happen in the period right after the early-stage AGI. And that's where we will need all the compassion and all the mental agility that we can muster.

00:56:31:01 - 00:56:55:29
Ben Goertzel
So it's not that far off. But still, during the few years before early-stage AGI, you know, there's a lot of interesting things to do. There's a lot of ways to make money if you're in business, by picking the right niches that are going to be revolutionized by proto-AGI technology in one year versus another year.

00:56:56:01 - 00:57:28:26
Ben Goertzel
And as a technologist, it's just so much fun, right? Because you've got logic-based systems, you've got evolutionary learning, you've got concept blending, you've got different sorts of databases and operating systems. And it's so much faster to go from concept to realization now, partly because of the development tools, right? And it's the time when we get to see all the ideas of the last half century in the AI and computer science world actually work and actually get deployed at scale to practical effect.

00:57:28:26 - 00:57:51:17
Ben Goertzel
Right? So, I mean, it's a very, very cool time to be working on these things.

00:57:51:20 - 00:57:56:01
Ben Goertzel
Yeah. Well, thanks for the good questions, and for the chance to share.

00:57:56:03 - 00:58:16:19
Jason Howell
Just a huge thank you again to our guest, Ben Goertzel. Fascinating conversation. I knew it would be; when we were booking this I was like, oh, this is going to be really, really interesting. And I mean, you can't get much better than the guy who, you know, created the terminology of AGI, something that everybody's throwing around right now and that carries so much weight in AI.

00:58:16:23 - 00:58:36:06
Jason Howell
So thank you to Ben and team for allowing us to do this. Thanks to Jeff Jarvis, of course: JeffJarvis.com. Go there, find all those books, and make sure to buy them. He will really appreciate it if you do that. Everything you need to know about this particular show, AI Inside, can be found on our site.

00:58:36:06 - 00:59:00:20
Jason Howell
That's AIInside.show. And finally, if you really, really love this show and you want to support us on a deeper level (we've got a lot of you, but we could use more): Patreon.com/AIInsideShow. You get ad-free episodes, you get access to a Discord community, you get occasional giveaways. Now, I did mention last week that on this week's episode I had a few items that I was going to give away.

00:59:00:20 - 00:59:17:21
Jason Howell
So I've got, like, this Google hat for Search. I've got an AI Mode t-shirt, which is actually pretty cool. Once I put this up, I was like, yeah, do I really want to give that away? But it's too late, I can't take it back. I've got a Google DeepMind t-shirt as well.

00:59:17:21 - 00:59:41:11
Jason Howell
So, three pieces of swag. And I did go through all of our patrons, both paid and free; it was entirely open for the three winners, and I selected three. Now I'm going to tell you right off the top: the first winner happens to be, and I swear this is totally random, my nephew Christian Blazer. So, Christian, buddy, it's been a little while.

00:59:41:11 - 00:59:58:01
Jason Howell
I haven't seen you, but I'm going to go ahead and send you this hat, so I hope you like wearing Google hats, because there you go. The AI Mode t-shirt is going to Robert Frisky. I think that's how you pronounce your last name, Robert. You're going to get a t-shirt, and the DeepMind t-shirt is going to Tom, roughly.

00:59:58:04 - 01:00:16:06
Jason Howell
Thank you all so much for your support throughout this year. I say it a lot and I mean it. I'm going to say it again, we couldn't do this show without your direct support. Like, it is one of many ways that we continue to foster and build the health of this show and make it, you know, make it run, essentially.

01:00:16:06 - 01:00:38:10
Jason Howell
So thank you for doing that. You know, mailing out a few brand new pieces of Google swag is probably the least that I could do, but I'm happy to give back in some way, shape or form. To those of you who did win, I'm going to DM you or text message you if you're Christian. With all the information so that I can send this out to you and, you know, it's going to happen after Christmas.

01:00:38:10 - 01:00:52:29
Jason Howell
But anyway, speaking of swag, wow, this is a lot at the end of the show. But hey, we don't go this long at the end of the show very often. You can get an AI Inside t-shirt if you like. All you've got to do is become an executive producer, and we have ten of them right now.

01:00:52:29 - 01:01:24:04
Jason Howell
Dr. Du, Jeffrey, Mary Cheyney, Radio Asheville 103.7, Dante Saint James, Barnard, Eric Jason Cipher, Jason Brady, Anthony Downs, Mark Starker, and Carsten Simonski. Thank you, thank you, thank you for supporting this show and enabling us to do this. And thanks to all of you for watching and listening. We will see you next week, a day earlier actually, on Tuesday, since it is, you know, right up on New Year's Eve. We're going to do a normal news episode, Jeff and I, next Tuesday, on another episode of the AI Inside podcast.

01:01:24:07 - 01:01:25:07
Jason Howell
Oh, see you then.