Émile P. Torres: The TESCREAL FAQ
July 03, 2024 · 1:15:33

This week hosts Jason Howell and Jeff Jarvis welcome Dr. Émile P. Torres, philosopher and researcher who coined the term TESCREAL to describe the complicated ideologies driving the push toward AGI, as well as explore recent news in AI disclosure and consumer tech integration.

🔔 Please support our work on Patreon: http://www.patreon.com/aiinsideshow

INTERVIEW: Dr. Émile P. Torres

  • Explanation of the TESCREAL acronym and its component ideologies

  • Overview of artificial general intelligence (AGI) and its relation to the TESCREAL bundle

  • Discussion of the differences between AI "doomers" and "accelerationists"

  • Exploration of the connection between TESCREAL ideologies and eugenics

  • Analysis of recent news related to TESCREAL and AI development

NEWS:



[00:00:00] This is AI Inside Episode 24, recorded Wednesday, July 3rd, 2024. The TESCREAL FAQ with Dr. Émile P. Torres. This episode of AI Inside is made possible by our wonderful patrons at patreon.com slash

[00:00:16] AI Inside show. If you like what you hear, head on over and support us directly and thank you for making independent podcasting possible. What's happening everybody? Welcome to another episode of AI Inside. I'm one of your hosts Jason Howell joined as

[00:00:37] as always by my cohost Jeff Jarvis. Good to see you again, Jeff. Good to see you. It looks like you didn't gain too much weight from all that pasta in Italy. Let me tell you, there was a lot of pasta to be eaten in Italy.

[00:00:48] My youngest daughter, I'm pretty sure she probably matched the amount of pizza that she's eaten in her entire lifetime in two weeks, because I think there were many days where she had it for breakfast, lunch and dinner. It was like, I'll have a margherita pizza.

[00:01:02] Hey, when in Italy. I mean, that's just what you do. Exactly. Yeah, it's good to be back. I missed it. Yeah, thank you. I missed having the opportunity to talk with you about the news and everything. We had some great episodes. You had a great vacation.

[00:01:14] Admit it. That's fine. Now you're back and you're in the grind. It's OK. Both things can be true at the same time. I really enjoyed stepping out. But as we were talking about in the pre-show, when you do a podcast,

[00:01:26] and I've done this long enough to encounter this time and time again, you step away from the topics. Stepping back in is always a challenge. It's like you have to catch up.

[00:01:34] And so I don't know that I'm fully there, but well, you know, you you're the pro today and usually, but you're the one that's been connected to this stuff ongoing for the last three weeks. So you can tell me how it's going this time.

[00:01:49] Anyways, it's good to see you, Jeff. Before we get started, big thank you to those who support us directly on Patreon. You know who you are. And if you don't, well, that's probably because you're not a patron yet

[00:02:02] and you should: patreon.com/aiinsideshow, like Anthony Downs, one of our supporters. You can support us directly and make sure that we continue to do this show each and every week, like Anthony is. We really appreciate your support.

[00:02:15] OK, this week we are going to have a little bit of a flashback to the previous incarnation of AI Inside. When I worked at TWiT, TWiT.tv, Jeff and I were basically trialing this show for a number of months

[00:02:34] and kind of building what it would be. And we had the opportunity to talk with an impressive person, today's guest, in fact: Dr. Émile Torres, a philosopher and historian with a focus on existential risk and human extinction, of which there is plenty of fodder right now.

[00:02:55] Unfortunately, my goodness. Not sure why you did that to yourself, Émile. That's got to be hard sometimes. But last year you published the book Human Extinction: A History of the Science and Ethics of Annihilation.

[00:03:09] And I mean, I could go on. In collaboration with Dr. Timnit Gebru, basically, y'all coined the term TESCREAL, which comes up on the show so often. So, Émile, it's great to have you back, although it wasn't this show you were on before, it was the earlier one.

[00:03:27] But anyways, it's great to have you here, Émile. Thank you for joining us. Thanks for inviting me. Yeah, absolutely. Wonderful to be here. Yeah, it's really great to talk with you again. And when I reached out, I think like a month and a half ago,

[00:03:40] you had mentioned that you had an FAQ on the TESCREAL bundle coming soon. And so we decided, hey, that's a really great opportunity to bring you on. And then, of course, I went away to Italy, so it had to be delayed a little bit.

[00:03:55] Just last month, the TESCREAL bundle FAQ hit your Substack, that's xriskology.substack.com, hope I got that right. So we're going to talk a little bit about that today, because I know people hear us talk about TESCREAL

[00:04:13] and we talk about it in bits and pieces. But I feel like when it comes down to it, you're one of the people that's really driving awareness around this. And like I said, you coined the term TESCREAL.

[00:04:28] So you of all people are the right person to talk about this. So why don't we start with the basics and then we'll get into a little bit about what's in the FAQ and go from there. I know we covered this a little bit previously,

[00:04:42] but for those who didn't watch, what is TESCREAL? We'll just start with the simplest question. Or is that simple? I don't even know. Yeah, relatively speaking, it's simple. So the concept of the TESCREAL bundle, this was the result of a collaboration

[00:05:07] that I was working on with Dr. Timnit Gebru, a computer scientist who used to work at Google. And we were specifically trying to understand what are the various ideologies that have shaped and are driving the current race to build AGI, or artificial general intelligence.

[00:05:23] And writing the paper, we found it to be somewhat unwieldy, because we could not talk about the origins of the current race to build AGI without mentioning these seven ideologies. And so at some point I proposed the acronym TESCREAL

[00:05:45] in order to economize our speech and streamline our conversation, so that we weren't just listing these seven ideologies, which are represented linguistically by these quite large polysyllabic terms like transhumanism and singularitarianism and so on. And once we had the TESCREAL acronym,

[00:06:06] it occurred to us that there are compelling reasons for thinking of these ideologies, and the communities that have coalesced around each ideology in the acronym, as constituting a cohesive movement that really goes back to the late 1980s

[00:06:27] with the emergence of modern transhumanism, which coincided with extropianism, the first organized transhumanist movement, extending from the late 1980s, early 1990s all the way up to the present. Actually, in the TESCREAL FAQ I mention that there are two possible interpretations

[00:06:48] of the TESCREAL thesis. One is what you could call the weak thesis and the other is the strong thesis. The weak thesis simply states that you cannot provide a complete explanatory picture of the current race to build AGI without referencing these seven ideologies.

[00:07:04] Referencing these seven ideologies is absolutely crucial for making sense of what's going on. The strong thesis is that these ideologies really do form this bundle. And in our paper we defend both. In particular, we defend the strong thesis, that we really should be thinking about

[00:07:22] these ideologies as a single wriggling organism that extends from the late 1980s all the way up to the present. So all of that being said, the acronym itself stands for transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism and longtermism.

[00:07:44] And I'm happy to discuss what each of these ideologies are, but essentially, you could think of the backbone of the TESCREAL bundle as transhumanism, because all of the other ideologies really just grew out of the transhumanist, or modern transhumanist, movement.

[00:08:03] So the first three letters after T, the ESC in TESCREAL, those are just variants of transhumanism. Cosmism, singularitarianism, extropianism, those are variants of transhumanism. Rationalism, effective altruism, longtermism, those were introduced by people who were very much involved in the transhumanist movement.

[00:08:27] So transhumanism is sort of the through line, as it were, the common denominator of all of these ideologies. And again, the central claim is that these ideologies form a single cohesive movement that has been instrumental, absolutely integral, in launching, sustaining and accelerating the current race to build AGI.

[00:08:47] So let me ask two questions to start off with. I wanna get to some news and things that have happened since we last spoke, so I wanna get to those, but continuing the background here. If you could also explain, well, first, AGI, artificial general intelligence,

[00:09:03] the belief that the machine will be made to outdo our capabilities on a general basis. I for one think that's BS, it's not gonna happen; I'll be eager to hear where you think that is on a scale of possibility. Two is that this is all related

[00:09:18] to what people have probably heard more about, which is doomerism, the doomers, people who think that this AGI could get so out of control it could destroy us, and a few people have the wisdom, having made it, to control it, and we should give them power and money.

[00:09:32] But the third thing I'd like you to address in that is how, and I think this is so critical for people to understand, how all this relates to eugenics. When I try to scare people about what this stuff is

[00:09:43] and I say, listen, it's not just a bunch of crazy geeks saying they're all powerful, it's got this larger agenda of making the new Übermensch. And so if you don't mind extending your background for just another minute, you could hit on those things, Émile. Sure, yes.

[00:09:58] I mean, fantastic questions. In terms of possibility, my focus these days is mainly on what us philosophers would call the normative question of whether we ought to build AGI, as opposed to the question of whether or not it's even possible.

[00:10:20] So one of the claims that Gebru and I make in our article is that AGI is inherently unsafe, because it's an unscoped system. So if you want to apply established standards of risk assessment to a technology, you need to first have a clear definition

[00:10:43] of what that technology is and what it's supposed to do. So like the computers that we're using right now, like they're well-scoped. They have particular functions and so on. Once you understand those functions, you can apply these norms, these standards of risk analysis

[00:10:59] to determine how safe or unsafe the technology is. AGI is not like that. It's this God-like system that will be able to perform at or above human level, in terms of intellectual capabilities, in every cognitive domain of interest: mathematics, social persuasion, scientific innovation,

[00:11:24] artistic creativity and so on and so on. So it's just inherently unsafe. I don't know if it's possible. Maybe it is. Gary Marcus, for example, makes the argument that the large language models that are powering ChatGPT and various other chatbots,

[00:11:45] a lot of the image generation technologies that we have out there, that these LLMs are an off-ramp on the road to AGI. In fact, he tweeted, I believe just today, that he believes AGI will be built at some point,

[00:12:05] but not as a result of just scaling up these large language models. So my point of mentioning that is there's a variety of different views. My own opinion is like, I don't know, maybe AGI is possible. But again, I'm much more interested in the question

[00:12:21] of whether or not we should be trying to build AGI in the first place. And actually this ties into the second question, about doomerism. There are sort of two camps within the TESCREAL movement: there are the doomers and the accelerationists.

[00:12:42] And there's a spectrum that separates them, so people can fall anywhere along the spectrum in between those two extremes. I think the important thing to recognize is that all of the doomers, at least all of the most influential and prominent doomers within this TESCREAL tradition,

[00:13:07] they are not opposed to building AGI. They are opposed to building AGI in the near future. And so, you know, Eliezer Yudkowsky, maybe the most famous AI doomer out there, Jaan Tallinn would be another example, co-founder of the Future of Life Institute, and so on and so on.

[00:13:28] All of these individuals want AGI and they want AGI as soon as possible. They just believe that we're not sufficiently prepared to build systems that are at or above human level intelligence. So above human level intelligence that would be referred to as superintelligence,

[00:13:44] sometimes abbreviated ASI for artificial superintelligence. So you've got AGI, which is a broad class, and then you've got ASI, which is a subclass of AGI. Because AGI would include human level, ASI is just superhuman intelligence. So all of these individuals want AGI,

[00:14:04] One way to think about the situation, and what motivates the doomer narrative, the doomer movement within the TESCREAL community, is that you have AGI capabilities research. So the capabilities research is focused on trying to build an actual system that does outperform humans

[00:14:27] in virtually all cognitive domains of interest. And then you've got AGI safety, or AI safety. And this is a field that emerged directly out of the TESCREAL movement that is trying to solve a problem that's sometimes called the value alignment problem, or the control problem.

[00:14:48] So if we create a system that is superintelligent, how exactly do we get it to do what we intend it to do? So this is the alignment problem. If we ask it to eliminate cancer, all cancer in humans, then maybe, being extremely clever, but also very sort of single-mindedly focused

[00:15:13] on this one particular goal, the ASI goes out and just kills everyone. Because if there are no more humans, then there's no more human cancer. Okay, so then you add a constraint that says, well, don't kill everyone. And so it goes, okay,

[00:15:26] I'll just kill most people around the world, but I'll leave a million people in like a pen and I'll cover the rest of terrestrial earth with laboratories to try to cure cancer. And you go, okay, that's not what we want either. So you add another constraint.

[00:15:42] And the point of this exercise is that you can go through it iteratively, over and over again, keep adding constraints, and then once those constraints are added, the system will figure out other ways that there could be catastrophic unintended consequences.
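[Editor's note: a minimal toy sketch, in Python, of the patch-a-constraint, find-a-loophole loop described above. The plans, numbers and constraints are invented purely for illustration; this is not how any real alignment system is built.]

```python
# Toy illustration of the iterative constraint-patching exercise:
# an optimizer that minimizes the literal objective ("cancer cases")
# keeps finding a catastrophic plan that satisfies every patch so far.
plans = [
    {"name": "kill everyone", "cancer_cases": 0, "humans_killed": 8e9},
    {"name": "pen up a million people", "cancer_cases": 0, "humans_killed": 8e9 - 1e6},
    {"name": "freeze everyone indefinitely", "cancer_cases": 0, "humans_killed": 0},
]

constraints = []  # predicates every plan must satisfy

def best_plan():
    """Minimize cancer cases subject to the constraints added so far."""
    allowed = [p for p in plans if all(c(p) for c in constraints)]
    return min(allowed, key=lambda p: p["cancer_cases"])

print(best_plan()["name"])                              # "kill everyone"
constraints.append(lambda p: p["humans_killed"] < 8e9)  # patch 1
print(best_plan()["name"])                              # "pen up a million people"
constraints.append(lambda p: p["humans_killed"] == 0)   # patch 2
print(best_plan()["name"])                              # a new loophole, and so on
```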

[00:15:58] So AI safety is trying to figure out this problem of how it is that we code into the system a set of values that aligns the behavior of the ASI with what we want for the future of humanity. And so you've got AI safety and AI capabilities,

[00:16:19] or AGI safety, AGI capabilities. Right now, if time goes this way, you've got capabilities that are here. Let's say this is the AGI finish line, that's where we get AGI. You've got capabilities research here and you've got AI safety research here.

[00:16:40] And so the claim is that if the capabilities research crosses that finish line before AI safety research does, then you get by default a catastrophe. Meaning probably everyone on Earth dies because there will be this technology that is extremely, extremely powerful that has agentic properties.

[00:17:04] So it's an agent in its own right. That makes it qualitatively different than any other kind of technology. There's no way to control it at that point; it's thinking a million times, or more than a million times, faster than we are,

[00:17:15] which means that when it looks out at the world, it sees us as essentially frozen in time. So maybe we're struggling to go unplug the machine. But in the two seconds it takes for us to unplug the machine, that, in terms of subjective time for the ASI,

[00:17:31] that's like 200 years. So it's got all the time in the world to figure out how to prevent us from actually unplugging it. So all of this is to say that the doomers are worried about this configuration where capabilities research is leading the way, ahead of AI safety.

[00:17:47] While I still think it's BS, you're extremely fair to them, giving them what I think they would say is an appropriate explanation of what they see. So I appreciate that intellectually. It's spot on, it's an excellent visualization and really encapsulates exactly what we're up against.

[00:18:07] So I'm sorry I interrupted. No, that's fine. I mean, I am trying to be charitable. I mean, I think that a lot of the problems within AI safety, the problems themselves, are deeply problematic, are flawed in various ways. The frameworks are all wrong.

[00:18:30] So this would be a charitable interpretation of their view. And also the reason I mention it this way is just to say that as soon as the situation reverses and AI safety is ahead of AI capabilities, all of the doomers would say pedal to the metal,

[00:18:49] because AGI or superintelligence is absolutely key. It is integral for realizing this very bizarre techno-utopian vision that is motivating the TESCREAL movement, according to which we will use superintelligence to figure out ways to completely re-engineer humanity, almost certainly resulting in the extinction

[00:19:15] of our species. Maybe that would happen in the very near future. TESCREALists would be completely fine if there were no more Homo sapiens and we were replaced by some new posthuman species. So in a sense, the TESCREAL movement is very pro-extinctionist.

[00:19:28] Don't let their rhetoric fool you. And not only will we re-engineer humanity, but we go out, colonize space, we plunder the cosmos for its vast resources, what they call our cosmic endowment of negentropy, which stands for negative entropy. So just basically usable energy.

[00:19:45] So part of the plan then is to build literally planet-sized computers on which to run virtual reality worlds where you could have trillions and trillions of people. Some of these individuals, like Nick Bostrom, who has been involved in every single one

[00:20:03] of the TESCREAL ideologies, from the T to the L, he estimates that if we colonize space there could be on the order of 10 to the 58 digital people in the future. That's a lot of people; that's a one followed by 58 zeros.
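[Editor's note: a quick back-of-the-envelope check of that scale, in Python; the roughly 8 billion figure for the current world population is an assumption added here, not a number from the conversation.]

```python
# Bostrom's estimate: on the order of 10^58 future digital people.
future_digital_people = 10 ** 58

# Rough current world population, ~8 billion (assumed figure).
current_population = 8 * 10 ** 9

# The hypothetical future outnumbers the present by ~10^48 to one,
# which is the arithmetic behind the longtermist weighting described next.
print(future_digital_people / current_population)  # ~1.25e+48
```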

[00:20:20] That is a much, much, much, much larger number than the total number of people who exist today. So that makes the future way more important than the present, which in turn motivates this idea that we need to build safe superintelligence, whereby AI safety leads the way,

[00:20:43] in front of AI capabilities, as we cross the AGI finish line. But we need to do that as soon as possible, because once we have superintelligence, then we get to live forever. We get to become posthuman, we get to colonize space and create this literally astronomical amount

[00:20:57] of value in the far future. So to sort of summarize key points here: the difference between the accelerationists and the doomers is important but also very slight, because both of them share the exact same vision of the future. It's about going out and conquering the universe,

[00:21:20] becoming posthuman beings, uploading our minds and so on. That is the vision. All of them accept essentially the exact same futurological picture that we ought to be striving towards. They just disagree about the near-term risks of AGI. So Yudkowsky, famous doomer,

[00:21:41] arguably the most influential doomer in the world today, thinks that if we create AGI in the near future there's a 99% chance or greater that everybody on earth will die, and that would be very bad. His institute, the Machine Intelligence Research Institute, just released their 2024 communication strategy document,

[00:22:00] and if you look at footnote, I believe it's footnote number one they say right there do we think we should never build superintelligence? No, we need to build superintelligence as soon as possible. It is a key part of fulfilling our long-term potential

[00:22:13] over the coming millions, billions and trillions of years but we just need to make sure that AI safety is in front of AI capabilities and so the accelerationists just think actually the default outcome of building AGI in the near future is utopia, it's not doom

[00:22:32] and so we shouldn't worry so much. And in fact, even if there are risks, this is another key claim of a lot of accelerationists: even if there are risks associated with AGI, the best way to solve those risks is not through government regulation or something of that sort,

[00:22:49] it's through the free market. So if we just open source everything, we have a thousand or a million companies out there that are all building their own AGIs or their own superintelligences, then okay, maybe there's like 10 apocalyptically bad AGIs. Sure, but there's a million good AGIs

[00:23:09] that are going to neutralize them. Even smarter. This is Marc Andreessen's gospel. Exactly, yeah. I feel like I spoke a lot there, but I got at least two of your questions. Well, real quickly: eugenics, real quickly. Yes, so I would say that

[00:23:32] maybe the first thing to note is that eugenics is this idea that there are ways, traditionally ways to alter the reproductive patterns of a population, to improve humanity. This idea goes back to the very origins of the Western tradition. Ancient Roman law; the ancient Greek philosophers

[00:23:59] Plato and Aristotle both endorsed eugenic policies like infanticide, targeting specifically babies with congenital deformities, and this idea has popped up over and over again throughout Western history. In the decade after Charles Darwin published On the Origin of Species, 1859 was when he published that,

[00:24:26] I think it was exactly 10 years later, if I remember correctly, his half-cousin Francis Galton published a book called Hereditary Genius, which introduced the first quote-unquote scientific version of eugenics, whereby he was using Darwin's theory of evolution by natural selection

[00:24:46] to argue that we should be able to use something like natural selection, a kind of artificial selection, to encourage the best, the most fit individuals to reproduce more, while preventing individuals who are the least fit, quote-unquote, from reproducing, and then, you know, transgenerationally, over many generations,

[00:25:09] the human population will improve, our health will improve, we'll become smarter and so on. And so this was so-called scientific eugenics. Eugenics was hugely popular on both sides of the political spectrum, particularly in the early 20th century. Progressives loved it, fascists loved it,

[00:25:32] the Nazis of course made it quite famous and dreaded. I mean, they sterilized I think 400,000 people or so in Germany. So the reason I'm digressing here is to say that eugenics goes back to the beginning of the Western tradition and it's never gone away.

[00:25:56] There were two French scholars who talked about the eternal return of eugenics. And one of the claims that Gebru and I make is that the TESCREAL movement is just the most recent iteration of the eternal return of eugenics.

[00:26:10] So transhumanism itself is a version of eugenics. That is uncontroversial; you can go to the Stanford Encyclopedia of Philosophy, look it up, it's a kind of eugenics. And in fact it was developed in the 20th century by some of the most prominent eugenicists

[00:26:26] in the Western world; Julian Huxley would be a prime example. And the difference between transhumanism and eugenics, this gets at why I call transhumanism eugenics on steroids, because the traditional eugenicists of the 20th century, going back to the late 19th century,

[00:26:47] what they wanted to do was use science, reason, maybe technology to improve the human stock as much as possible, to create the most perfect version of our species that we could actually create, while simultaneously preventing our species from evolutionarily degenerating over time. Transhumanists like Julian Huxley said,

[00:27:11] why stop there? Why stop at perfecting humanity? Why not transcend humanity? Why not create, he didn't use the term posthuman, but he was employing the concept in arguing that we should just transcend humanity entirely and just become a new superior species.

[00:27:29] And so this idea then combined with the fact that there were breakthroughs in certain fields of technology in the second half of the 20th century, from like the 1970s onwards, like genetic engineering. And so for the first time people felt like, okay, if the eugenics-on-steroids,

[00:27:51] i.e. transhumanist, goal is to create this superior new species called posthumans, and now we have these technologies that enable us to actually modify our genes, and maybe there are AI systems that we could merge with, or nanotechnology that we could inject,

[00:28:13] nanobots we can inject into our bloodstream, that enhance our cognitive functioning, enable us to live forever and so on, then actually we don't need the old techniques of 20th century eugenics, the sterilization and stuff that was aimed at changing population-level patterns of reproduction.

[00:28:33] No, in a single lifetime, over a single generation, if you're talking about parents and children, we may be able to use these emerging technologies to radically modify ourselves. And so this is how modern transhumanism was born. It was this marriage between this eugenics-on-steroids

[00:28:51] idea of transcending humanity and the fact that these emerging technologies were being developed, and sort of the realization that, yeah, these technologies, Ray Kurzweil calls them GNR technologies, genetics, nanotech and robotics, which includes AI, these GNR technologies could enable us to just radically modify ourselves to become posthumans ourselves.

[00:29:12] And so one implication of this is that because all of the TESCREAL ideologies emerged out of the backbone that is modern transhumanism, all of these ideologies directly came out of a modern eugenics movement. All of them are infused and imbued with the legacies,

[00:29:37] the values, the ideas that are central to the eugenics movement. And so in a very, I think really important, sense, the entire AGI race, by virtue of being driven and inspired by the TESCREAL worldview, the entire AGI race is infected with-

[00:30:01] It doesn't mean that it's essentially that, but it's infected by that. Yeah, I like that. Yeah, yeah. But I would say, once you start to sort of look around the literature surrounding AGI, I mean, there's just evidence- It's in it all, yeah.

[00:30:16] It's in it all. I mean, one of the examples that got a lot of attention maybe a year ago or so was that some researchers published this article titled, I believe it was, Sparks of AGI. The Sparks paper is how it's known.

[00:30:35] And it claimed that in GPT-3.5, GPT-4, you can find evidence of general intelligence, that these systems are actually getting us closer to the ultimate goal, the Holy Grail, which is human-level intelligence. And the reason I mention this is that

[00:30:58] their definition of intelligence came directly from an individual who argued that certain racial groups are, for genetic reasons, intellectually inferior to other groups. So I mean, this is a- It's all tied together. It's all tied together. So you see traces and residues of the worst aspects

[00:31:20] of 20th century eugenics all over the TESCREAL movement and all over discussions about AGI. So I do think there's a kind of substantial connection there. That's really clear, thank you. It's not a mere association. So I guess the question that comes up for me then

[00:31:41] is, because we continue time and time again to hear about the march towards AGI. Yes, from the people that you point out in your FAQ, Nick Bostrom, Sam Altman, Elon Musk and the others who could very easily be looked at

[00:31:57] and fit into the TESCREAL kind of bucket. But you also hear about it from people who are just coming from the perspective of, well, this is just the natural evolution of the way we work with technologies, that technology is always about pushing forward

[00:32:12] and progress, in air quotes, and stuff. And isn't AGI, a wildly wise and almost humanistic technology that is as powerful as we are, if not more, isn't that the obvious end game of what we're trying to do as people who are building technologies?

[00:32:33] I mean, is it possible to do AGI responsibly in light of everything you're saying here? I mean, maybe, but I think we would need a completely different framework. So a couple of things come to mind when you mentioned this. One is that 20 years ago,

[00:32:53] really before, I mean, maybe even just 10 years ago, almost nobody was talking about AGI. There was a huge amount of research in the field of AI, but it was mostly focused on sort of narrow AI projects, narrow AI technologies. And so these are technologies that focus

[00:33:14] on a particular task. AGI is different. It is supposed to be this universal technology. You can use the same exact algorithm or algorithms in any domain and it will outperform humans, as opposed to earlier work, which was just focused on designing this particular algorithm

[00:33:34] which will do this specific thing, in which case oftentimes the established standards for risk assessment are applicable and are useful. And so what caused that shift from this sort of focus on narrow AI to artificial general intelligence? It was the TESCREAL individuals. Right, right.

[00:33:59] Who said we have this universal system, that is the key to techno utopia. And so I suspect maybe there are ways of just trying to build some kind of more general system that are responsible. But the fact is we live in this particular timeline

[00:34:20] and this particular timeline, the reason we are where we are right now, with billions of dollars being funneled into these companies that are explicitly trying, ultimately, to build superintelligence, is because these individuals shifted the discussion.

[00:34:38] They changed the landscape of the field of AI, because their TESCREAL techno-utopian vision appeals to billionaires who want to live forever, want to colonize space. So billionaires are like, okay, I'll give you a huge amount of money to start DeepMind or OpenAI, whatever.

[00:34:59] And so that's how we ended up here. And it's why I would say just the whole framework is deeply problematic. We need to jettison the whole framework I think and start again. And then maybe there is a path towards more general systems that would be responsible

[00:35:21] and not associated with this techno-utopian hype. Sorry for talking over you, sorry. No, no, no, that's why you're here. Jason, in response to your question, I'm curious what Émile will say about this. The way I've been looking at it,

[00:35:33] and I think I've said it on the show, is I see 'general' as not a bad word, because it's a general machine. And we're concentrating too much on the technology as if it had full agency, which is part of the question.

[00:35:45] But to me, it's a general machine. And that means that AI, the subset LLMs, like the printing press are general machines. That is to say that anybody could make them do anything. And thus to think that we can build in guardrails and safety into it is foolhardy.

[00:36:04] The issue isn't the technology, the issue is us as human beings. And there is no way to predict every possible bad thing that human beings, malign human beings or even well-intentioned human beings doing things accidentally, could ever do with this.

[00:36:20] And so that also leads to a question. So to me, the idea that we can build in this kind of safety I think is foolhardy and ignores the reality of what we have. We have an amoral machine. It's just amoral; it doesn't have morality.

[00:36:39] And so trying to hold the model makers responsible for everything that could ever happen with a model? There's legislation in California right now that says that model makers have to sign a statement, under penalty of criminal perjury, that their systems are safe.

[00:36:54] Versus the author or the user, whether that's Martin Luther with the printing press or whether that's some schmuck who uses an LLM to do something dastardly, that's where a lot of responsibility lies. And I think we're looking at it as if the technology matters,

[00:37:07] whereas it's the humanity that matters. Does that make any sense to you? Yeah, it does. I actually, you know, I mean, feel free to push back if you think I'm wrong about this or if I've misunderstood your position. But I sort of think that

[00:37:24] there's a tension between two of the things that you said. Maybe, again, maybe I'm wrong, but on the one hand, this idea that technology is just a kind of neutral thing. No, not neutral at all. It has prejudices. But you have to include in your calculation

[00:37:46] of what it does, how we choose to make it do something, how we design it or how we ask it to do things, or what our input is. The humanity has an impact that I think has been ignored in this discussion, which treats the technology as if it were a separate being.

[00:37:58] That's more what I'm trying to say. Okay, so I think we're in total agreement then. I see technology, in general, in the vast majority of cases, maybe all cases, as a non-neutral entity.

[00:38:16] So there is, in the philosophical vocabulary, the value neutrality thesis, this idea that technology is just this inert thing and it's entirely up to the user whether or not the... Exactly the same, right, right. Yeah, so guns don't kill people,

[00:38:36] people kill people, as the classic line goes. But actually a lot of philosophers of technology would say, no, technology is absolutely infused with the values of its creators, which I think gets at your point. These AI systems aren't just, like, all of the talk,

[00:38:54] the vast majority of talk, tends to focus on whether these AI technologies themselves are dangerous, and you sort of forget about the fact that there's a huge amount of human labor that goes into the creation of these technologies,

[00:39:10] that their design is the result of intentional decisions that humans have made. So this gets at why I've tweeted on many occasions, I'm much more worried about AI companies and the individuals who are leading these AI companies than I am about AI because you're totally right.

[00:39:31] Yes, yes, exactly. I couldn't agree more. You're quoted at length in my upcoming book. Do I have a copy here? Yes, I do. The Web We Weave. I gotta always be selling, always be pointing. Very nice. And at the end I say what scares me

[00:39:43] is not the technology but the people, and it's because of you showing me the crazy crap that these people believe in. Let me switch, if I can, to two news things around this. So curious: since we last talked, no one was covering TESCREAL.

[00:39:58] It was hard to get it into media. I think it's still extremely difficult to get into journalism. The New Yorker has done a big thing about the longtermists and they just completely ignore all the bad sides that you pointed out,

[00:40:10] and they can present somebody's view of the world, that's okay, but at least for God's sake do the reporting, and they're not doing it. They're not calling you, they're not quoting you enough, both you and Gebru. Two quick news things.

[00:40:23] One is the Guardian, not too long ago, let's see, it was June 16, so very recently, reported on Sam Bankman-Fried's ties to an organization with racist, that is to say eugenics, roots, and it mentions TESCREAL, yay, right?

[00:40:47] And the other thing is that the person you've already mentioned, who's kind of the philosopher king of the TESCREALists, Nick Bostrom, got ousted from Oxford and his center there got defunded. Do you think that there's a little bit of traction around the reporting you've done here

[00:41:07] and awareness of this, or are these random events? I think there is growing interest in the idea and a growing recognition that this concept is important for understanding what's going on. I mean, I've said before that, I think, one of the virtues of the TESCREAL acronym,

[00:41:33] in addition to the fact that it presents the ideologies in roughly the same order that they emerged historically, is this: one of the sort of philosophical theories about what explanations are supposed to do is that a good explanation unifies. So Darwin's theory of evolution by natural selection,

[00:41:54] there is just this vast amount of data out there, all these different phenomena, and it organizes them all under a single idea. So that is to say that I think the TESCREAL concept does that as well, because I've spoken to scholars

[00:42:12] on numerous occasions over the past year, year and a half, some of whom have gone to Silicon Valley and embedded themselves in these communities, and they've told me, like, yeah, I know transhumanism, it's omnipresent in Silicon Valley.

[00:42:30] It's the water that these people swim in. It's the air that they breathe. And also, like, longtermism. But how exactly does EA fit into that? Longtermism is kind of a version of EA, and then there's the rationalist community,

[00:42:43] which is kind of a sibling community to EA, and there's a huge sociological overlap. A lot of people who consider themselves EAs are also rationalists and so on. And then when they heard the TESCREAL term, they were like, okay, that is so useful.

[00:42:58] It just unifies, it provides this unifying framework. It does. So I think that's part of the appeal. I think people are starting to recognize like, oh actually it is useful in trying to make sense of where this AGI race came from. What's sustaining it? What is accelerating it?

[00:43:17] Half of the picture, at least at this point, is profit. But these AGI companies are unique in the sort of capitalist landscape that we're all in. Fossil fuel companies, they're motivated. How do you explain their behavior? Profit motive.

[00:43:39] And then you have pretty much a complete explanation. These AI companies, DeepMind, OpenAI, Anthropic, xAI and so on, how do you explain their behavior? Well, profit motive is a part of the- It's also, when capitalism becomes a religion with these overtones, it's hard to then beat it down.

[00:44:00] Yeah, absolutely. So capitalism has become intertwined with, I mean, there's a sense in which the TESCREAL vision of the future is just techno-capitalism on steroids. It's about going out, maximizing value, plundering the cosmos, colonizing, expanding, extracting and so on and so on.

[00:44:21] And so yeah, I think TESCREALism is the other key part of the picture, and I think people are starting to recognize that. Back in late 2022, this was when longtermism was just starting to get some attention from the media. And that was because William MacAskill,

[00:44:42] co-founder of the effective altruism movement, had just published his book, What We Owe the Future. And he was on The Daily Show, he got coverage in the New York Times, the New York Times Magazine and so on. So people were just sort of scratching their heads,

[00:44:53] like, what is this longtermist ideology? Where did this come from? What does it think we ought to be striving towards collectively as a species? And so I remember talking to a number of journalists, because I was in the longtermist community for 10 years.

[00:45:12] I mean, I was a true believer. I'm literally listed as, I believe it's the fifth or the sixth, I can't remember, fifth or sixth most prolific contributor to the existential risk academic literature, and existential risk is the key concept within the longtermist movement.

[00:45:30] So I'm very proud of that. I have a certain street cred. So I talked to these reporters and I tell them, here's what the vision is: it's about re-engineering humanity, colonizing space, creating planet-sized computers to run virtual reality worlds. And over and over again,

[00:45:44] the journalists would come back to me a month later and say that I spoke to my editor and there's no way we're gonna publish the interview or the article because my editor told me whoever I spoke to like clearly just is exaggerating

[00:45:58] or doesn't know what they're talking about or whatever. And then the end of the story is that invariably, there were probably like five cases like this, maybe more, six months later, I get those same journalists coming back to me

[00:46:12] saying, my editor told me, whoever you spoke to before, go talk to them again, because we need to do a story about this. Because once people had a bit of time to actually sort of listen to what Bankman-Fried

[00:46:26] and MacAskill and so on have to say, or actually read some of the papers that I had recommended, that are canonical, you know, really influential within the TESCREAL literature, they realize, oh, actually, I'm really not exaggerating. I'm not being hyperbolic at all. This really is the vision.

[00:46:44] So I think that's also partly why this TESCREAL concept seems to be gaining some traction. It now has a Wikipedia page, which is kind of nice. Which was taken down and is now back up. Yeah. Yeah. I actually had a friend who's a postdoc at Stanford,

[00:47:02] and there was an undergraduate, he told me this a couple months ago, an undergraduate he'd never met walked in, and they got to talking, and at some point the undergrad said, can I ask you just kind of a random question? Do you know what TESCREAL stands for?

[00:47:17] This is just some random kid at Stanford. So yeah, the idea seems to be making the rounds, hopefully because it's, again, a useful framework. I can't believe how much fun you make it, talking about doom. Right. But it is intellectually fascinating. Yeah, captivating. Captivating is a good word.

[00:47:38] And yeah, I am actually, like, deeply, deeply pessimistic about the future. But there's a line that I heard a couple years ago, which is like, yeah, I'm pessimistic, you know, pessimistic, hopeless, whatever, but that's no reason to be gloomy.

[00:47:58] That was a little bit long. I feel that though, yeah. Might as well have a chuckle. Thank you for the nightmares and the chuckles together. There we go. All living harmoniously while they still can. Together. Émile, really appreciate you taking time again.

[00:48:19] One of the questions that I had, which you totally talked about, was how far TESCREAL as an understanding has come since the last time we talked to you. Cause definitely the last time we talked to you, it was a terminology

[00:48:32] that was still pretty young. And I've certainly seen a lot more of it bouncing around the AI circles, that understanding of what it means and you're the perfect person to talk about it. So thank you for coming back with us today to talk about this.

[00:48:46] You know, I have plenty of questions that we didn't have time to get to, so we'll have to have you back again. xriskology.substack.com for people who want to go, yes, of course, read the FAQ on the TESCREAL bundle,

[00:49:01] but all of Émile's writings can be found on their Substack. Is there anything else you wanna point people to before we let you go, Émile? Maybe Twitter, I mean X, for some reason. It's Twitter, it's Twitter. Okay, Twitter. Xriskology on Twitter, I do still tweet a lot.

[00:49:25] So that's a good place to keep up. It's hard to stop. And yeah, people should reach out if they have any questions. I'm always eager to chat with people. So. Excellent. Thanks so much. We appreciate your time and your kindness.

[00:49:36] Thank you for doing what you do, Émile. And we'll talk with you soon. Thanks for inviting me. All right, take care. All right, fascinating. Knew it would be, we've been to this rodeo before. It's just great, Émile, I mean, every time. I get captivated

[00:49:50] is the right word. Cause the research is deep, the understanding of this is deep. And it's important. It's really important. I do want people to pay attention to what they're reporting on. Yeah, 100%. So all right, we do have some news. Now we're gonna chronicle doom

[00:50:06] and see how close we're getting. Yeah, we'll pair up what we just talked about with what's happening in the world of AI and news. We've got a small smattering of stories before we round out this episode. So that's coming up in a moment.

[00:50:21] All right, definitely with the news, the way it shaped up this week anyways, there was so much and like we said at the start of the show, like me being gone for three weeks and not really having my fingers on the pulse of all this stuff.

[00:50:33] Coming back in, it's like, man, what is the most important thing? So basically, you created a huge list of options and I just waited for the emotional reaction to hit me, to be like, oh, that sounds really interesting. So I don't know if these are necessarily

[00:50:46] the most important news stories in the world of AI, but they're certainly interesting and you grouped them nicely. So why don't we start with the world of disclosure when it comes to AI? We're starting to see more and more examples of how companies like Google and Meta

[00:51:05] are requiring disclosure in a number of different ways. One example here, of course, that you had added, is that YouTube is tweaking its policies. Last month, YouTube quietly started a new policy; they revealed it on their help documentation, this wasn't announced necessarily,

[00:51:25] to allow users to issue takedown requests for AI-generated content as well as synthetic content, the sort of stuff that simulates the user's face and voice. A user would need to fill out a YouTube privacy request directly referring to the content in question.

[00:51:43] And then YouTube will use its judgment to determine what it does from there. So it's really asking the question like, is the subject famous? Is this parody or satire? Was AI usage disclosed properly? Is it quote sensitive behavior that's being exhibited? All that stuff that is aimed at,

[00:52:07] I guess giving users of YouTube, at least a little bit of a say if their face or their voice is imitated in a way that they don't agree with. And there are others in this little trendlet here where Google is requiring disclosure

[00:52:23] of digitally altered content in election ads, particularly, right? I have an immediate reaction to the word election right now. I'm kind of surprised you came back from Europe, Jason. You had to deal with the dogs. I think that's probably why you're back. Yes, totally. Totally, totally.

[00:52:43] And then Meta has added a Made with AI label to stuff. And here's the funny thing you won't know, because it happened while you were gone. Micah Sifry, Micah Sargent, I know somebody named Micah Sifry and there's Micah Sargent,

[00:52:57] and Micah Sargent was the co-host substitute for Leo last Wednesday. And on, I think it was Instagram, he had a picture of him and his fiancé, mazel tov Micah, and Instagram just put a label on it. Said it was made with AI. Oh boy.

[00:53:18] The show title in the end was My AI Fiancé. But he couldn't get rid of it. He couldn't convince them otherwise; whatever signal was there, it said that. So they've changed it from Made with AI to AI Info to indicate it.

[00:53:33] So these are three efforts now to say, okay, okay, we're dealing with it. But you know, as you were just talking, Jason, I was thinking, I predict that within, pick it, two years, this will not matter. And I think I'm gonna put another panic label

[00:53:47] on this whole idea of, well, we've gotta know, we've got to label it. Again, it's not about the technology, it's about the human beings, whatever tool you use to try to fool people. It could be AI.

[00:53:59] It could be hiring a voice double. It could be just lying. There's all kinds of ways you can try to fool people and all kinds of things. You use Photoshop, obviously. You could use video platforms. You can do lots of things to try to fool people

[00:54:16] or lie to them or defraud them. The issue, I'm not excusing the technology. I think that Émile was right when they said that there are biases and leanings built into certain technologies. What are you gonna use Photoshop for, but to change a photo?

[00:54:40] But I think at some point we've got to shift the focus here to the human beings, the users who were doing things, their motives for doing things. The tools they use can make them more powerful, could make them more persuasive, could make them more effective at this,

[00:54:53] but it's still about what the human being has done. If an ad company tries to fool you that Jason Howell just endorsed Starbucks when he didn't, then the problem is Starbucks, not the technology. Yeah. I mean, I do think that if the technology allows for,

[00:55:11] like if it's relatively simple for a company like Meta or Google to recognize that something was created with a certain tool, and it can put that in there in some way, shape, or form, like if the ability is there to do that, personally I don't see any harm

[00:55:30] in including that information. But I think at the end of the day, it really comes down to how we respond to what we see, regardless of what that information is. Do we see a video of Jason, me, AKA me,

[00:55:49] I don't know why I referred to myself in the third person, but me endorsing Starbucks when I didn't, and say, well, that's absurd, I know that Jason does not like Starbucks? Which is not true. But you're welcome to buy him.

[00:56:01] Just give to Jason and you'll get him a lot of things. There we go, yeah. Starbucks if you're interested in a brand deal, let me know. But I mean, if the technology exists to be able to do this with, and to do it effectively,

[00:56:15] like personally I don't see any reason not to do that, but to do that and expect that it's gonna solve the problem, I think at the end of the day, that's what I have a hard time with. It's great to spell it out,

[00:56:27] because for some people that's going to be enough. Because we've seen time and time again, some people are not the best judges of what is real and what is not. And this kind of stuff very easily fools a lot of people, stuff I suppose

[00:56:42] that I would think would be ridiculous, but people fall for it anyway. So I guess it doesn't hurt for that to be there. I just don't know how effective that's really going to be. How much people care? Well, it's like the damn cookie notices

[00:56:55] we get on every single site we ever go to because of the Europeans and the EU, it becomes noise. Right, yeah, totally. And boy, while I was in Italy, I certainly saw my share of that. I thought it was bad over here

[00:57:09] and I don't know, it might just be the same over there as over here as far as the notices go. No, it's more over there, because some companies are smart enough not to ask you here in the US,

[00:57:19] but over in Europe, they all have to. And you were also new to the domain, so even sites you've gone to probably had to reaffirm that you said, okay, cookies, fine. Right. Even though I feel like sites I go to all the time

[00:57:32] still ask me to do it anyways. But the point being, it is total noise. It's like, I understand the intentions with this, but I don't think it's satisfying the thing that it was created for necessarily. It just, you start to tune it out. It's just white noise really.

[00:57:49] I agree. And then we had another block here, a few stories about how AI is coming to the devices that we use, or maybe the devices that we haven't used yet but might in the future, as is the case for me anyways with the Apple Vision Pro.

[00:58:05] Apple's WWDC last month unveiled Apple Intelligence, AI, get it? Its major AI strategy for its phones and other devices. And Mark Gurman from Bloomberg confirmed that Apple Intelligence is indeed coming to the Vision Pro, which is its mixed reality headset,

[00:58:23] though it will likely require a separate paid subscription to take advantage of those features. We don't actually know what any of those features are necessarily as it relates to the Vision Pro, but we know that the Vision Pro headset certainly,

[00:58:37] or we assume, I assume, that it has the hardware to power this sort of infusion and to do some really interesting things with it. And I don't know, this is a sort of convergence of AI and mixed reality.

[00:58:50] I am really curious to see how this plays out because I think both of them together could create, just from like a user who has used VR and AR headsets and I get nerdy about like, oh wow it feels real and oh this is really an interesting

[00:59:07] kind of like a perceived experience, instead of just seeing it on a screen, I'm living it and feeling it. And there's a part of me that's nerdily excited for how AI and mixed reality could potentially come together in a Reese's Peanut Butter Cup sort of way.

[00:59:25] Well, well put. I don't know that it's necessarily gonna taste great but I'm interested to see what happens. But it's been interesting for me watching the reception so far to Microsoft's AI inside. Yeah, the computer. Yes, that's right. Because it's the same kind of thing.

[00:59:47] We have phone, headset, computer, AI inside. And so far, on the computer, people are kind of shrugging, I think. I haven't touched it and I haven't tried it, and it hasn't fully delivered on what it could do.

[01:00:02] But I don't think that the AI inside has been the draw, because a lot of what we're doing, we can all do in the cloud. It's not like it's gonna be something so entirely new. I think your point about the headset is right.

[01:00:12] Are there maybe things that have to happen locally to get the full power of it? Okay. And on the phone there are things that will happen locally, including ad targeting, that may be more efficient, or translation or maps or things.

[01:00:24] When you were in Italy if you had used your phone for translation and AI were inside and it could have the little acoustic model, maybe that would make it better. Okay, but I think it's all incremental and minor. I don't think it's, oh my God, they've reinvented laptops,

[01:00:38] they've reinvented phones. No, and I think at this point it certainly isn't worth paying more for. Oh, I got it. Honey, I had to buy a new laptop. I had to get AI inside. Now even as I say that though, I still have a six.

[01:00:54] What's your latest version of your phone? Well, so I mean Pixel 8 Pro is like my phone right now. So I'm still on a six. So as the nine comes out, the presumption is there's gonna be a lot of AI inside on the nine. Of course there is.

[01:01:09] Use that as an excuse to get one. So all these words I just said. Yeah, yeah, well we can kind of get to the nut of it here and see if any of these new features that you might see on the Pixel 9

[01:01:20] if you get the Pixel 9 when it comes out are appealing to you because we have an exclusive from Android authority that kind of talks a little bit about this. Of course I am an Android nerd. So I enjoy when we get little juicy nuggets like this.

[01:01:35] So essentially the kind of the AI bundle different than the test real bundle that will be included on the Pixel 9 series is going to be called Google AI. So my initial thought was, oh no, they're renaming Gemini at a Google AI.

[01:01:52] That is not my understanding after reading through this. It's like, okay, cause I was like, you gotta be kidding. Like Google and its naming. It just doesn't know what it's doing. No, Gemini is part of Google AI. So is Circle to Search. We already know about those.

[01:02:04] And then a few other features, one called Add Me, which they say makes sure everyone's included in a group photo. So there's no specific detail on what this feature is, but it sounds like maybe when you're taking a group photo, you know, there's the Best Take technology

[01:02:20] where you could take the best faces from each person in a group photo if you had a series of them. Maybe this is like an extension of that, so the person who's actually taking the photo could be included somehow.

[01:02:31] I don't know how that would work; this is a total guess at what that's going to be through the world of AI. Then there's Studio, which they say, quote, you imagine it, Pixel creates it, which really just sounds like generative AI, like image generation; maybe it's emoji stickers.

[01:02:48] Apple actually just announced their Genmoji, which is part of Apple Intelligence, so creating emoji with generative AI wouldn't be too far off base to assume. And then finally, something called Pixel Screenshots, and this says, on this little kind of callout

[01:03:05] that Android Authority got a hold of: find the info you need from your screenshots. I think they even have more about this. When you take your screenshots, it essentially is able to pull out details and information from the screenshot

[01:03:20] and allow it to be searchable. So, you know, it could take a look at the information that you have embedded in there, summarize that information, answer questions about it: metadata, web links, app names, date taken, all that kind of stuff. Which has some kind of similarity

[01:03:37] to kind of what you were alluding to, at least in part: Microsoft's Recall for Windows 11, which does this on a system-wide level on PCs. You know, it recognizes all the things that you're using your computer for and all the different images,

[01:03:50] whatever; it makes it all searchable. This is specific to screenshots, and processed locally on device, is my understanding. But are any of these features roping you into the Pixel 9, or is this kind of like? No. Which I say with disappointment

[01:04:08] because I want to say, honey, I had to buy it. I did. Well, my immediate reaction is, your Pixel 6, it's time to upgrade. But I mean, Google does support their devices for longer, so if you don't need to, you don't need to.
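
To make the Pixel Screenshots idea concrete, here is a minimal sketch of the pattern it describes: OCR each screenshot locally, index the text, and search it later. Google hasn't said what the feature actually uses under the hood, so pytesseract and SQLite's FTS5 index are stand-ins here, and the folder name is hypothetical.

```python
# Minimal sketch of a "searchable screenshots" pipeline: OCR each image
# locally, store the text in a SQLite full-text index, query it later.
import os
import sqlite3

import pytesseract        # pip install pytesseract (plus the tesseract binary)
from PIL import Image     # pip install Pillow

DB = "screenshots.db"

def build_index(folder: str) -> None:
    """OCR every PNG in `folder` and add its text to a local FTS5 index."""
    con = sqlite3.connect(DB)
    con.execute("CREATE VIRTUAL TABLE IF NOT EXISTS shots USING fts5(path, text)")
    for name in os.listdir(folder):
        if name.lower().endswith(".png"):
            path = os.path.join(folder, name)
            text = pytesseract.image_to_string(Image.open(path))
            con.execute("INSERT INTO shots VALUES (?, ?)", (path, text))
    con.commit()
    con.close()

def search(query: str) -> list[str]:
    """Return paths of screenshots whose OCR'd text matches the query."""
    con = sqlite3.connect(DB)
    rows = con.execute("SELECT path FROM shots WHERE shots MATCH ?", (query,)).fetchall()
    con.close()
    return [r[0] for r in rows]

if __name__ == "__main__":
    build_index("Screenshots")            # hypothetical folder of screenshots
    print(search("confirmation number"))  # e.g. dig up a booking screenshot
```

Everything in the sketch stays on the machine, which matches the "processed locally on device" claim.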

[01:04:20] When did the Pixel 6 come out? Well that's got to be three years at this point, right? Oh, it's ages. Honey, it's ages. Sure she had her iPhone for 20 years. Yeah, by the time the 9 comes out it'll be almost three years.

[01:04:35] October 28th, 2021 was the 6 and the 6 Pro. So we'll see. But don't ask me, I get phones every couple of months to review and stuff, so I'm clearly not the everyday user at this point. I don't know what the right approach is there.

[01:04:53] And then finally, and I'm so happy you included some random things. I kind of put them in a different order because I thought I could link them up. The surrealistic, or not surrealistic, the surrealist moment of generative AI. And I thought I'd start with this video

[01:05:10] that you included here that is really bizarre. So to those of you who are listening, Jason will explain. Oh yeah, so you included a link to a post on X by autism up, autism sock, or whatever; anyways, I'll just tell you about it.

[01:05:28] There's a video, and trigger warning if you're watching: it is a bit kind of body horror-esque. They actually noted that in the tweet. So this might actually freak some people out, but it shows a generative AI's take on what gymnastics looks like, you know, human gymnastics,

[01:05:45] which is very timely with the Olympics coming up. And I mean, the results are just bizarre and grotesque. Bodies with four legs and no head, bodies that disappear into the mat, obviously doing things that are impossible to do. Spinning around; it's just limbs growing out

[01:06:08] of other places suddenly, at just the right time to catch them on the ropes and the horse and all that kind of stuff. I have some sympathy for the machine. Gymnastics is miraculous, and amazing what they do. Yeah. And for the machine trying to make sense of it,

[01:06:22] but the bigger point here: AGI ain't around the corner. I'm gonna say that just because, if it can't get gymnastics right, what can it do? Right. And I don't say that just to make a crack. I'm also saying that it has the disadvantage

[01:06:37] of having no reality. We get to test things against a reality. We know what reality is, at least until a certain age in life, or unless on certain chemicals. And so we can say that's real, that's not. The machine has no mechanism to do that

[01:06:54] except what it's fed and what it's told. And so when it's in the process of making things up, of course it's not gonna have a tie to reality. And it's not just about facts, it's also about how many fingers you have. That's hard for the machine.

[01:07:08] Yeah, well, and it really is that. What you saw there, if you're watching the video version, is like the visual representation of what an LLM really does. An LLM doesn't actually have a wide understanding of the words that it's putting down.

[01:07:26] As we've said many times on the show, it's predicting the next most likely word to come after the previous one that it just wrote. Or the next leg that's gonna go over the bar. Even if you already have three other legs.
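
A toy sketch of that next-word loop. Real models use transformer networks over subword tokens, not bigram counts like this, but the core point is the same: each step looks only at what came before, and nothing enforces global coherence.

```python
# Toy next-token predictor: each word is chosen using only counts of what
# followed the previous word in the "training" text. Any two adjacent words
# look plausible; nothing enforces global coherence -- which is roughly how
# you get a gymnast with four perfectly rendered legs.
import random
from collections import defaultdict

training_text = (
    "the gymnast swings over the bar and the gymnast lands on the mat "
    "and the crowd cheers as the gymnast swings over the mat"
)

# Bigram table: word -> list of words observed to follow it.
follows = defaultdict(list)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 12) -> str:
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))  # pick a likely-looking next word
    return " ".join(out)

print(generate("the"))
```

Run it a few times and you get strings that read fine word to word but go nowhere overall; the four-legged gymnast in miniature.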

[01:07:36] And so what you can end up seeing is just, like, seemingly total randomness; maybe the two frames made sense together, but on the wide scope it makes absolutely no sense. And the only reason that I put this here is another story that you put in there,

[01:07:51] "The deluge of bonkers AI art is literally surreal." It's a Washington Post article that dives into the potential that what we're seeing is like this new kind of era of surrealist art, which I totally agree with. Although I'm sure true art aficionados

[01:08:10] and art snobs would argue this until their dying day, because it was created by a computer and it wasn't a true artist with the raw emotion and desire and all the other things that motivate a real artist to create something beautiful or interesting or surreal.

[01:08:33] This is really, to a large degree, the result of humans interacting with computers to control the randomness of the computer's kind of programming, or whatever you wanna call it. And so we end up with these really strange images

[01:08:49] that, if you're on Facebook, you see a lot of. They actually call it "slop" in the article, which they say is the image equivalent of spam, and that's a term that I'd never heard before in this context. But anyways, long story short,

[01:09:02] I think that, coupled with just this video that we're seeing of the gymnast, where I got with it is: I'm a horror movie aficionado. And there is... This is horror. I mean, it's totally, like, there is a subset

[01:09:18] of horror films called body horror, it is a thing. And there are a lot of people that enjoy that particular type of art and I could see something here in this gymnast video being compelling or kind of pulling on the same strings

[01:09:34] that lights a person like that up. So again, who are we to say that this isn't art that we're looking at? It's strange art, and it was created very randomly by an AI, but it's still compelling in its own weird, grotesque way. So weird, yeah.

[01:09:50] So I put up one more thing and then we'll end, where I wanna recommend to our listeners and viewers: Lev Manovich is, in my world, a very well-known professor. He's at the City University of New York's Graduate Center. He's big in digital humanities, and he and a collaborator,

[01:10:09] can you scroll up so I can get the collaborator in there, Jason? Is it, say, Lev and Emanuele? Emanuele Arielli. Yes, Arielli. They have written a book online, a chapter at a time, on artificial aesthetics, and Lev has been posting the stuff, and I've read almost all of it,

[01:10:28] and it's compelling in a lot of places, but it argues AI doesn't have a built-in aesthetics, which is interesting, because if you look at some of the stuff in the Washington Post piece, it all has this kind of candy-coated look

[01:10:41] that comes from I think how it's trained and how people are using it and we push it in our use into a certain way. But does the technology itself come with an aesthetic? It's like asking whether it comes with a bias. Is it neutral?

[01:10:57] So it argues that. But the other part of it that's interesting is whether, when it comes to creativity, the technology is just a tool. It may be, but it does have, and this is part of what Emanuele was saying, it does have some limited amount of agency.

[01:11:14] It is a collaborator as well. You're trying to get it to do something, but you're not in 100% control of it, because there's something else inside this. And so this question of creativity and tools and aesthetics becomes really interesting around AI

[01:11:29] because the user-tool relationship changes, and it'll be interesting to see. The only time Sam Altman has had any contact with me, and I can't find it, he must have erased it: I looked at some of the early online AI-generated art

[01:11:52] and I made some crack about how it must have been trained by a bunch of softwares. And it really, I think, hurt; that's what Altman said to me. But it does matter who trains this stuff. It does matter how it's judged as it goes.

[01:12:12] And so I wonder whether we're gonna get to a point where other people can train it in other ways, with different aesthetics and different perspectives, and maybe also find a way to restrict it so that the gymnast can't have four legs. Yeah, yeah. I do think you're right.

[01:12:29] I think people are going to explore this deeper, and you will find the people who really get it on an extra level that isn't just the slop that you come across on Facebook. Which, I mean, as throwaway as it is, or feels like when I see it,

[01:12:47] to a certain degree is its own art form too. It is. It's not an art form I really dig, but it works for somebody. Or maybe it serves a purpose, I don't know. It's a really interesting time for art

[01:12:59] and for the question of what defines art, especially now. I don't do drugs because I'm too old and pathetic, but if you do, don't look at that video while you're on anything. I'm telling you. Yeah, no, that's probably a good point. Yeah. I mean, even if you don't,

[01:13:15] you might not want to look at that video. You'll freak out. Just fair warning for everyone. Big thanks to our guest, our return guest, Dr. Émile Torres; wonderful conversation about TESCREAL and a great explainer. Spread the word about TESCREAL. When people start talking about artificial

[01:13:31] general intelligence and all that stuff, say there's another thing you need to know, and send them to xriskology. Yeah, yeah, exactly. And so thank you, Dr. Émile Torres. Thank you, Jeff. GutenbergParenthesis.com is the place that you can go, and you should, and you should stay tuned there

[01:13:52] because not only can you get Magazine and The Gutenberg Parenthesis; soon enough, The Web We Weave. Show that again for video viewers. That's The Web We Weave, out in the fall. That's an eye-catching cover. It is. And Leo Laporte was nice enough

[01:14:07] to give me a wonderful blurb for the book, and I'm very grateful. Oh, that's great. That's awesome. I'm happy to hear that. Excellent. Can't wait. And then of course, for this show, we record live usually every Wednesday at 11 a.m. Pacific, 2 p.m. Eastern,

[01:14:21] on the TechSploder YouTube channel, youtube.com slash at techsploder. The last few weeks we haven't done live because it was pre-recorded; I was in Italy, but I'm back, so we're going to be doing it again. So just follow and subscribe on the TechSploder channel

[01:14:36] and you won't miss it. Like, rate, review, subscribe wherever you happen to listen or watch. Support us directly on Patreon if you want, and, you know, throw us a few bones every month so that we can continue to develop.

[01:14:49] We've got some ideas on different things we're working on to kind of expand on the show a little bit and see if we can offer a little bit more for you to get you through the door. That's patreon.com slash AI inside show

[01:15:02] and everything you really need to know about the show can be found wholesale at AI inside dot show. That is it for this week's episode of AI inside. Thank you so much for watching and listening. We'll see you next time. Bye everybody. Bye.