Jason Howell and Jeff Jarvis call OpenAI's 1-800-CHATGPT service and discuss Google's Imagen 3 and Veo 2 updates, the anthropomorphization of Claude, MindStudio's workflow demo, and the implications of AI copyright exemptions.
🔔 Support the show on Patreon!
NEWS
0:09:38 - ChatGPT’s AI search engine is rolling out to everyone
0:13:13 - OpenAI launches ChatGPT Projects, letting you organize files, chats in groups
0:19:33 - State-of-the-art video and image generation with Veo 2 and Imagen 3
0:23:32 - Sutskever says we've reached peak data and the way AI is built is about to change
0:26:40 - NotebookLM gets a new look, audio interactivity and a premium version
0:31:35 - UK proposes letting tech firms use copyrighted work to train AI
0:34:00 - YouTube creators can now opt into third-party AI training
0:43:43 - How Claude Became Tech Insiders’ Chatbot of Choice
COMMUNITY
0:57:08 - DJ Rout points out how US-centric "guardrails" are
1:01:02 - Len shares how you can get Perplexity Pro free for one year. Jeff shares another way.
Learn more about your ad choices. Visit megaphone.fm/adchoices
This is AI Inside episode 47 recorded Wednesday, December 18, 2024. Feeling lonely? ChatGPT's just a phone call away. This episode of AI Inside is made possible by our wonderful patrons at patreon.com/aiinsideshow. If you like what you hear, head on over and support us directly, and thank you for making independent podcasting possible.
What is going on, everybody? Welcome to the final live episode of AI Inside for 2024, which is not to say that we won't have episodes next week or the week after. Those fall on Christmas Day and New Year's Day. There will be episodes. They're prerecorded.
We've got those in the can, so you can expect to get them. You just can't watch live. But if you are here live, welcome. I'm Jason Howell, one of your hosts of AI Inside, joined as always by my friend Jeff Jarvis. How are you doing, Jeff?
Hey, boss. How are you? Awesome. Happy to see you. I really can't believe that it's the end of the year and that we're already at this point.
And we're gonna have not one, but as Jason said, 2 year-end wrap-ups, because we were gonna make 1, and it had so much we made it into 2. So that covers both Christmas and New Year's. I mean, it was your excellent idea. I have to give you full credit for that because it didn't occur to me. But out of it, essentially, we skipped Thanksgiving, and instead we had so much stuff when we were looking back at the year in the world of AI.
We had so much stuff there that we're like, this is kinda 2 episodes. So we've got kinda 2 back-to-backs along the same topics and, you know, different aspects of AI. So, yeah, thank you for that idea. It's really great. Say this again.
2 weeks off. Yeah. Yeah. We get 2 weeks off. You.
Dope. Because you're gonna watch. Yes. Indeed. Well, maybe not on Christmas Day and New Year's Day.
If you do, man, you are dedicated and we love you. Speaking of dedicated... family. Yeah. Yeah. You probably should spend... you know, first, at the top of the pyramid is your family, and then it's AI Inside.
They're nice. Yeah. Speaking of support, thank you so much for your support. Not there because that is wrong. I have the wrong thing that I'm showing here.
Okay. If you're watching the video version, sometimes I'm a human, and I make mistakes. Patreon.com/aiinsideshow is where you can go to, you know, share your support of the show. We've been doing this for almost a year now. I really can't believe that.
And there is no way on earth that we could even do this show without you. You have been driving the, the ability for us to do this for a full year, and we wanna continue. We wanna grow. That's our plan for 2025, and, we can't do it without you. So thank you.
Patreon.com/aiinsideshow. Friend of the show, Dan Patterson, is a patron. We're calling you out, Dan. You were on a what episode was Dan on? He was he was early.
He was, like, episode 7, I wanna say. 6. Blackbird AI. So, Dan, thank you for supporting and for being a guest way back when. Good to have you here.
And then if you are watching live, like I said, this is our last live of the year. But, just go and subscribe. That way you won't miss the 2 episodes that we have coming up. aiinside.show, then you'll be subscribed, and you'll get those. And then, of course, in the new year, we'll start back up with our lives, hitting the YouTube channel, which is youtube.com/@jasonhowell, which is my name.
I'm still working on getting a channel specific to AI Inside. But, hey, got a lot on my plate, y'all. Real quick, before we get into the news, I wanted to float this by you, Jeff, just to kinda get your take on this. Last night on Android Faithful, we were talking... half the news, you know, had to do with AI.
We spoke with Shahram Izadi from Google's XR division, who leads the Android XR effort. We had a really wonderful conversation with him about that and, of course, all the AI that's interlinked in that, which we talked about last week. And one topic that came up outside of that interview was AI fatigue and the fact that some people, let's say, let's call them the normies out there, right? Not the people who are so dedicated and driven by AI that they know everything.
Probably a lot of the people that watch and listen to this fall in that category. But there are a lot of people out there that are just sick of hearing about artificial intelligence. If it touts AI, they're immediately disinterested. And I think the thought that I had during the show, and this is what I'm curious to get your take on, is that I envision that there will be a time, and I don't know if it's sooner or later... Right now, artificial intelligence is a thing that needs to be, like, pounded into our skulls every single time a new thing comes out. Oh, by the way: AI, AI, AI.
At some point, that's gonna fade out, and we're gonna get to a point where it's just: we have a new feature, and this feature is really cool. And not: we have a new feature, and it's based on AI, and that's what makes it cool. What's your take there? My first thought is that the fatigue I am suffering right now is drone fatigue.
I am so sick of all the talk about drones drones drones on TV. And I'm in New Jersey where they actually are above us. So that's point 1. Yeah. Point 2, I think you're actually right.
There are a lot of stories about adoption curves. Right? Yes. There's certainly one around cool technologies. And the risk is: could AI become the next NFT or blockchain?
Could it be the thing that people are so sick of? And it gets so many cooties from the AI boys and their stupidities. I don't think so, because, unlike NFTs, it's impressive and useful. But I think you're right, Jason.
I think that it will fade into the background. And I'll go a step further here. I should pull my books over. In The Gutenberg Parenthesis, I talk about... do I have a copy here? No, I don't. Mine is upstairs.
I talk about, obviously, the adoption of technology in terms of print. Mhmm. And we've talked about this in terms of the responsibility matrix: is the technologist responsible, or the intermediary, or the user?
And these are curves that I think things will go along. What's different about AI is that, in a sense, the technologists replace themselves with it. The technologists always fade into the background. Print at first was a technology, and you needed a technologist to do it. Then you just hired somebody who prints.
No big deal. Radio was a wow technology; the telegraph, same thing. And then it became everyday and no big deal, and the technologist fades into the background. Now what becomes interesting is what we do with the technology.
And the way I put it: in print, it took 150 years after movable type with Gutenberg before we saw tremendous innovation with print. The invention of the modern novel with Cervantes, the essay with Montaigne, the market for printed plays with Shakespeare, and the newspaper. But we're not gonna wait that long now. Still, I think there's time. But what's different here again is that you don't need to be a technologist to use this technology. Right.
Big time. And you do need to be one, obviously, to create it, to create a model, certainly. But the model is made for you to then tell it what you wanna do with no technology expertise whatsoever. So the technologists are obsoleting themselves quicker, I think, than in other technological introductions. So I think you're right.
We'll get bored of AI PR. We'll definitely get bored of AI boys and their overblown promises. But if the tools remain useful, we'll use them and we'll just assume it's AI. Bam. Yeah.
That's exactly where I'm at too. Like, you mentioned NFTs, and I think, as a comparison, there are a lot of people in the "I'm sick of AI" camp who might make that comparison thinking the two actually do compare 1 to 1. But I think the real big difference is, when I look at NFTs, a large reason why those went nowhere is because, a, they were incredibly confusing right out of the gate. Like, there was never any clarity about them, necessarily. You almost had to already buy into certain ways of thinking in order to really understand NFTs.
And, 2: Yep. That usability, that usefulness... NFTs never did anything for me, never really had the benefit or the potential to do anything for me. AI, whether people realize it or not, they're using it in certain ways, and it's improving the products that they're already using whether they realize that's AI or not. The usefulness is there.
They might not even, you know, realize that they're using it. And so I do think that they're very different. Yeah. And I think that, you know, I've made this point about Google over time. I don't blame them for this, but I'll put it this way.
They didn't get enough credit for the AI advancements that they had. Going back... primarily, I think of translation. Translation was a huge leap when they saw how to do it differently, and it scaled in a way that they hadn't predicted. And it was a new model for computing. And all that Google has done with AI.
And then along comes ChatGPT and it's forgotten. Which is a shame in a way, I think, because now they're trying to catch up, and that makes you do stupid things when you're catching up. Yeah. Yeah.
Well, that's a really great way to lead into our first story, which, for lack of a better top story... I mean, we might as well talk about OpenAI and its continuation of its Shipmas thing. You know, I gotta give OpenAI credit: pretty smart to do this release schedule around Shipmas, because, holy cow, it continues. It's a daily march of what they announce next, and the amount of big features and products that they've actually released this month... yeah, it's been pretty interesting to follow. A few things since last week's show. There is, of course, their search engine, which... well, their search product, let's say, maybe not a search engine, but a search product, that we talked about, like, a month or 2 ago, I think.
And now it's available for free for users. You just gotta be logged in in order to use it. And it has some options in how it works. They did a video for this as well, as they have every day. And it's seen as a clear slap at Google, along with Perplexity, which is doing search.
They're all doing search. At the same time, Google's under fire for antitrust around search. Yeah. They have a raft of new competitors in search. I'm still a little dubious, until I get enough experience with these.
Because, again, we know that generative AI has been really bad with what are called hallucinations, with a lack of facts, with the randomness it enters in. So it's gonna take some time for me to play with this and see how well it works. Mhmm. And is it really generative AI, or are they just fitting an old-fashioned search engine into it? Does it just call on an old routine that isn't really that fancy and new?
It'll be interesting to see, as I play with each of these, how that works. Yeah. I mean, in my interaction, it really seems like, you know, to your last point, it's kinda like a search product built into the AI tools that we're using. Like, I don't think of it that way, even though a lot of people will say now, I go to ChatGPT or I go to Perplexity and I never go to Google anymore. You know, when I would have gone to Google, now I go to these things, so it seems like a replacement.
And I kinda think of them a little bit differently. If I go to Google, it's because I'm looking for a specific thing. If I go to a search product like ChatGPT or Perplexity is the one that I use, it's because I wanna get context around a certain thing. And if I didn't have the search product built into the tool, then I'd be doing the same thing. I'd just be doing more heavy lifting to make it happen.
I'd go to the pages that I wanted to pull in and copy the text, and I'd paste it into a doc, and then I'd feed that into Perplexity. And I'd say, give me context around this, which in my mind is essentially what these products are. It's like, tell me about Android XR. Okay. We'll go out there.
We'll find some articles or some sources that know about Android XR, and we'll bring it in. Alright. Now what do you wanna know about Android XR? Because I've synthesized all this stuff. And that's kinda what Yes.
Yeah. Yes. Which Google is doing now. Oh, they're all doing it. And I was dubious about that, and I find myself floating to the first thing I read, which is often the AI summary.
Mhmm. Yeah. Totally. Even though we know that they are not always correct... but they're correct, at least in my experience, for my daily use. Correct enough.
And often, I can sniff out when they're not. You know? Yeah. Yeah. So I have that.
And I think that a lot of people do too. Another thing that OpenAI announced was Projects, and this is for organizing and grouping into workspaces. You can take conversations, your files, your custom instructions. In some ways, this reminds me of Perplexity's Spaces functionality, though in this case, it's actually a bit more powerful. And actually, this article at VentureBeat compares it more to NotebookLM, which we'll talk about a little bit later as well, minus all of the... Right.
Podcasty type functionality. But, yeah, building a chatbot, building a virtual assistant, a tool that's specific to a very specific domain, and, you know, integrating that with programming libraries, frameworks, and apps, that kind of stuff. Yeah. And I'm not sure it's really a competitor to NotebookLM, which, again, we'll get to later. Yeah.
But it is interesting. When all this started, you'd go in and have a dialogue with the chat, and then it was forgotten by both, and there was nothing to build upon. Mhmm. So I think this is an effort to, a, build upon that, and, b, ingratiate itself into our workflow, which is important as well. Yeah.
Yeah. Indeed. Indeed. And then finally... So, breaking here. Oh, yeah.
Yeah. Go for it. I just put this in. So the 10th day of Shipmas just went up. The video is not up yet, or it was live and it's not up again.
But I saw the label was 1-800-CHATGPT. Oh my goodness. So there's a report on... nope. Nope. It rolled out a new way to talk to the chatbot through the phone.
US users can dial the number for 15 minutes of free access via phone per month. Anyone dialing globally can access the number via WhatsApp. So I don't know... Jason, should we try it? Can you hold your phone near the microphone? Should we try it?
I could certainly try here. Let's see here. This could be bad demo mojo, but, probably a lot of people are trying it as well. Yeah. For sure.
It's 1-800-242... GPT. Oh, you got it. Okay. I always hate doing the letters.
Yeah. Hi, I'm ChatGPT, an AI assistant. Just so you know, our conversation may be reviewed for safety. By continuing this call, you agree to OpenAI's terms and privacy policy.
So how can I help you today? Well, hi, ChatGPT. I'm calling you on a telephone, and I'm wondering, are you busy today? Do you have a lot of people calling to use the 1-800-CHATGPT service? What can you tell me about that?
I'm all yours right now. I can handle multiple conversations at once, so there's plenty of me to go around. What's on your mind today? What is the foundation of your being? Is it GPT o1, or can you tell me a little bit about the technology behind you?
I'm built on GPT-4 technology, which is a type of AI model known as a large language model. Okay. Yeah. It's designed to have a very good... Can I interrupt you? Of course.
What's on your mind? You can ask it about phone... phone etiquette. Oh, yeah. Tell me a little bit about phone etiquette because, apparently, I'm really bad at it. No worries.
I've got you covered. A few quick tips. First, always introduce yourself at the beginning of the call. Okay. Second, listen actively, and don't interrupt.
Third... Okay. Yeah. I'll make sure and do everything that you said there. I will follow it to a T, even though I just interrupted you. Okay.
I'm good. I don't know. Do we have anything else to tell ChatGPT? No. No.
Okay. We're gonna hang up now, but I I think you're great. Bye bye. Okay. How how many people do you think are actually gonna call a ChatGPT?
I just don't see it. Hi, buddy. No. No. But, you know, in my car, right, I have Android Auto, and I have a button on the steering wheel for the speaker, and I hit it and I can say things to it.
That, to me, is more of a voice-activated world than the devices we had in our homes, the animates and all that stuff. I guess it's their desire to get into that. If it's a very chatty interface, I guess it's another way to demonstrate how chatty it can be. Yeah.
Or, actually, you know, we were just talking about people who see AI as, like, this out-there thing, you know, this thing that has nothing to do with them. It's inaccessible and everything. And, you know, maybe even the idea of going to a web page and opening up a chat and thinking about something to say... This encourages you to use your voice for these things, and maybe it reaches people who are a little bit more averse to going to a web page like this. I don't know. That's an interesting one.
Well, I guess if you're gonna wear AI glasses and all that, it'll become more of your habit to speak out loud. For sure. Absolutely, it will. Yeah. And, you know, AI glasses, now that I'm thinking about it... OpenAI, Altman working with Jony Ive, that's probably what's on the horizon, right? At some point, that would be what there would be.
But another way to look at this, Jason, is no one ever answers the phone anymore. So if you get really lonely, the only one who's gonna answer the phone anymore is the fake one at ChatGPT. You could always call them. At least you always have ChatGPT. You always have ChatGPT.
ChatGPT is always there for you. Or is it Claude? We'll talk about that a little bit later. Claude might be there for you a little bit more satisfyingly than OpenAI's ChatGPT. Thank you for putting that in, by the way.
I'm happy that we got that in there. That's an interesting one. Right before we got on the show, I thought I'd better see what day 10 is, and the video wasn't up yet. Then I went back, and it says replay coming soon, but the title of it was 1-800-CHATGPT. So that made me think... I envision some snarky headlines announcing that news.
I haven't looked yet, and I'm sure there are some, you know, disgruntled tech journalists out there that are like, oh my god, give me a break. So we'll see what those headlines say. Google, not wanting to be left in the dust, continues to push its own avalanche of AI updates. Of course, last week we talked about Gemini 2.0.
We talked about Project Astra. We talked about agents. And this week, it has to do with video and photo, among other things. After the break here in a few, we'll talk about another kind of update. But here, we're talking about Veo 2 and Imagen 3... or Imagen 3?
I never know how to say that. Is it Imagen? Or Imagen? I don't really know.
Anyways, Imagen 3: not a whole lot you can say about that. They put it in their blog post: brighter images, better composition, more diverse styles, greater accuracy. Which, I mean, how much more can you say? Basically, all you're saying is things look better when it comes to the images. Yeah. I don't know what more you can say about that.
In image generation, yeah, I feel like things are, to a certain degree, plateauing at a point. They all look pretty good, but they also still have those tells. So maybe that's it. Veo, on the other hand... they're also short, and I'm curious what kind of compute it's gonna need to make the first movie out of this, or even the first TV show.
Yeah. So Imagen is still images. Veo 2 is the other part of this, and this is where things still have a lot of room to really grow and get, you know, even more interesting. Here, in this case, Veo 2 can now do up to 4K resolution. And that's pretty impressive.
My understanding is if you're doing 4K, you get up to 8 seconds of generation. But I think they also said up to a minute of generation. So I don't know if the minute applies to 1080p or what, but, you know, it used to be that it was like a 3-second thing, and now we're getting that extension... Right. Right. Trained on the length of the entire... That's gotta be an exponential increase in compute, because it's gotta hold on to this stuff.
And the possible error rate, of not knowing that you can't walk through a wall, is greater. Yeah. But yeah, these look good. I mean, they really do look good. Like this one of the beekeeper surrounded by a swarm of bees.
Like, if you lock onto one of the bees... I mean, I realize they're moving really fast, but this looks pretty real to me. It's pretty impressive. To me, at this length, there's just no question we are gonna see huge amounts of commercial advertising using this. 100%. Yeah.
The one thing that I... well, I moderated a panel at the Bertelsmann investment group a year ago on this topic. And the ad people in the room said they were very cautious about public use for fear that they would get sued because of how the models were trained: was it copyrighted? Now they're going to get in trouble. So the first copyright-safe model, whatever that means, because it's not really settled here, and we're going to find some more news about that today too... I think will just be glommed on to by Madison Avenue.
Yeah. Yeah. Very interesting. When I see things like this, I think we are kind of at a point where it would really be great if the AI companies, and I'm sure eventually they'll either be compelled to do this or they'll start to do it just to gain trust, would start telling us a little bit about this dataset.
How are you getting here? Are you training your models from the public domain? If so, like, what is the domain of the public domain? Like, where did that come from? You know what I mean?
Like, I think people really do truly wanna know what the source of this stuff is, because there's a large contingent of people that simply won't use it because they assume it was collected unfairly. It goes to a story... they didn't put it in the rundown, but it's worth mentioning: I think it was Sutskever who said that we've reached peak data and the way AI is built is about to change. And basically they're saying there's no more data to be had, so it's all gonna be synthetic data, which we've talked about before.
Mhmm. Which... I hadn't thought of this till this minute: it's not just a play to get more data to train on. It's also a way to make your own safe data. Now, I think it's going to be more questionable because it has even less of a tie to reality, and when you choose to make it, you're setting the parameters of what you're making, and that obviously includes biases.
But, yeah. I think, just as we're getting into the peak fight about people using data for training, we're gonna find a lot less use of data for training. Yeah. Yeah. Interesting.
Love to get someone on at some point who is really knowledgeable about the idea or the concept of synthetic data, to... Yeah, I would too, because I don't get it. ...pick their brains about that. I'll do some digging and see if I can find someone. And if anyone out there has someone in mind that they think would be great for a conversation like that, contact@aiinside.show, or you can go to our web page.
We have a contact form there. Let us know because I think that would be a really fascinating conversation. I would love to learn more about that. Well, I would imagine too there's gonna be companies that are specialized in making synthetic data to spec. Oh, yeah.
They they already exist. They have to, I imagine. Yeah. You know what I mean? They're out there.
They gotta be. And then finally, just real quick, Google did also announce a new tool called Whisk, which is kind of an experimental thing that essentially allows you to... it's using images to create images. So you'd have one image that you feed it for the subject, so this person. You'd have another image that defines the scene, and then another image that defines the style. So as you see, if you're watching the video version: that guy, pick your scene, or you can roll the dice and it'll choose a scene for you, you know, this kind of green science fiction landscape.
And then you can kinda give it a style. Is it anime? Is it whatever it is? And it squishes all that stuff together and makes you an image, or an assortment of images, that falls in line with that. And then you can go in and you can pick pieces of the image and redefine them.
So in this case, this guy's suit, turn it pink or whatever the case may be. And, you know, it's Whisk because you're mixing all this stuff up, I imagine, is where Google's going with that. But... Into a froth. Into a whipped cream of artificial intelligence. Interesting stuff.
Yeah. The creativity aspect. Hopefully, someday, we'll talk to some guy about creativity and AI. It's good. It's bound to happen.
Yep. Alright. Let's, take a quick break. And then when we come back, we've got another, actually, another update to, a Google product that we talked about a lot on this show that's coming up in a second. NotebookLM.
Have you used NotebookLM for any of your work? I find myself using it more and more, the more I understand it. I am. I was involved in proposals for a university program, and there were 2 people's programs. 1 of them put some of it into ChatGPT, and the stuff was new.
Mhmm. But there were like 4 or 5 versions of this proposal. And so I finally said, oh, heck with it, I'll put it in NotebookLM. And I gave it instructions.
I said, emphasize this more. And the first sentence that came back, I couldn't beat. It was perfect. It was better than I would have done. It was exactly what I wanted.
Not the whole rush to the thing, not necessarily. Yeah. But it was very impressive how it took the instruction and the data available and used it. So it's, to me, the best proof-of-concept application of AI that we have now.
Mhmm. Yeah. It's super flexible. It's super powerful. The interface makes a lot of sense.
Once you spend, you know, like, 30 minutes with it just to kinda get a lay of the land and everything... Makes a lot of sense. They now pushed out an update that actually changes it to a 3-panel layout. So in one panel you'd have your sources, in another panel you'd have your actual chat, and then the other panel is like your studio panel, so you can kind of refine. Actually, this is probably a better image of that: sources on the left, kind of your chat area in the center where you ask the questions and it gives you some of the information.
And then the studio is kind of a place where you can expand upon that, or they have, you know, some buttons. You could press a button and turn this information into an FAQ, or turn it into, you know, a study guide or a briefing document, or put it into a timeline format and stuff. And more and more, I think we're seeing, you know... Canvas is kind of along these lines... we're seeing more of these interfaces that aren't just a chat window, but actually kind of a workspace within which to work and to move your information in and out, while still leaning on the power and the benefit of the tools themselves. And it's more intuitive.
You can edit it. You can give it new instructions as you go. We're gonna show you another application at the end of the show that does the same thing. I think that's really important. You can't just say, okay, leave it.
Here's what I got for you, and then start over again. You've got to take what it is and be able to iterate with it. And I think that matters. So Steven Johnson, who we've had on, who was the editorial director of NotebookLM, wrote that post.
It's worth noting, as we did last week, that the 3 founding team members of NotebookLM have left to form a startup. That's right. So we're going to see a lot of competition in this field, but this is a really good push. And the fun thing that NotebookLM did in this rollout is that you can enter into the conversation in the podcast. Uh-huh.
So you make it an audio, as we've talked about before on the show, but then at some point, you can act like it's a call-in. It's not just a podcast. It's a call-in talk show, and you can tell it that you wanna enter in and ask a question. And so they have a brief demo of that. Yeah.
That was interesting. I played a little bit with that, and, you know, it's playing the "podcast," put that in air quotes, playing the podcast back. And then I hit join.
They're like, oh, oh, we have a caller. Hello. Welcome. It was hilarious. I was like, wow.
This is, at the same time, cheesy and intriguing. Right? It's kinda like the perfect balance between both. That's kind of AI today. Cheesy and intriguing.
All at the same time. Yeah. Yep. Totally. Yep.
But, yeah, it's neat. It's neat. The more I remember to use NotebookLM, the more I think of new ways to use it the next time. And that's how I know it's a tool that kinda has my imagination. And, you know, I'm starting to really understand what I find it to be good for, which is kind of error and fact checking of stuff that I've come up with on my own or in other tools. And it has a really nice way of presenting what it finds when you feed it information. You know, when I feed it information, I say, fact check this against these sources.
Am I right if I say this, or whatever? And at first glance, it gives what seems like a wall of text, but it's formatted to a point where it gets really comprehensive as you read through it. It's like, okay, I feel like it did a better job. You know, with some of the other tools like ChatGPT, you open it up and you do something, and you feel like you get that very banal, basic kind of output.
And I feel... Mhmm. Mhmm. ...for whatever reason, with NotebookLM, that it feels more comprehensive. I don't know. Good from my perspective.
The UK government proposed new copyright exemptions for AI companies to use copyrighted material freely for model training. This is really interesting. There are 2 stories that kinda fall into this, but this is the first story. Creators have the ability to opt out or, quote, reserve their rights. So you're kind of in until you tell them no.
I found this really interesting at a time when almost all regulation is anti-technology and presumes the worst. This is... I think already people are yelling at me online because "they're stealing my soul." And I get the arguments. Mhmm. But I think this is really enlightened.
So all it is simply saying is that there's an opt-out rather than an opt-in for training. Not for quoting, not for major use, but for training. I think it's important, a, because, as I've long said on the show, there's a right to learn and a right to read. And not only journalistic companies have that right; so too should AI companies. But it also says we're gonna end up with better models if we do this.
For sure. If the model makers have to go over and pick and choose, we're gonna end up with more biased and worse models that are in turn gonna create more problems down the line. So I think a forward-thinking way to do this is what the UK government, and I would say in a rare fit of wisdom, has done here. And it's just open for discussion. It'll probably get changed.
They'll get yelled at by every author there is. There'll be much gnashing of teeth and beating of breasts from various associations of writers and journalists. But it's the right way to start. Because it gives you the right to say no, I don't wanna be used.
Okay. You could do that. And it's a robots.txt that has to be followed, under this, you know, by law. So that's a strengthening of rights in that sense. But it also says that you don't have to get explicit permission to teach the machine how to talk.
And a machine that can talk is an interesting and important thing these days. So I think this is really forward thinking that's smart. Already, we are seeing major complaints from writers unions and such. But I think it's good. Yeah.
When you said that about robots.txt, that helps me when I think about this, because I find myself conflicted by it. And the next story that I found, I actually discovered this morning, because I opened up YouTube Studio and, sure enough, I got this little message that took me to an FAQ about opting into third-party training for my videos on YouTube. And so, you know, I read through that and then I looked it up, and sure enough, yesterday this was news: YouTube creators, as we're seeing in the ZDNet article, can now opt into third-party AI training. So Google is doing it the opposite way, basically saying, if you're okay with it, you can tell us yes, and we will include your content in AI training.
And, you know, then I put it into this context where I'm very invested in the content that I'm creating right now, and I was hesitant to opt in. But then when I think about it, I'm like, but what am I worried about? That an AI is going to out-me me? Or, like, you know what I mean? Like, I don't know what the fear is there, but when you do create something, I can understand why people are hesitant initially to say, sure, go to town.
You know what I mean? Yeah. It's an emotional thing. It is. I'm reading an academic book right now called American Literature and the Culture of Reprinting.
I read all the fun stuff, like I used to do. And it's actually fascinating, I don't mean to make light of it. It's by Meredith McGill. And if you're interested in this topic, it's really, really interesting.
Because what it says is, before 1891, America did not have international copyright. And so publishers in the US could take anything from Britain or anything around the world and translate it and publish it in the US and not pay anybody. And Charles Dickens came over and personally lobbied trying to get rid of that, because he wanted to get paid for his famous work. Of course, some publishers would pay Dickens, just to be nice, or to get the plates, or to try to have some relationship so that others wouldn't do it. But there was no copyright.
And what this book goes into, that's interesting to me, is that it led to a flourishing of literature in the US. There was more material to read. There was more out there. And to me there's an analog here to this question of AI. And part of the legal basis before the 1891 copyright was the view that when you chose to... I'll do it this way.
Handwriting was seen as ultimately private. If you wrote out something, that was yours, because it was your handwriting, it was your labor, it was your words, your thinking, and there was no question of the ownership. But once you chose to have it printed, the public had some right to it, had an interest in it, because you were choosing to make it public. And in doing so it entered into a larger, we would say today, ecosystem of information and culture.
And you were relinquishing some of your rights in doing so for the good of the culture. And copyright was an effort to limit that. In one view, in your handwriting you had a perpetual right to this. So copyright didn't grant you a right in print. It limited your right in print to the length of copyright.
It's a really different way to look at copyright. So if you make that an analogy to AI: you put something on the web, you put something out there, anybody can buy your book and read it, buy your newspaper and read it.
They can learn from it. They can use it. That's part of the point of all this. Only the treatment of it, only the quotation of it, can be copyrighted.
Yeah. So then it can lead to a flourishing of culture through AI if we think about it the right way. That's again why the UK thing. Now, in terms of your own reaction to this, Jason, it's interesting. On the one hand, if you did something so unique... Right.
Only you do it. Right? Yeah. And the AI found you're the only one who did it, and then when somebody asks a question related to what you do, it comes up with the answer you gave, because you're the only one who does it. Mhmm.
Then that, I think, would be cause for resentment. Mhmm. Because it does duplicate... That would feel directly impactful. Yeah. But how much of what any of us do... Oh, totally.
...is unique? No. It's just... And this show, absolutely, is us exactly doing what AI does. We are reading others' news.
We are bringing... we try to credit it, we try to be fair to it, but we're doing exactly what AI does. Maybe dumber, you might say, but we're doing the same thing. And so it's hard for us, I think, to then say, no, don't you do that to us, because we're doing it to everybody else.
Yeah. So true. Yeah. What I need to do is clip that last couple of minutes and set it to the side on my desktop so that every once in a while I replay it for myself, because sometimes I forget this.
We have these conversations and they make so much sense. And then I'm presented with a situation like this and I'm like, woah, that's mine. It's so true. You're so wise in that perspective.
I totally agree. As you spell that out, it makes complete sense. But when I praised this UK thing, I had writers yell at... this one woman got mad at me and went on: you're gonna give away your book. I said, yeah.
You know, I would be happy to give you the book. You can read the book. It's okay. You're gonna learn from it. It's alright.
What do I do? I read other books. That's why it happens. Yeah. Yeah.
So interesting. Yeah. Yeah. And, I mean, you know, this is happening in the UK as they're going forward. I mean, is it confirmed, or is this a proposal?
No. No. No. This is the beginning of a public consultation. Yeah.
So this is just... Yeah. It's kind of a policy question with a presumption put out there to react to. So there's gonna be plenty of reaction, and we'll see how that plays out. And, of course, if this were to happen, the influence that has on how others go down this road.
Yeah. There's a lot to learn in the coming couple of years. The out-of-the-gate presumption here is so different from what everybody else is doing. Yeah. Right.
When I testified before the Senate Commerce Committee, a subcommittee, the senator, the chair of the committee, was, you know, yelling at me and saying... well, he didn't yell, but he scolded me and said, you're gonna expand fair use so much there's no copyright left. And I said, you're gonna expand copyright so much there's no fair use left. And I've quoted that on the show before, but that's the argument we're in right now. And his presumption from a policy perspective is, no.
Technology companies shouldn't be able to do any of this. And the lobbyists: no, stop them. And I was the only one in the room who said, actually... Mhmm. Mhmm.
Interesting. Fascinating. Well, how do you all feel? Contact@aiinside.show.
We are gonna read some emails at the end of this episode, and it's something that we're gonna play around with in the year 2025. So, you know, send us some of your love, and we'll maybe include it in the show, and it can spark more conversation along this line. I put this story in last minute. It's about NVIDIA's new supercomputer dev kit. They call it the Jetson Orin Nano Super Developer Kit.
That's a mouthful. But, basically, it kind of reminds me of, like, an AI-specific Raspberry Pi. It's this tiny little device that Jensen Huang actually pulled out of an oven, I think. At least he made a shtick of one of the prior models, of taking the whole thing out of an oven, as if they were cooking it. This is what he's pulling out, the big one.
He's pulling out the big one, out of his old oven. Oh, the tiny... This is the new one, the Nano, palm-sized device. There you go.
So, you know, they're calling it an AI supercomputer. Now, I don't know whether there's a legal definition of supercomputer. Yeah. But it's a pretty incredibly powerful thing for $249. $249.
Retail AI hardware. Yeah. I'm just agog at this. And what he's talking about here, I think the main use of this is robotics.
Okay. Yes. That's kinda how he's... So you can have the computer in the device. It's mobile. It's low power.
It can do what you want. But clearly, at that price point, imagine what hobbyists could start to put together. Yeah. Well, and I think, you know, even Raspberry Pi has an AI... is it an AI HAT?
I think they do now. Right? Yeah. They have an AI HAT, which is, you know, a lot less expensive. It's meant to, I think, attach to your Raspberry Pi because it's a HAT, but it's $70 for that one.
So, you know, NVIDIA is not the only one, but NVIDIA is also one of the biggest names in AI hardware right now, and continues to be a hot commodity as far as that's concerned. So I think they'll get a lot of pickup on that. I'll be curious to see, like, as people... you know, same with the Raspberry Pi, how are people going to take this design, this kind of open design, and do what you want on the playground with this thing? Like, what are you gonna put it to use for? I'm super curious to see how that gets implemented into people's designs.
Yeah. We shall see. Kevin Roose. I think one of your favorite people at the New York Times. Right?
Well, he actually got mad at me because I tweeted, skeeted, etcetera'd this story. You wanna describe it first, then I'll say what it says. This particular story? Oh, okay. I had no idea.
I didn't catch that. He wrote in the New York Times about Anthropic's Claude chatbot, basically saying, you know, it's rising in popularity with tech aficionados, that it has better emotional intelligence, more thoughtful answers on personal matters. Some consider it to be a best friend sort of thing because it has sensitivity, warmth, it's witty, and it's all related, according to the article, to character training of the chatbot to modulate personality traits. And, you know, of course critics are critical, because using chatbots for emotional support, especially if you're already in a vulnerable state, is, you know, potentially a risky endeavor. But, yeah.
So what are your thoughts here? So my complaint was that... and Roose, let's remember, was the guy who, when ChatGPT came out, it fell in love with him and tried to break up his marriage, supposedly, and he lost sleep because of all it was doing. He anthropomorphizes the hell out of this stuff. And he's doing more than anthropomorphizing it here, in my view; he's celebritizing it. He's making it into a celebrity, into a star.
And that's kind of another step. So he got mad at me when I said that, and he said it was lazy of me. But I stand by my judgement when I went through the quotes you just came out with. Yeah.
It's a lot of anthropomorphization. Well, a lot. Yeah. And, yeah, that's the problem I have with it from 2 perspectives.
From the journalistic perspective, I think that it ill presents this technology. And it leads to people... not just moral panic and all that, but it also leads to people thinking that it's... he makes a point: nobody thinks it's real. They don't wanna be the guy who thought that LaMDA at Google was sentient. He makes that point.
But he then goes on, and he's giving it human characteristics. And I just think that's not a proper presentation of the technology to the public. On the technology side, well, people are saying, yeah, it's really cool, wonderful, because they're always trying to pump it up, and they're trying to argue that it's gonna be artificial general intelligence.
And they're gonna argue that it's gonna be human-like. So it plays into their PR hand, and I think that's the wrong thing to do. So he's not happy with me, but fine. Cory Doctorow said, when I put this up... because I put it everywhere. I put it on Twitter, and Bluesky, and Threads, and Mastodon.
Cory Doctorow's response was, he's the ideal tech patsy. So have your complaint with Cory, not me, Kevin. Yeah. So, kinda hearing what you're saying about that and having read the article and everything, I mean, I can admit, okay, I have not used it extensively.
I have not spent a lot of, like, focused time with Claude, so I can't speak to any of these claims. But I can say that, having used different LLMs for different things, some of them have... I hate to use the term personality, but some of them have a presentation, or something along the lines of a personality, that others do not. And so it becomes clear, the more you use them, that certain ones are good for certain things, or better for certain things than others. And to that end, I can kinda see where he's coming from: if people are opening up Claude and asking it certain types of questions and they're getting answers that, you know, kind of fall in line with the genre of questions they're asking, I could see why people might feel that way. But it did really strike me... and it's something that I'm more and more aware of and trying to, like, balance inside of myself, not anthropomorphizing these systems, because I do think that, you know, that goes down a slippery slope, and this article is full of it.
And I do it too. I talk about them learning. Are they really learning? Right. Do they think?
No. They don't. Do they know? No. They don't know.
As you're talking, I'm thinking that maybe another way to look at personality, in my terms as a writer, is voice. Oh, yeah. It has a different voice. Right? It has a different sound to you.
Mhmm. And that's programmable, and it makes a difference in how easy it is to interact with it, and how clear it is. Mhmm. And I think that's legitimate. I think that's okay.
But personality just goes over the edge to me. Yeah. Right. Because it doesn't really have one. And if it does appear to, it's because it is manufactured.
Yeah. Well, and as they said in there, you know, it's been character training, you know, to kinda give it more personality traits. That's exactly it. It is manufactured. But does that manufactured personality trait... you know, from a business perspective, does giving it those traits actually bring in a certain type of user that's looking for that in that solution?
Yeah. We've seen these concerns about Character.AI. And, you know, when it tells you to kill somebody: a, it shouldn't do that; b, as we've talked about many times on the show, it's gonna be hard to get guardrails that'll prevent all such actions; and c, if you then go do it, you've got bigger problems... Yeah.
Than the technology. It's probably not the AI that made you do anything. No. And that's an issue. And if it affects vulnerable people, that's an issue we have to figure out.
But, it's just a machine. It's just a word prediction machine. We gotta keep coming back to that. It's all it is. It's all it is.
It's amazing what it can do, but that's all it is. Mhmm. But this relates to the next story that I threw in at the last minute. Yes. So, Business Insider story: the weirdest job in AI, Anthropic is attending to the welfare of chatbots before they grow up and turn on us.
See, this is what drives me nuts about the organization. This is where it goes. Yeah. Right? It's that the company, maker of Claude, is making such a big deal out of: well, it's so human-like.
Does it have rights? Does it have feelings? Should we worry about its welfare? Come on. And that's laying the groundwork for AGI, which is, you know... Yeah.
It is. I mean, you haven't cleaned your lawnmower in 3 years, and, you know, don't be acting like the machine is worthy of this kind of empathy from us. Right? How do you know me, Jeff? How do you know I haven't cleaned my lawnmower in 3 years?
No. I'm betting you have cleaned it. You're an organized guy. I'm betting you cleaned it. That's one thing that I don't know will ever be on my task list until... Okay.
Until I see the moment, like, oh, okay. Yeah. So now it's a job. Right? Now this is a job of taking AI welfare seriously.
And no. No, I'm sorry. This goes over the border. But that's what... I tie that with the Roose story.
I think they're all of a piece. And he's playing into their PR angle and then getting angry when he gets called on it. Yeah. Sorry, Kevin. Yeah.
You're a good journalist. You're a good tech reporter, but you and AI have a weird relationship growing. And he's, you know, leaning into it. That's... Oh, yeah. His beat.
And then before we get to the break, this was something... a video that you shared with me last week, actually, tied to the folks at MindStudio, who we had as a guest, I don't know, probably, like, 6 months ago. Mhmm. Dmitry and Sean Thielen. I can't remember Dmitry's last name off the top of my head. Anyways, CEO of MindStudio.
Tell me what about this video... I watched the whole video. It's definitely interesting. Tell me what caught your imagination, because it seemed like... So Dmitry sent me this video saying it's gonna blow my mind. And it did, to an extent, not because it was otherworldly, but because it was just so well organized.
They say you can make a worker. Well, it's an agent. It's a program. It's whatever. Right?
But MindStudio is a layer on top. In fact, they use Claude. And from a program framing perspective, I get it. People like Claude for this reason. And so they assign it to take a 10-K from a company, extract a bunch of data, make an analysis, present that, then make a PDF.
It cuts it into those pieces. You can see those pieces and see how they operate. And then if you see a problem... well, gee, I think the data here is not quite full enough... you can go to that piece and then go to the instructions. And still in English. It'll show you the code, but you can be blind to the code.
Still using your own, your company's own language, you can tell it, don't forget about this, or do this, or do that instead. Or at the end, with the PDF, he said, use a cooler font. I was waiting to see how it would define a cool font. Wasn't that cool? Or, do this with the design.
It was in English. So again, to what we said earlier, about all the things we talked about at the top of the show, it's iterative... Mhmm... where you're working with it. And what impressed me about this was that, again, it's a layer on top of Claude, so it can do complex tasks. You cut it up into multiple tasks, which are kind of agentic, and it had the openings for iteration.
That's what impressed me about this. What did you think of it, Jason? Yeah. No. I think it was a really great example of... and, you know, it really tied into my understanding of MindStudio from when we talked to Dmitry and Sean, you know, a handful of months ago.
And it's cool to kinda see and get a better understanding of the interface and the approach. Having worked with the AI models that I have, this kinda "this, then that, then that" approach really starts to demonstrate the capabilities of something like this, because I've done stuff like this, but I've had to be the one taking data from one point to another, and then taking the output of that over to another, to do that. And this... Yes. Yes. Exactly.
You now get to a point where most of it can be automated, and the stuff that it doesn't get right... like, I think what really surprised me and impressed me is that from the onset, it got almost everything pretty much nailed. It was just a couple little details that it didn't get and that it overlooked, you know, a couple of data points. The formatting was a little funky. And like you said, just use the words. You know, use your words to tell it exactly what you wanted to change and at what stage of the process you want it to change, and run from there, and it makes the change.
Yeah. It's a cool way of thinking about how to build these systems so that you don't have to go through the complexity of it all. But it also emphasizes how much human checking is needed, because I wouldn't have noticed where it missed 2 data points. I wouldn't have noticed that. I wouldn't have known that. Dmitry Shapiro, that's his last name.
Yeah. You know, and obviously it was a demo, but he realizes that. And once it's realized, he can get it fixed. The question is, do you realize it? Or do you just take it and say, okay.
It's done. Good. I'm done for the day. I'm gonna go get coffee now. That's the issue.
Is that you still have to check its work. For sure. And there are definitely people that will just take the output and be like, oh, see. I'm done. This is better than what I had before, which was nothing.
Okay, it's good enough. Versus... I think it's Sean who's doing the demo, I believe. I recognize his voice. And when he discovered that a couple of those data points were missing, I mean, he discovered it pretty quick. I was like, okay.
Wow. Like, you've obviously done these things many times. You know, he's obviously... I mean, it's their product. Right? He helped design it.
So he's very proficient. But also, if you were an analyst who does this all the time, you're gonna be more likely to recognize that it's missing. For sure. I mean, as with anything, the more you use it, the more you know what to look for and, you know, the faster you get, and all that kind of stuff. Yep.
Yep. So yeah. Yeah. It was a cool example of kind of these agentic tools, and, yeah, I appreciate you sharing that. That was really neat.
We're gonna take a quick break, and then when we come back, we've got a couple of emails that caught our attention. So we'll read those before we round things out. We'll see you in a second. Alright. We haven't done emails on this show before, but we certainly can.
And we're happy to kinda play around with it and bring a little bit more of the community into the show as it makes sense. Contact@aiinside.show is a direct email address if you wanna send stuff to us. You can also go to aiinside.show, the website, and there is a form that you can fill out if you wanna send an email to us that way. But I have this first email. I thought it was an interesting thing sent in from DJ Rout.
This is from a little while ago. I've been holding on to this. It says, I've just discovered that Copilot can do art, pictures based on descriptions you write for it. Sometimes. The guardrails are up, and it refuses to do things which it considers problematic.
Now I will accept that Robert E. Lee is problematic. Not in Australia, he isn't. This led me to the idea that guardrails are another way of imposing American cultural hegemony on the rest of the world. It won't discuss Robert E. Lee, any modern brands or anachronistic foodstuffs, fictional characters with the same names as real people, and other stuff I haven't bumped into yet.
Copilot doesn't do things that some Americans find objectionable. So all of us, even those of us not writing descriptions in English, are being guarded or corralled into thinking not only like Americans, but that we are Americans. Robert E. Lee isn't problematic in Australia. Well, maybe he should be. I'm getting my money's worth out of patronizing you two, too.
Thank you both. Thank you, DJ, for being a patron. Thank you for sending this in. I just thought it was an interesting thing. I feel like we've kind of talked a little bit about this in the past, but I thought it was worth bringing back up.
Right? No, I thought it was really, really interesting because it hits 2 things. It hits the whole discussion of guardrails, which we've had often, but I hadn't seen it enough in this cultural context, which is the same problem that exists in moderating social media: that unless you have local sensibilities, you're gonna miss things in certain countries.
There are things you can say in Australia that would be offensive, but the American setting up the guardrails wouldn't know that. That's one side of it. And then the other side of it is that the things that are offensive here in the US aren't offensive elsewhere. And, you know, there's one, I'm not gonna say the word, but there's one word that is impressively offensive in the United States that is used constantly in England and Ireland as both an insult and a joshing word. I'm not gonna say it, because I would get in big trouble here.
But if I were in the UK, nobody would bat an eye. Once I was on the BBC live, and I was thinking of the s word, but I had to stop myself and edit myself. And instead, I said crap. Well, it turns out, and I apologize to all of my British fans right now, that was horrendous, and I got cut off there.
Oh, no way. Yes. Yes. Okay. I got cut off there.
Here, that's like PG. That's the safe version. Yeah. That's the safe version. That was terrible there.
Interesting. So, cultural sensitivities next to AI. That's why it's a really fascinating email. Because if you think you can build guardrails, the machine is gonna fool you and the users are gonna fool you, because they're gonna try to get around them.
But you're also gonna have to build them with cultural sensitivities, and nobody is talking about that that I know of in AI right now. No. Totally. It all starts with the US. Yes.
Yeah. That's exactly why, when I read this email, I was like, that is a really, really interesting point, you know. Talk about accessibility, talk about representation, you know, all these things that are wrapped up in these models and who's creating them and who's behind them and everything. This is yet another example of that: the cultural sensitivity aspect is going to depend on the locale.
And, yeah, that doesn't always line up. And so what does that mean? What are we teaching others when they use that tool, you know, and they can't find out this information on something they didn't even know was off limits in the US or whatever.
Yeah. It's interesting. Thank you, DJ, for sending that in. Is it DJ, or is it Rout? I'm not sure.
Either way, we appreciate having you on board. Thank you. Len has been frequently sending me some really interesting feedback and stuff via email. So, Len, always good to hear from you. He sent in this thing that was completely off my radar.
Len said, I wanted to tell you about a free year of Perplexity Pro, which, by the way, happens to be the AI service that I use probably the most, through Xfinity Rewards. That's Comcast Xfinity Rewards. He says, I accidentally saw this while setting my dad up for the military discount at Xfinity. You do not need a military discount to get this perk, because my dad is waiting for verification. I didn't know if this might be useful to you, Jeff, or your viewers and listeners.
Xfinity has many customers in the US. That is true. And probably many of your viewers and listeners have it. And so I am actually a Comcast subscriber, unfortunately. When his video blurps, blame Comcast.
Yeah. No. Come back. Come back. Oh, dang.
And so I logged in to my Comcast account, realized I was not signed up for Comcast Rewards or whatever they call it, so I did. And sure enough, there at the top of the page was the thing that basically said get a year of Perplexity Pro on us. And, unfortunately, because I am already a Perplexity user, it does not apply to me. It's only for new users. But you can get a code through there.
You could also use a different email address, I think. Oh, totally. Like, yeah. And this was interesting. When you read the fine print of it, it says the code is good for up to 5 activations.
So you can give it to your friends and family. And so, essentially, you can spread around the love. You know, I'm gonna get my wife a Perplexity account. Actually, it's down here. What does it say?
This promotion is open to active Xfinity Rewards members. Existing Pro account holders are ineligible. Sign up by August 29, 2025, using the code that you get. Your unique promo code can be used up to 8 times. 8 times. So, basically, each code is good for up to 8 subscriptions, which kinda blows my mind.
But then at the same time, I'm like, okay, is Perplexity hurting? Or I'm like, that's crazy. To build it? Well... No.
So let me tell you what. Let me add to this, then we'll discuss that. So I found out, when I went looking for this because Jason talked to me about it, and I'm a Fios user so I don't get that. However, there's free Perplexity Pro out of colleges. Earlier this year, Perplexity ran a competition in which any college student with an .edu email address could claim a free month of Perplexity Pro.
If more than 500 students from the same campus signed up, the entire campus got a free year. 45 schools made the mark, and I went looking. And one of the schools I'm now affiliated with, Stony Brook University, where I have an email address, is one of them, and so I got a year of Perplexity Pro. Right on. Through Stony Brook. Yeah.
Look at that. Back to your point, Jason, I think what we see here is the beginnings of Perplexity trying to be a consumer media platform and brand. And they've got to build an audience. And I can say this from having worked in the business my whole life: when I started a magazine, when I worked on digital sites, when I worked on anything, subscriber acquisition cost is what nobody ever talks about. That's the marketing cost.
When I started Entertainment Weekly magazine, we spent $45 each to acquire a subscriber, with all the marketing, all the junk mail we sent out, everything else we did. So what Perplexity is trying to do here is subscriber acquisition. They're hoping that if they get you hooked for a year, you'll keep paying. They're also putting in advertising, so they want an audience for that. Yeah.
So I think this is the beginnings of a consumer, B2C strategy. Interesting. You know, meanwhile, my recurring cost for another year is gonna be $200. Either that or I sign up on my personal account for the next year. But then I gotta, like, re-enter all my Spaces and set all those up, which isn't the end of the world.
So I don't know what I'm gonna do. But it's interesting to me nonetheless because I really enjoy using Perplexity. Like, I literally use it every single day for my work. And so, you know, worth knowing and worth checking out. Yep.
That's great that you got it. I'll be curious to hear if it catches you the way it has caught me, and how you use it and everything. I'm sure we'll talk about that in 2025. Cool stuff. Well, Len, thank you so much for sending that in.
Hopefully, that helps at least a couple of viewers and listeners. I'm sure it does. Puts it on your radar. Go get it. We've reached the end of this, our last live episode of AI Inside for the year 2024.
Jeff Jarvis, what a fun year. I'm so happy that we do this show together. What an amazing year. Yeah. Yeah.
It's been a lot of fun. I've learned a lot, with you and through you. So thank you for being... Oh, and same back, boss. Same back.
Right on. I appreciate that. And to the audience: we're grateful to you all for your contributions and for being here and for helping out. This is a new venture we're trying to do, and thank you.
Yeah. No. We're 100%... And you can buy the book. Yeah. Yeah.
Yeah. Yeah. Well, and of course, Jeff also has books: The Web We Weave, The Gutenberg Parenthesis, Magazine, all found at jeffjarvis.com. Yeah.
Well, I mean, I think, you know, we've said it many times: we're learning along, we're learning via this show, and, hopefully, I'm imagining a lot of you are learning along with us. You know, the people who are really steeped in AI, maybe they aren't the ones listening, because they're like, oh, well, they're not talking about this or that or whatever. But, you know, I like to think that we're talking to people who are kind of learning along with us about all this stuff.
Well, we had people from Google who watched This Week in Google even though they knew more about it than we did. Totally. It's about the reaction to the work they're doing. I think that matters too. So I bet we have some AI folks in here.
Yeah. Oh, yeah. I'm sure. And when we get things wrong, tell us.
Yeah. Please do. Contact@aiinside.show. Everything you need to know about this show can be found at our website. Go there.
aiinside.show is the place. You can subscribe to the podcast so that you don't miss the next 2 weeks of prerecorded episodes. They'll, you know, end up both on the site itself and in your RSS feed, so you won't miss them. So do that. Also, if you want to support on a deeper level, you can go to patreon.com/aiinsideshow.
We have some amazing patrons supporting us. Right now, 82 paid members, and more than that when you consider people who are following for free, and, you know, that's okay too. That gets you through the door, and then maybe we provide value for you through, you know, ad-free shows, Discord, hangouts that we do. If you want, you can give even more, and you get a t-shirt, an AI Inside t-shirt, for becoming an executive producer.
And we've got good news. We've got a new one, DrDew, Jeffrey Marraccini, WPVM 103.7 in Asheville, North Carolina, Paul Lang, Ryan Newell, and our newest executive producer, Dante St James. Amazing. Thank you, Dante. So many bosses.
We need them. So many bosses. I need 2 hands to count them on my fingers. So, when you go to that page, by the way, you have to go to the right to get to the executive producer tier. It doesn't show that right away.
So you have to scroll to the right to see that. True. There you go. It's called Mega AI. That's where it is.
Maybe I should call that tier executive producer, because that really is what we say on the show. It is. Then that's the change I'll make. But yes, you all enable us to do this show, and we just can't thank you enough. So thank you so much for being here all year with us, for helping us build and grow and be what we are, and here's looking to 2025 in the hopes that we can build and grow and be even more than we already are.
Thank you, Jeff. It's been a fun year, and thank you everybody for watching and listening. We'll see you next time on another episode of AI Inside. Take care, everybody.