Jason Howell and Jeff Jarvis dive into the ongoing debate around AI hardware devices like the Rabbit R1, licensing deals for AI training data, and a child learning study that challenges assumptions about machine intelligence.
Support AI Inside on Patreon: http://www.patreon.com/aiinsideshow
NEWS
Rabbit R1, a thing that should just be an app, actually is just an Android app
Amazon expands enterprise AI play with wider availability of its Q chatbot
Microsoft’s Phi-3 shows the surprising power of small, locally run AI language models
The Financial Times and OpenAI strike content licensing deal
OpenAI’s Sam Altman and Other Tech Leaders to Serve on AI Safety Board
TRANSCRIPT
This is AI Inside Episode 15, recorded Wednesday, May 1st, 2024. Should it just be an app? This episode of AI Inside is made possible by our wonderful patrons at patreon.com/AIInsideShow. If you like what you hear, head on over and support us directly, and thank you for making independent podcasting possible. Well, what's going on, everybody? Welcome to another episode of AI Inside, the show where we take a look at the AI that's inside everything. Sometimes I want to say the AI that's hiding inside everything, but that sounds kind of mischievous, and I don't know if it's necessarily always a matter of mischief, but I'm sure sometimes it is. Anyways, I am here joined as always by my co-host, Jeff Jarvis. How are you doing, Jeff?
Hey there, the secret AI, the AI you don't know about, which is actually true when I think about a lot of cases. It's in a lot of things you don't know about.
Well, yeah, that's exactly it. More and more, yeah, increasingly with every month in the year 2024, it seems to be everywhere, that's for sure.
Mail, translation, maps, search, advertising, tons of it, yep.
It's everywhere. Yep, it's in your smartphones, it's embedded in your smartphones. And we're going to talk about how some of these AI devices are actually built on Android, which isn't a huge surprise.
I'm super curious to hear your take on this because people are freaking out about this, and we'll get to it in a second, about the Rabbit R1 and the Android roots of it. Before we do, I do just want to say that AI Inside, it's an independent podcast. We rely on the support of our amazing fans and followers to keep the show running.
So you can do that. If you want to help us with this show and support the future of AI Inside, go to patreon.com/AIInsideShow. You get lots of cool things, ad-free episodes, of course, early access to select AI-related bonus content, AI Inside merchandise, which we just added to the Patreon. I don't even have that yet.
Jeff doesn't have that yet. Oh, no, I didn't. Oh, I can't wait.
The cool thing about the merchandise is the shirt says AI Inside. And so my thought is when you're walking around town, everybody wonders, do you have AI Inside of you? So that's the cool run-on effect of that. You start moving like a robot. Yeah, right, exactly.
So patreon.com/AIInsideShow. And we also do call out the name of our community of supporters each and every week, like Jon Baglivo. Jon, thank you so much for your support. We couldn't do this without you.
We really do appreciate you. Before we get into the news, well, you've had a big couple of days. But yesterday, you had an event. You were part of an event or holding an event that we actually talked about on the show a number of episodes back. Tell us a little bit about it.
So yesterday, finally, with Common Crawl Foundation, we held the event in New York called AI and the Right to Learn and the Open Internet. About 120 people came, amazing group of people, AI people, researchers, publishers, journalists, policy folks. Mike Masnick gave a phenomenal presentation about the risk to the open internet.
150 slides in five minutes, talking about as fast as I do. Gard Steiro, the editor of VG, part of the Schibsted chain, came to talk about what they're doing. Back in episode two of this very podcast we had Schibsted's CTO on and talked about the different attitude there.
And it really was a great presentation of how the Norwegians are doing things right and we're screwing up. And so there was no effort to form an organization or no agenda. The idea was to get together and talk through these issues about how there's dangers to free expression and fair use being presented by this fight over AI. There's issues on both sides. We wanted to hear both sides and talk that through. But it was a great event and I was really delighted to be there. So I moderated the whole day. That's why my voice is a bit bass today. Yeah, yeah.
A little crackly, a little lower than normal. It's good for podcasts, character.
But I do want to say thanks: it was held at Civic Hall in New York, which is a new place that Craig Newmark helped fund. And he gave a welcome to everybody. So it was a great event and I'm grateful. And today I gave a keynote fireside chat at Beeler.Tech's ad tech conference.
So it was quite a bit of a difference. But that was fascinating too, Jason, especially the conversation after mine, talking with the people there about the impact of AI on advertising, what's happening with advertising and programmatic, which is all AI, folks, how you get your ads in the instant that you open your browser, and what that's causing in terms of the quality of advertising and the quality of content. There's a lot of fascinating stuff going on around all of us. So that's been a busy two days.
Man, no kidding. Well, I have no counter as far as how busy I've been. I mean, I've been busy, but I haven't been doing events, man. You make me look bad. I love how active you are.
That's the next revenue stream for you is conventions, I think.
There we go. AI Inside, the show. AI is inside everything in this hall.
Pay $2,000 to get a ticket and you get a free t-shirt included. An AI Inside t-shirt, which you can now find on the Patreon.
Sorry. Yeah, totally worth it. Totally worth it, right? Cool stuff. Thank you for recounting that. And it's cool to kind of see the closure because I know you were just kind of entertaining the idea in episode one and here we are. And it was a full day event and it went fantastic. It was great. Yeah, good stuff.
All right. Well, we've got some really interesting things to talk about. And I wasn't certain that we were going to talk about the Rabbit R1 two weeks, two episodes in a row, but this is just kind of the way things work. And there is some really current news about the Rabbit R1 right now that I'm curious to get your take on. Because, for those who don't know, and I think a lot of people who listen to the show know this already, Jeff Jarvis also does another show called This Week in Google for the TWiT.tv network and talks a lot about Google and Android and everything Googly, as well as many other things. And this story really crosses both paths, because we've got the Rabbit R1 that we talked about last week.
It's the hardware device, an AI companion, a little orange thing with a scroll wheel and a screen. I think the reviews embargo lifted this week, so we saw a bunch of reviews. Not good. I mean, marginally better than the Humane AI Pin reaction.
Only marginally, really? Wow.
I mean, the AI pin really had a lot of negativity around the fact that it was so dang expensive and it required a monthly fee on top of all the other hardware issues that it had and software and the delays and lag and all that kind of stuff. This thing's a lot less expensive, right? It's $200.
You don't have a monthly fee tied in with this, but I think a lot of the reviews really reached a similar conclusion, half-baked, bad battery. And the question that seems to keep coming up around this stuff, and my hunch is that we're going to be here for a while before we move on from this question with devices like this, is does this need to be a hardware device? Could it actually be an app instead?
Why would I want to carry around or wear or whatever this thing to do this AI interaction when we're already so used to carrying around a smartphone to do a lot of these things? So that's the first question. Before I kind of get into some of the late-breaking news, how do you feel about that thought right there? Are they proving themselves to you as far as realizing that it's an early stage right now? These things don't go from zero to 60. We got to start somewhere, but what are your thoughts?
Well, it was so great last week to have somebody who'd had a hands-on, and that's the question we focused on, was does this need to be a device? Which is, I think, a legitimate question. However, I think we also said last week we'd like to see some experimentation with these things. And Apple and Google are going to be afraid of releasing something that's too rough.
And so I think it's worth playing with. So at the event this week, I ended up at dinner sitting next to Parag Agrawal, formerly CEO of Twitter. And I look at him, and I see Ray-Ban on the side of these glasses that are pretty thick. And I said, oh, are those the Meta glasses? And so we talked about that. Did you get to try them on? No, I didn't. I thought that'd be cheeky, because they're his glasses, with his prescription and everything.
I was tempted to ask.
I didn't. I did ask whether he talks to it, and he said, no, no, no, I don't talk to it. It's mainly for pictures of his kids. And audio. Those are the two things. He likes them for that. So there it is. There's one device that has a satisfied customer. It's not doing basically any AI.
Well, it is now. They've got multimodal.
Well, he's not. But he's not. It is. Yes, it is. That's right. So it'll be interesting to see other devices come down the pike. Rabbit talked about a wearable, a watch. I always forget the name of the damn pin. Humane AI Pin. Humane. It's a dumb name. Okay, anyway. It's a machine. It's not humane. Anyway. And the Rabbit is cute. And the physical UI is fun. People like it.
I mean, it was designed by Teenage Engineering, which has a lot of cachet in technology design now.
So on that first point, that was a long-winded answer to a simple question: I think it's worthwhile to see experiments. I wouldn't want to spend $1,000 for one, as I did with Google Glass. Fifteen hundred damn dollars. You learned your lesson. I'm still bitter. I'm still bitter. But I'm all for companies trying stuff. Having said that, now the issue is?
Well, so the question that I mentioned just a few minutes ago: does this need to be its own hardware? Can't it just be an app? And really, at the end of the day, I think in this early stage, that's exactly what it is. Mishaal Rahman, who's my co-host on Android Faithful, was sent the R1 Launcher APK, which is the app install file for Android. He got it from a source, got it running on a Pixel 6, which seemed on its face to prove that, okay, well, this device that really hinges on the power of AI really is an app. If people are asking this question, at the end of the day, that's exactly what this is. It's an app running on a bespoke piece of hardware.
And so, okay, then do I need this versus just having this app, let's say, on a phone? And actually, while we were doing the show yesterday, it was really interesting, because we were talking about the story, and in real time, Rabbit CEO Jesse Lyu responded to Mishaal during the show on X. It's weird, because in the response he says it's not an Android app. But yet he goes on later to say Rabbit OS and the large action model run in the cloud with very bespoke AOSP, the Android Open Source Project, which is the unbranded foundation of Android, and lower-level firmware modifications.
So really, at the end of the day, this is an Android device, and that's how it's doing it. The Verge called this, and it made me laugh, AI's Juicero moment, which I know that's a harsh one. For anyone who doesn't know what that means, Juicero was this juice making hardware that you had to buy specific made for the Juicero juice packs. And what people realized eventually is the Juicero is actually not really doing anything special.
It's just the juice packs are the juice packs, and the Juicero is just a fancy way of delivering it or serving it. Squeezing them, yeah. Squeezing them, essentially. Yeah, exactly.
So a story went up. I don't know if you saw this, Jason. A story went up half an hour ago on Android Authority, saying the Rabbit R1 has Android 13 under the hood, and saying that it is Android running the device. Is Rabbit disputing that at all, that it's running Android?
Well, that's a good question, because I hadn't seen this article until right before showtime. I wonder what their response is to this, if they've responded yet. I'm kind of scanning through to see. I mean, it lines up with the response that we got from the CEO of Rabbit yesterday. It's obvious that it is Android. And actually, here's another question that I have. What's the problem with it running Android?
I don't think there's any problem at all. Is it a problem? It doesn't seem like a problem at all. Though the question would be, at a philosophical level, why can't I run Rabbit on my phone? Because they're selling a device. I think what Android Authority is trying to say is, come on, folks, it's an app. It's an app. Yeah.
It's a launcher. It's essentially an Android launcher, is my understanding of it. Oh, I see. Okay. Right, which is an app, but it's an app that's running by default. I mean, the Pixel launcher is an app, essentially. I mean, it has a lot more to it than just that, but it is an app that runs by default on a Pixel device.
But Jesse Lyu said in that message you put up on the rundown from Twitter that you couldn't run that app because it wouldn't connect to Rabbit in the back end.
So I'm just confused. Right. Well, he's right, and he's not right. He's right in that you can't run it now. But when Mishaal first was given this app, he was able to run it, and he recorded video of it running and pulling data from the server.
So it didn't have access to everything, because obviously this implementation of it is tied into the specific hardware that's on the Rabbit R1 device, and so that might not be routed correctly on a smartphone if you're running the app. But he could query it, and he did get responses. And then after the show, or rather during the show, Mishaal discovered that it no longer was querying. It had been blocked. So they rolled out an update, and I don't know if the update was timed specifically with this or if it just happened to be at the same time.
I'll let people decide for themselves, because that update also had other fixes and addressed other things. But it seems like after that update, it was no longer possible. So anyway.
What do you think? If it could run on Android, should it run on Android? Is that an issue?
Well, I think the question that I feel like comes out of this for me is people are questioning whether a device like this needs to exist or should even be created if this can all be done on an app. And I think at the end of the day, do I believe that devices like this are going to continue to be developed and become things more than they are right now?
Yes. Especially when you've got Jony Ive and Sam Altman collaborating on some sort of AI hardware. These are all opportunities for, in many cases, these upstart AI hardware companies, companies that want to carve out a portion of this very popular, suddenly growing aspect of technology. They want a place in it. And one really great way to find your place and make your place is to release a piece of hardware that does this. Because if you just release this as an app, no one's going to care.
That's a really good point. They got tons of attention they would not have otherwise gotten. Absolutely. Now, if you were that $1,600 device that we showed last week, then that's just trying to rip people off. But the Rabbit is decently priced, and you got Perplexity for a year out of it too. So you've already found value, full value, in it.
Personally, I have found full value out of it. Even if I never got the Rabbit R1, which I'm still excited to get, I'm still interested to play around with it.
And my understanding is I'm going to get it early summer, hopefully sooner. But even if that were to have happened, I'd be perfectly happy with my purchase because I use Perplexity every single day now. The strategy worked. The Perplexity folks, it totally worked on me.
They got me hook, line, and sinker. And I find so much value out of that that once that expires, I will be renewing it. And the R1 is a nice bonus. I'll be curious to play around with it.
It was clever too, because rather than having to wait months for the R1, hey, I'm already getting some value out of this. It's okay. Even though it has nothing to do with the R1, actually. It's Perplexity.
Yeah. Well, and I do think that the R1, my understanding is part of its cloud data access, part of what it's built around, is that collaboration with Perplexity. At least that's my understanding of it. And that's another question that people had: well, it's not all happening on device. It's like, well, no, because of these large language models that we are relying on in many cases. This deal with Perplexity should tell you right there that it's not going to be all on device, because Perplexity exists in the cloud and it's a separate company. So of course it's going to be pinging the cloud for that stuff. So, yeah.
I'm happy we have something new to talk about. Phones just got so boring. Yes, totally. Devices got so boring. Right. Yeah. We'll see what happens at I/O. I wish Google would surprise us with a new device.
They don't surprise us with anything these days. No. But it'd be fun to have – we got two different pairs of glasses, two or three different pairs of glasses. Yeah. With this. And plus, this is good. Yeah.
I mean, the phone development kind of has gotten boring over time and it's nice to kind of see new things. So, it's very easy at this early stage to point at it and be like, well, it doesn't work or it doesn't deliver on its promises. And I think that's totally valid and I think that's important that we point that out because these companies are releasing a product that does cost money now based on promises that it might not be able to deliver at this point.
So, you got to review it in the state that it's in. I don't think that any of that invalidates a product like this in the long term. I think that there's so much development that's going to happen here. And at some point, we don't recognize right now what that device looks like or can do. But at some point, I feel like it validates itself.
And I don't know when that is, but we'll find out eventually. Yep. Yeah. Amazon first announced its Q chatbot last November.
You may recall that. Limited access for a small group of users. Now, Amazon is releasing Q more widely. It's still a horrible name, by the way, in my opinion.
Yeah. For AWS users, its power is learning and operating on a company's own data and workflows. So the employees, for example, can interact with this around business questions, logistics information, coding, all tied to their bucket of data on AWS. And there are some new features being announced and released here as well.
Coding assistance, app testing, security scanning, troubleshooting, also something called Q apps, which is for building generative AI apps by voice, which sounds very familiar. It seems like that's becoming kind of table stakes for these systems as well.
So, I'm trying to get my nomenclature straight here. By giving a trained model to AWS customers to then query their own data, does that fall under RAG, retrieval-augmented generation?
Oh, that's a good question. I see this separation where you use data to train the model, then it knows what it's doing, then you can fine tune that and change the model. That's one way that you adjust that. But then RAG is using the model to retrieve from a set of authoritative data that you give it and limit it to that.
I mean, within the confines of what you just explained, that sounds pretty spot on. So, that would seem to be what the Q use is then.
And it makes sense because we shouldn't rely on a raw model, no matter how much fine tuning it's had and no matter how many guardrails have supposedly been built, it cannot do facts, damn it.
But RAG seems to work fairly well. And I think we've got to get to a better spot in nomenclature so people know: am I talking to a raw model, or am I talking to something that is using the model against a known and authoritative source of information? Yeah. And which may be limited by that source then, but will be more credible overall.
Right. And more specific to the actual application or use case or intention, as far as it is there. And how much of that data set is actually populated by things that have nothing to do with the information that you feed into it? I wonder, it's not just a blank data set until you feed it stuff or is it?
Well, it has the training, but as we're talking, Jason, I'm trying to come up with the right analogy. The cloud is not the analogy, but I'll put it this way: the model should be a backend, and the RAG and chat is the front end, in terms of web talk, right? There's a database back there, you ask for something on the web, it makes a call to the database and gives it to you, but you don't really deal with the database. You're dealing with a front end to that. Sure.
Yep. So to me, RAG is the front end and models should be the backend, and we really shouldn't see models hardly at all. Unless we're asking it to create, that's okay. But if you're trying to interact and ask actual questions and get credible answers, you should not deal with a model. Models should be considered the backend, with a front-end application, and that application carries with it authoritative data. I'm just starting to see a different model of how we present this to end users. Yeah. It's interesting that Amazon is of course doing this only in that sense, right? Q is only available to its AWS users to in turn do what they want to with it. I think that's a sensible model. No, I think so too.
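To make that backend/front-end framing concrete, here is a minimal sketch of the RAG pattern being described. Everything in it is illustrative: the toy retriever and the generate() stub stand in for a real vector search and a real model call, and none of it is Amazon Q's actual API.

```python
# A toy sketch of retrieval-augmented generation (RAG): the model is the
# backend, and retrieval over authoritative company data is the front end.

def generate(prompt: str) -> str:
    # Stand-in for a real LLM call; a production system would send this
    # grounded prompt to a model API. Here we just echo it back.
    return f"[model answers from]:\n{prompt}"

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def answer(query: str, documents: list[str]) -> str:
    """Front end: ground the model's answer in retrieved company data."""
    context = "\n".join(retrieve(query, documents))
    prompt = (
        "Answer using ONLY the context below. If the answer isn't there, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return generate(prompt)

docs = [
    "Q2 logistics report: warehouse throughput rose 12% quarter over quarter.",
    "HR policy: remote employees must file expenses within 30 days.",
]
print(answer("How did warehouse throughput change in Q2?", docs))
```

The retrieval step is what limits the model to a known source, which is the trade being described here: bounded by that source, but more credible for it.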
Yeah. I think so too, given that Amazon is putting a firewall around the information that's loaded into it and not using it for anything else, so that the enterprise users can trust that they can throw their valuable wealth of information into it to gain what they can from it for the employees and everything. I think it makes a lot of sense. Yeah. And actually this news, I thought, ties in nicely with an article you threw in from Ben Thompson, who I just, I love his writing. He's so damn smart. If you have a few hours. Yeah.
It's always the case with Ben. But this post is called Meta and Reasonable Doubt. Yes, it's Meta. We were just talking about Amazon, but it touches on a few things that are very similar between these two stories. Amazon, Microsoft, Google: all three companies have their cloud services and their enterprise business wings to attach their AI innovations to, immediate access to a wide range of customers who could benefit because they're already paying for AWS or whatever the case may be.
All of them really painting a rosy picture to investors right now about how it's going because of that direct correlation. Then as Ben writes, you got meta on the other hand, asking its investors for patience again. It already did this a handful of years ago with its switch to the metaverse and saying, you know what, just give us some runway. We need some time to really get to the point to where we start realizing the benefit of this stuff. And now it's doing it again, increasing expenditures on artificial intelligence infrastructure so that they can see some of those longer term potential gains. You got Google, Microsoft having clearer short term opportunities for revenue, things like cloud and software and all that kind of stuff.
Meta just basically saying, hey, this is going to pay off when the metaverse becomes what we think it's going to become and these things integrate together. Just give us a chance. Give us some time.
We'll get there eventually, but for right now, it's going to hurt. And the investor response to the earnings call, my understanding is that investors were a little skittish. The stock went down, all that kind of stuff, although I don't really follow that stuff very closely.
I try to. Meta is up 3.5% today, but if we look at it over the last five days, it's back to where it was. Okay, so it took a hit for a second.
Everybody immediately reacts and then it goes right back. It's down a bit.
So at the high on April 5, it was 527. Now it's 444. So yeah, that's down. Sure. Or put it another way: it took a drop on April 24th from 493 to 444.
So that's a considerable drop. But Meta is also, I think, taking a different route than the others, because it's doing open source, number one. I think that could be winning in the long run. And it's doing smaller models. It doesn't agree with all the boosters and all the AI boys in other ways. So I'm actually kind of impressed with where Meta is heading with their AI.
I don't know anything, but I think that they're still worth paying attention to here. You would think that Meta, as a social media company, would be an also-ran in this, and they're not.
They just don't have the immediate ways of benefiting from this right now, as far as earning a lot of revenue at the current stage of AI development, the way that Microsoft and Google and Amazon do. They've already got all this other stuff that they can attach it to, and they can immediately see some sort of revenue gain, some sort of return on that investment. And Meta is really in a position where they have to say, all right, it's going to come, that ROI is going to come. And that also hinges, at least according to what Zuckerberg is spelling out, on the success of whatever the metaverse is. And in my mind, does the metaverse become what Zuckerberg and Meta want it to be, what metaverse fanboys or whatever you want to call them want it to be?
I'm not certain. But I do know that when I think of AI and what's possible and what we've seen, and then I think of immersive experiences like VR, like the metaverse, and I think about the possibility of bringing those together into a kind of unified experience, I think there's a lot of potential there. And so I can see the long-term vision. I can see how those things come together. It's just, will it? Who knows?
Yeah. Yeah. And I hope that it's not just a few big companies controlling AI. And that's why I'll also salute Meta for putting stuff out open source. I think that's so important.
For sure. A lot of the discussion at my event yesterday with Common Crawl was, if you really want to control things, then you're going to limit this to like three companies that you can control and you keep an eye on. But do we really want that? Do we really want to create another oligopoly of technology here? I don't think we do. Right. No, agreed.
Totally agree. And then speaking of Microsoft, the company announced a free and lightweight AI language model called Phi-3 Mini. This is really meant for consumer devices. So think smartphones, think laptops.
It wouldn't need the internet to operate. Microsoft actually claims that its performance, quote, rivals that of models such as Mixtral 8x7B and GPT-3.5, which powers the free version of ChatGPT, using just 3.8 billion parameters, which is in very stark contrast to some of the largest LLMs right now. You've got PaLM 2 by Google at 340 billion parameters. GPT-4 is rumored to have around eight interconnected 220-billion-parameter models, which is just mind-boggling right there.
This is like the absolute opposite of that. And you might ask yourself, well, how can you fit so much in so little? And this is definitely a topic that we've talked about in episodes past. Maybe it's not quantity, maybe it's quality. And that's what Microsoft is saying here.
They've basically curated the data set from very high-quality data pulled from textbooks, to jam-pack it with core essential data that encapsulates the power of, well, they're saying GPT-3.5. So yeah. Interesting. This goes back to our Rabbit discussion.
Is it going to matter that the Apple phone you buy, or the Samsung phone you buy, or the Google phone you buy comes with a model? Or, because of the internet, does it not matter what model is on it, and you're just going to communicate with whatever model you have? You said Microsoft says this doesn't need to be connected to the internet; it can do things locally. And Google is certainly going to be trying to do that, and is doing that, with the Tensor chips on its phones. So I don't understand right now how device-dependent LLMs and models will be. Yeah. Hardware.
I haven't got my head around that yet. Yeah. Yeah. No, I hear you on that one. This is my week for saying, I don't know. Hey, that's okay. Like we've said so many times on this show, this show is an opportunity for us to learn along with y'all because we really are. That's why we talk about these things. That's why we voice them out in the open.
I'm sure some people who listen might know to a deeper extent, some of the answers here, but this is how we learn too, is we throw those words out in the open and we figure it out in real time. So yeah. Interesting nonetheless.
And yeah, Phi-3, that's P-H-I 3, by the way, available now on Azure, with partnerships with Hugging Face and Ollama as well. So there you go.
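If you want to poke at one of these small models yourself, here is a minimal sketch of what that looks like through Ollama's local HTTP API, assuming you have Ollama installed and have pulled a Phi-3 model; the model name and prompt here are illustrative, not anything from the episode.

```python
# A minimal sketch of querying a small model running locally via Ollama.
# Assumes Ollama is installed and "ollama pull phi3" has been run; the
# endpoint below is Ollama's default local port.
import json
import urllib.request

def ask_local_model(prompt: str, model: str = "phi3") -> str:
    """Send a prompt to the local Ollama server and return the full reply."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # wait for the complete response instead of streaming
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local_model("In one sentence, why do small language models matter?"))
```

The point of the exercise: once the model is on disk, no internet connection is needed, which is exactly the on-device promise Microsoft is making for Phi-3.

We do have more and you don't have to wait very long for it. Just give us a second.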
All right. The Financial Times signed a licensing deal with OpenAI for access to the publication's content for ChatGPT queries. That's summaries, quotes, links to Financial Times articles, all generated based on user prompts, with source attribution, all the good things that you want to see out of something like this, in my opinion. Financial Times CEO John Ridding asserts continued commitment to human journalism, and also the importance of reliable sources for AI platforms. So yeah, that's good.
So at the same time, Google reportedly is paying News Corp, Murdoch's company, $6 million for news AI content. And so these are a continuation of these licensing deals. I don't think they're real licensing deals. I think they're don't-sue-me deals. Not unlike what happens in the news industry.
I think they're saying, okay, here's a bucket of money. Now let's be friends and just shush. And when your lobbyists go to Washington, just don't mention us. Yeah.
We're your good guys, right? That's one way to look at it. Another way to look at it, which I saw in one story that I don't think was very clear, so I didn't put it on the rundown, but it raised the idea that this is going to screw small publishers, because just the big guys will do these deals.
The model makers are, they hope, getting rid of the lobbyists for the industry, because it's the big guys who pay the lobbyists, and the little guys won't get paid for their content. And the truth is that none of their content is absolutely necessary to the model makers, because it's fungible. You can find somebody else's news content to teach it business.
If you don't use the Financial Times, you can use any of America's business journals. Is it the same? No, but can you conceivably train the model in those topics with other sources? Yeah.
You could also translate Handelsblatt from Germany. In terms of using that, there's a lot of things you could do. My point is, it's not going to come to the point that every news organization has their content licensed. It's not going to happen. Just simply not going to happen.
A few big guys will use the clout they have to get these deals done. And I don't think it's going to help news overall. I think it's going to help a few bottom lines. And I also think that OpenAI may not be that smart in doing these deals, because it's setting a precedent that's going to make things difficult for them.
In order for them to continue doing what they want to do, they'll have to make these deals continually. Every country is going to say, hey, you paid them.
Yeah, that's a really good point. Which leads to the next story in the rundown: eight major newspapers, all owned by the one hedge fund that has ruined newspapers across the country, the worst hedge fund in news, Alden Global Capital, sued OpenAI, as the New York Times has sued OpenAI. So now these are two separate suits going against them. So that becomes the choice.
Either you negotiate, you get a bag of money, you say, Oh, thanks. That's smart for me. I'll go home.
Or you say, I'm going to sue you. The truth is that Alden's newspapers, some of them were great. I've worked for them. I worked for the Chicago Tribune.
I worked for the New York Daily News. They're crap now. They've been cut to the marrow by Alden. Their stuff is not worthwhile. But, and I heard this at the ad conference where I was today, the industry has split apart. Barry Diller tried to put together a consortium that would negotiate together and sue together, one or the other. But then the New York Times broke off and said, no, we're going our own way, because we're the New York effing Times.
And then once they did that, Alden said, well, we're a greedy hedge fund. We'll do the same thing. And so now we have this ridiculous kind of fight going on about AI and content, which was the topic of my event yesterday with Common Crawl. And I think this is going to be problems. And I don't think it's going to be good for the news industry, ironically. A few people get some money. I don't think that's much.
Well, and what's interesting when I hear you say that is, there's that, like, not good for the news industry. And at the same time, we want these AI systems to be filled with good data, good knowledge, a good feed of data from all these sources. It's so at odds with itself in so many different ways.

Well, this goes back to the discussion we just had about training versus RAG, if we accept that as a framework, right? So what do you need to train the model so that it understands how a bill gets passed? Of course, again, it doesn't understand anything, I know. But so it has the associations to put together something credible around a topic like politics. But then to have legitimate data, it's got to have an API to that data. This is why I keep saying to the news industry, nobody listens to me, nothing unusual there, that the news industry should come together to create an API for news.
And they should make it easy for their data to be in a RAG and be called upon by models, but with conditions. Here's the key: here's the money we want, or here's the credit we want, here's the link we want, here's the branding you should do. Those are things that can happen in a negotiation that's far better than suing them and saying, just pay us a bucket of money, or, like the FT, getting paid a bucket of money. That's not helping news as a whole, and it's not helping the AI industry. That's a really good point, Jason. That doesn't help the credibility of AI in general.
That's what I hoped we would start to accomplish, and I think we might: bringing these sides together, sitting them in the room together, saying, this is hurting both of you, and it could help both of you if you do it right.
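No such API exists today, so the following is purely a hypothetical sketch of what that proposed "API for news" might look like: licensed content served for retrieval, with the conditions (credit, links, branding) attached to each record so the model front end must honor them. All names here are invented for illustration.

```python
# Purely hypothetical: a sketch of an industry "API for news" that serves
# articles for RAG use along with the attribution conditions attached.
from dataclasses import dataclass, field

@dataclass
class NewsRecord:
    outlet: str
    headline: str
    body: str
    url: str
    conditions: dict = field(default_factory=dict)  # credit / link / branding terms

def fetch_for_rag(query: str) -> list[NewsRecord]:
    """Hypothetical endpoint: return licensed articles a RAG front end may
    quote, each carrying the terms the caller agrees to honor."""
    return [
        NewsRecord(
            outlet="Example Gazette",  # invented publisher
            headline="City council passes budget",
            body="The council voted 7-2 on Tuesday to approve...",
            url="https://example.com/budget",
            conditions={"credit": "Example Gazette", "link_required": True},
        )
    ]

for record in fetch_for_rag("city budget vote"):
    print(f"{record.headline} (credit: {record.conditions['credit']}, "
          f"link required: {record.conditions['link_required']})")
```

The design point is that payment, credit, and links become machine-readable terms negotiated up front, rather than the outcome of a lawsuit or a one-off bag of money.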
Yeah, interesting. I realize, as we talk about these things in general, on one hand I'm like, oh, right, great, deals. People who feel like their data is valuable and want to be reimbursed for it, or want to be acknowledged in the process, great, they're getting what they need. But then at the same time, there is the quality of the data that's fed into the AI systems. I think it's a really good point that you make, as far as the biggies really getting the benefit of this and the smaller players getting edged out, because I think that's a real big threat in all of this and the way it's going down right now, and I hadn't really considered that.
I'll put it another way. It doesn't set up an infrastructure for any news organization to work, and the same thing has happened basically in advertising, right? The big guys get money, and the little guys, podcasting, get left out. Yeah. So let's stand up for the little guys. Yeah. Independence. Sounds good to me.
Speaking of OpenAI, they announced that ChatGPT Plus users will now gain access to a new memory feature. So this is something that you can activate if you are a ChatGPT Plus subscriber. You have to actually turn it on in order for it to retain any information about you, and then if you want it to remember details, you have to specify, you have to tell the bot what you want it to remember after that option is enabled in settings. In doing so, you can then set facts, important things about your life or how you work, all those kinds of things, into the ChatGPT memory. And then you can actually go back there, you can see a whole history of your memories, or its memories, in air quotes, and you can edit, you can remove anything from the list. You're essentially training the chatbot to be more knowledgeable about you, the user, for future queries. And OpenAI gave one example: say you've explained that you prefer meeting notes to have headlines, bullets, and action items summarized at the bottom. ChatGPT remembers this and recaps meetings this way. And I like that a lot, because I feel like often in my interactions, I'm starting from square one, or copying and pasting some sort of verbiage from another doc to try and inform it about the ways that I want data presented to me or organized. With a feature like this, I wouldn't have to explain that every time. So that's really useful. I like that.
I wish this thought were mine, but I saw it somewhere else, and I can't credit it because I lost it. But as I thought about this, this is the necessary step toward agents. Yeah, for sure. Right. So I wonder whether they held this back on purpose, because it's not hard to imagine doing this. It's just a database that remembers certain things about you so that it can call on that. Easy for me to say, I don't build this stuff.
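As a rough illustration of that "it's just a database" intuition, here's a toy sketch. This is not OpenAI's implementation, just the general shape: store facts the user explicitly asks to have remembered, let the user list and delete them, and prepend them to each prompt.

```python
# A toy sketch of a chatbot memory feature: a small store of user-approved
# facts that gets prepended to every prompt. Not OpenAI's implementation.

class MemoryStore:
    def __init__(self) -> None:
        self.facts: list[str] = []

    def remember(self, fact: str) -> None:
        """Add a fact the user explicitly asked the bot to remember."""
        self.facts.append(fact)

    def forget(self, index: int) -> None:
        """Let the user edit the list, as the feature described allows."""
        del self.facts[index]

    def build_prompt(self, user_message: str) -> str:
        """Prepend remembered facts so the model can use them in any session."""
        if not self.facts:
            return user_message
        memory_block = "\n".join(f"- {fact}" for fact in self.facts)
        return f"Known about this user:\n{memory_block}\n\nUser: {user_message}"

memory = MemoryStore()
memory.remember("Prefers meeting notes with headlines, bullets, and action items at the bottom.")
print(memory.build_prompt("Recap today's meeting."))
```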
It's not that hard, and I can't imagine this was a huge technological task. And I wonder whether there was a purposeful decision not to do this in the beginning, so that you wouldn't start to build something ongoing. And you look at the infamous Kevin Roose falling in love with, or being seduced by, or seducing Sydney. I think it was Sydney, the original Bing chatbot persona, right? And that came to a great extent because he had a very long exchange. And I think it complicated the guardrails efforts, because it got too many layers and too many levels. So I wonder whether OpenAI could have done this in the beginning and chose not to, or whether, no, this actually has some subtlety and nuance that I can't see, and it took a lot of technological work to get here.
In any case, if you want it to always do a task to your taste, this is good. So I talked, at a session yesterday, with Kevin Delaney, who was an editor at Quartz. He's now at a new startup, whose name I was forgetting, come on, come on LinkedIn, at Charter, which covers the future of work.
He said, the event was on the record, so I can say anything, that one primary use he puts models to is to take a story that he wants to promote, and he tells it to create a LinkedIn post, because he finds it just stultifyingly boring to create that promotion. But every single time he does it, he has to say, oh, calm it down a little bit. Don't be so enthusiastic.
And don't have so many exclamation marks. That's right. And now he can say, hi, it's Kevin. This is how I like my LinkedIn. Here's another story. Do it. Right. And that, in that sense, is an agent. Yeah. Yeah, for sure.
Because it knows what you want. Or: find me an itinerary for blank.
If it has access to current information, write me a grant proposal, and it can remember things. So I think that's really an important, if small, next step.
Yeah. Or, this is the tone in which I normally write, maybe saving the tone or whatever. What I wonder as we talk through this: what is the difference between this and me just listing out these rules at the beginning of every query? Is that basically the same? And if that's the same, then you're probably right, this is probably not a very difficult thing to implement. You know what I mean?
I don't think so. Unless it's remembering, unless you go through a session where you go back and forth and back and forth, and then you say, oh, I'm happy now. Yeah. Oh. And if it remembers that, it's like, oh, I figured out what made him, Jason, happy. Yeah.
Well, define happy, define what change it was that worked. If it's simply one instruction that you could cut and paste every time, then that's easy. I don't know. I mean, I don't have a paid account with OpenAI. I think it'll take a little effort to see how well it remembers you. Yeah.
Right. And being limited to just remembering you when you say, remember this about me, kind of makes me think that it probably doesn't work quite in the way that you're describing. But the potential is probably there for it to work like that, the more you work with it. And I'm sure at some point, when you're talking about agents, I'll almost guarantee you, this is the direction they go at a certain point: it learns all of your preferences over time. It learns that when you're writing this particular kind of thing, this is how you like things presented, versus when you're doing this other task, this is how you like it. And it picks up on that, versus you having to say, all right, put this in your memory bank, I like this, or whatever.
Do you have different personas? I'm doing this for the purposes of work, I'm doing this for the purposes of family. It's interesting. Now they also get into privacy issues, because if it forgot you every time, it's gone. Now this stuff can be subpoenaed. If you are in there getting better and better and better at making bombs with the help of a large language model, then that's something that could be discoverable. And it's interesting to see where that'll go. Yeah. 100%.
A new federal advisory board is in the works as the Biden administration works to build up a regulatory framework around AI in the U.S. This is related to the AI executive order issued last fall. And this new advisory board has a lot of names you'll recognize: Sam Altman, Satya Nadella, Sundar Pichai, Jensen Huang, just to name a few, all joining. It's really aimed, my understanding anyway, at the responsible integration of AI technology into critical infrastructure. So things like water facilities, transportation systems, banks, that sort of stuff. And the first meeting is going to take place in May.
So yesterday I had a conversation, this was a private conversation, so I won't name names, about the story we did a while ago, I think in March, when you found out that NIST appointed a TESCREAL doomer to head up a new AI safety body. And the person I talked to was appalled at that, and people were appalled at what happened. So then this group came along and I thought, oh no, here we go again, the Department of Homeland Security doing this.
And all the reporting was: Sam Altman's on it. Oh boy. Celebrity. But my friend said, no, this one's different. The problem is the media coverage of this one.
Every story said Sam Altman's on a committee, Sam Altman's there, and left out that he's joined by a bunch of other people. Well, the bunch of other people are people who actually know what they're talking about, and aren't just politicians or AI people, but researchers who know their stuff. So this group gives me a little more hope, only on the basis of that conversation yesterday. Yeah. Good. Good. Yeah.
I mean, are Jensen Huang and Sam Altman really going to show up at every meeting? It's your turn to take minutes, Jensen. Yeah. Right. Okay. I'll use my AI to do that, you know? So yeah, we'll see what it actually does.
Exactly. Yeah. With anything like this, that's the real big question. And then finally, you put a New York Times article into the rundown, which was very interesting to read through. It talks about Dr. Brenden Lake, a psychologist at New York University, who's been recording his 20-month-old daughter's perspective on life and learning for the past 11 months. For about one hour per week, he puts a helmet on his daughter's head with a GoPro on top, following every move that she makes from her perspective, tracking how she learns, how she interacts, and ultimately getting a sense of how a child's brain connects the dots between things that happen in their lives to grow their learning, their foundational knowledge of the world that surrounds them. And by the way, it's not just with his daughter; he's working with 25 children across the country in this study. The idea is how that can inform and train a language model. He's actually building a language model called the LunaBot that uses the same data the infant received, which, I guess, could influence how some of these large language models and other AI systems learn, and maybe better understand the world in the way that humans do, I suppose.
So, I love this story, because when it came out, I think I might've mentioned it on the show at one point, the reaction was, oh, those crazy technology people, doing this to their kids. It's an hour a week. It's not very much. I love, by the way, the name Luna for a child. I think that's great. Yeah. And the part of it in there was when she was pointing to, now I've lost it, she's pointing to a bunch of blueberries and she calls them Babuga.

Oh, right. She was pointing with a rounded finger. Dr. Kwon gave her the rest of the blueberries, which were in a bowl, and Dr. Lake looked at the empty bowl, amused, and said, that's like $10. But how she came up with that, and how it will turn into the word blueberry, what does that? It's really interesting. I'm going to come back to Yann LeCun, who says that a four-year-old child has seen 50 times more information than the biggest LLM. They have. Wow.
That's crazy to think about that. Right.
So he's been using this as an important metaphor, I think, one that also resets our idea of AGI and intelligence, and even the sentience stuff, and all that's BS. But just to ask: how intelligent is the machine? The fact that, again, a four-year-old has more training data than these things, which have huge amounts of data, we think that's amazing, but it's to respect what all of our senses bring and how we learn. And when I was talking to that same person about NIST and all that yesterday, we agreed that this idea of setting our own intelligence as the guide and goal for the machines is hubris.
It's kind of ridiculous. The machine should do what the machine does and we do what we do and they're different things. And that's okay. Nonetheless, there are lessons to be learned here.
And I love the kind of softer humility. This isn't like Sam Altman saying AGI, the machine, is going to be smarter than all humanity. No, the machine might be lucky to be almost as smart as a four-year-old. Yeah, right. Right. That's a much better way to look at it. I think it's a much saner way to look at it.
Yeah. And when we're thinking about the advancements of multimodal artificial intelligence and everything, even then, right? We're talking about one, maybe two senses, potentially, as far as what the AI is pulling from, versus a baby that has all of these: the sense of taste, the sense of touch, the signals those send, and the understanding about the world that surrounds an infant because of those things. And that's just knowledge that a machine, at least at this stage, is incapable of understanding.
Absolutely. It doesn't understand all that context. I don't know if you want to play it or not. I put the clip of Yann LeCun in the rundown right underneath the story, if that doesn't complicate your life with having to turn on audio and God knows what.
No, it's, it's, this is the Instagram clip. Is that right? Yeah. Yep. That's it.
And it's about 20 megabytes per second going through the optic nerve, for 16,000 waking hours in the first four years of life, and 3,600 seconds per hour. You do the calculation and that's 10 to the 15 bytes. So what that tells you is that a four-year-old child has seen 50 times more information than the biggest LLMs that we have. And the four-year-old child is way smarter than the biggest LLMs that we have. The amount of knowledge it's accumulated is apparently smaller because it's in a different form.
But in fact, a four-year-old child has learned an enormous amount of knowledge about how the world works. And we can't do this with LLMs today. And so we're missing some essential science and new architectures to take advantage of sensory input, which future AI systems would be capable of taking advantage of. And this will require a few scientific and technological breakthroughs, which may happen in the next year, three years, five years, ten years. We don't know. It's hard. But let me, I want to make sure I understand now.
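Working through the numbers in LeCun's clip: 20 megabytes per second through the optic nerve, 16,000 waking hours, 3,600 seconds per hour. The LLM training-set figure used in the comparison below is an assumption on our part, roughly the 2 x 10^13 bytes LeCun has cited elsewhere, not something stated in the clip.

```python
# Checking the arithmetic from the clip above.
bytes_per_second = 20e6        # ~20 MB/s through the optic nerve
waking_hours = 16_000          # roughly the first four years of life
seconds_per_hour = 3_600

total_bytes = bytes_per_second * waking_hours * seconds_per_hour
print(f"visual input: {total_bytes:.2e} bytes")  # ~1.15e15, on the order of 10**15

# Assumed figure for the biggest LLMs' training text (not from the clip):
llm_training_bytes = 2e13
print(f"ratio: {total_bytes / llm_training_bytes:.0f}x")  # ~58x, i.e. "about 50 times"
```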
So what's great about that too is that one of the things an LLM can't do is know that a hand can't go through a wall. Right. It doesn't understand turning around, doesn't understand things that a four-year-old child really does understand. Kind of the tactile quality of the world. And context. Yeah.
And context.
Right. This is a hand, that's a wall, it won't work. Right. The LLM doesn't know what a hand is. It doesn't know what a wall is. It only knows pixels and their representations. And this is why I like Yann LeCun. I think he's just a sane leader in the field, as opposed to some others who are nutty. It's just a different way to grok this and understand this, and the challenge and the opportunity.
Yeah. And I think what that does for me is give me a better personal understanding of what you've said many times around AGI, and it being not quite as obvious or achievable as some of the stans would really, truly believe. This is context that I think I was missing, and I'm getting a better understanding of it for myself. Like, AGI from a knowledge-based perspective, purely just, yes, it can think like a human, in air quotes, or whatever.
Okay. That's one thing. But if what we're talking about is a machine or a system that can truly replicate what it is to be human, these machines right now are missing out on a whole dimension, multiple dimensions, of what it's like to be human. Knowledge is one thing, but there's this experiential perspective, the tactile perspective, all of these senses and everything. And to that end, I don't know if we're going to see AGI in my lifetime, because that's a lot of development that needs to happen, and then those connections need to be drawn together.
Again, I think that's the wrong goal. And I'm not saying that having a machine with the intelligence of a four-year-old is the right goal either. The point is, I want a machine that does what it does well, right. Right. That's all.

Right. If this machine is going to help write a news story, it better understand what a fact is, and that's a hard enough challenge right there. If it's going to be a robot that's going to make a pizza for me, that's hard too. And I want it to do it well, and, please, extra pepperoni. So I want it to do that well.
I'm not saying that it's a single-use machine. Maybe the pizza robot could also be a hamburger robot, could also be a milkshake robot at McDonald's. So I think that the goal that it can do everything we do is the hubris, I use that word a lot these days, of the AI person saying, I made something more powerful than we are.
I think it's the wrong goal. You be a good computer, you be a good machine. We'll be good humans. Okay. Jason's law. There we go. You do what you do and we do what we do too. All right, Jeff, thanks so much. It's always fun. I learn so much every single time we do this show. What do you want to plug, dude?
Gutenbergparenthesis.com, gutenbergparenthesis.com. And also, if you're at all interested in copyright and AI and news, if you go to jeffjarvis.medium.com, a few posts down you will see my, oh no, where is it here?
The Times is broken. Newspapers can be jerks. This is not the way to save the news. AI and reflection. Oh hell, I don't know what's up there anymore.
Well, anyway, if you go to that first one, Newspapers can be jerks, there is a link to my 40-page paper on the California Journalism Preservation Act right there at the top. Yeah.
And if you're interested in copyright and AI and news and all of these fights around that, there's some fun history in there about the history of newspapers and copyright and competition.
Excellent. So look for that: Newspapers can be jerks. There were a couple of different places for you to go to, but you can find Jeff Jarvis on Medium and get a link to the California Journalism Preservation Act paper that Jeff was just talking about. Thank you, Jeff. Such a fun opportunity to hang out with you. I'm so happy we do this.
Same here. AI Inside records live every Wednesday, usually at 11 AM Pacific, 2 PM Eastern. You can see on the Techsploder YouTube channel a little spot here that shows, well, this says it's upcoming, but that's just because I didn't refresh. If I refresh now, see, there we go. And now we've got the live stream on the web. Exactly.
Here it is. It's total inception right now. You're watching the live stream, if you're watching the video anyway. We stream it live to the Techsploder YouTube channel every Wednesday, 11 AM Pacific, 2 PM Eastern, and the show publishes to the podcast feed later that day. You can find the ways to subscribe to the audio podcast by going to AIinside.show, all the ways to subscribe, many of them anyway, including episodes listed on the page.
That's aiinside.show. And then finally, if you would like to support us directly on Patreon, we would really love that. We offer ad-free shows, early access to videos, a Discord community, regular hangouts with me and Jeff and the rest of the community. And if you are at a certain level, a certain tier, then you become effectively one of our executive producers. This week's episode's executive producers are Dr. Do and Jeffrey Marachini.
We could not do this without you and the rest of the fantastic folks in our Patreon at patreon.com/AIInsideShow. Dr. Do and Jeffrey, thank you. Thank you. And thank you to everyone for watching and listening and just supporting us in whatever way you do. We can't thank you enough for that. We'll see you next time on AI Inside. Bye, everybody.