Jason Howell and Jeff Jarvis dive into OpenAI's "Strawberry" project for autonomous web research, discuss Google's selective indexing practices, explore Disney's partnership with AudioShake to deconstruct its music catalog using AI, and more!
🔔 Please support our work on Patreon: http://www.patreon.com/aiinsideshow
NEWS
Exclusive: OpenAI working on new reasoning technology under code name ‘Strawberry’
OpenAI teams with Arianna Huffington to create AI-powered ‘health coach’
In Constant Battle With Insurers, Doctors Reach for a Cudgel: A.I.
Perplexity planning revenue sharing program with web publishers next month
As the web is filled with AI junk, Google switches to selective indexing
Mashable, PC Mag, and Lifehacker win unprecedented AI protections in new union contract
Disney Music Group Teams With AudioShake to Separate Stems of Classic Songs Using AI
[00:00:00] This is AI Inside, episode 26, recorded Wednesday, July 17th, 2024: OpenAI's Strawberry Agent. This episode of AI Inside is made possible by our wonderful patrons at patreon.com/aiinsideshow.
[00:00:17] If you like what you hear, head on over and support us directly and thank you for making independent podcasting possible. Hello everybody and welcome to yet another episode of AI Inside, the show where we take a look at the AI that is, what did I say last week?
[00:00:38] That's sprinkled inside everything. Trying to replace that "hidden" word, because that just puts it into a negative space. AI is sprinkled into so many different places in the world of technology, and that's what we like to shine a light on with this show.
[00:00:52] I'm Jason Howell, one of the hosts but definitely not the only one because my co-host is back. Jeff Jarvis, good to see you sir. Here I am doing AI Inside like raisins in a scone.
[00:01:04] That's right. Oh it's just so delicious. It's like chocolate and peanut butter and all the things squashed together. I don't know. Yeah your analogy was better than mine. Good to see you sir. You've been busy.
[00:01:20] Sorry I was gone. Last week I was learning how to set type the old, very old-fashioned way, one little letter at a time. Yeah. And we're all about type and typography, so it was a venture into the past.
[00:01:33] So now here I am, back to the future. Back to the future of artificial intelligence. That's cool. So what was it? Was it like a Linotype conference, or was it a workshop?
[00:01:44] It was a sad group. There's a group called the Rare Book School that runs out of the University of Virginia, but also Princeton and the Grolier Club and lots of other places around the country.
[00:01:59] And they run a couple dozen courses in all kinds of things, like bibliography in the 18th century and early printing of the Bible and stuff like that. So this was a course in 19th-to-20th-century
[00:02:11] typography, and that means that's when the Linotype came in, and that's what I'm writing about and researching. So it was the perfect time to look at the impact on typography and design
[00:02:20] and books and so on. So it was actually fun. I was in a class with people who are professors and librarians, and I had imposter syndrome, but it never stops me from joining in, of course.
[00:02:30] Good. That's good, because I've felt that, as I'm sure we've all felt that, and sometimes that can be a stunter. That sounds awesome, and it sounds completely Jeff Jarvis: you go to Italy and eat pizza, and I go and set type and get inky.
[00:02:47] That's exactly it. Still, that's awesome. Well, it's good to have you back, and it's good, I gotta say, to do a news-focused show. I've been missing it. We've, you know,
[00:02:58] taken our foot off the gas on the news cycle for a month, almost a month and a half, because we had a bunch of interviews, with me gone in Italy, and last week, and the interviews are great
[00:03:08] and everything. But I just, I learned and I learn a lot from the interviews, but I love knowing kind of the machinations of exactly this development of AI on a weekly basis. So I'm happy we're doing
[00:03:18] that this week. And I think we'll probably be doing that a lot more kind of getting back to the newsy approach as we play around with things with the show real quick before we do
[00:03:29] get into the news. Just a couple of quick housekeeping things. Of course, there is the Patreon, patreon.com/aiinsideshow, and we appreciate you if you go there and you help us
[00:03:43] out and support the show. That's the easiest way to keep us doing what we're doing: patreon.com/aiinsideshow. You could be like Brian Morrison the Second, who is just
[00:03:55] as great as any other Brian Morrison out there. The first, the third doesn't matter. The second, you're awesome. It's good to have you on board supporting us directly from the inside.
[00:04:06] Oh, I like that. I might have to make that a thing: from the inside. The AI Inside insiders. There we go. An insiders club. And also, real quick, right, just to get it out of the way:
[00:04:21] aiinside.show/survey. We're still doing an audience survey, and you can find it very easily. Actually, if you just go to aiinside.show, you'll see it called out at the very top
[00:04:32] with a link to the survey. It's pretty quick, maybe it'll take you about five minutes, but especially for something new like this, we can't tell you how valuable this is. Yeah.
[00:04:41] Because it gives us a story to go get some sponsors, so we can get the money and keep the show going. Absolutely. And all you're doing is sharing who you are. It's all
[00:04:48] we need because we want to brag about you. Yeah, exactly. That's exactly it. And we're not, when we say who you are, it's not specific to exactly who you are, but it's whatever information
[00:04:59] you feel comfortable answering. It goes a long way in allowing us to continue to kind of grow the show and potentially monetize. And there's also options, the opportunity in here for you
[00:05:10] to tell us what you're liking about the show and maybe some changes or ideas that you have for us. It's all super valuable and helpful. So: aiinside.show/survey. We would really
[00:05:23] appreciate your support there. Let us know what you think. And with that, I think it's time, I think that it's time for us to talk a little bit about the news. Let's start off with actually
[00:05:37] what ended up being a pretty huge chunk of OpenAI stuff, which is hard, it's hard to avoid when you're talking about AI in the news right now. They're just making news left, right,
[00:05:47] and center. But let's start with Strawberry. This is a Reuters exclusive that hit a couple of days ago. I mean, it's an exclusive to Reuters, but it really
[00:06:01] seems like an extension of that thing we heard about months ago called Q*, which I thought even back then was a horrible name. Apparently it's pronounced "Q star." That's right.
[00:06:13] Q star. There you go. Yeah, I forgot about that. That was about as good a branding job as Twitter, X, or whatever. Yeah, Meta. I don't know if Strawberry is a better name necessarily, but I do like it at least a little better. Reuters posted this exclusive, basically
[00:06:35] highlighting this project happening inside OpenAI right now. The idea is that the Strawberry models would not only be capable of answering queries, but also performing "deep research," in quotes, by navigating the internet autonomously. So: answering science and math questions that have been
[00:06:54] out of reach of current models, having more human-like reasoning skills. And actually, friend of the show Mike Elgan wrote about this on his wonderful Machine Society Substack newsletter. That's a good explanation. Yeah, that really, at the end of the day, this is potentially a big
[00:07:12] step for OpenAI towards agentic AI, and maybe agentic AI in general: having the agency to think ahead and to look for what it needs as a result of what it's finding. Though the key to this,
[00:07:26] as I learned at one of the AI events I went to in San Francisco, is trust: you have to trust it to do something on your behalf,
[00:07:37] and without the knowledge of what it's doing. And I think that we're at the exact opposite spot still when it comes to AI. Oh, it wrote all this stuff, it doesn't know anything. Oh,
[00:07:45] I guess I've got to limit what it works with to RAG, a smaller database. I've got to understand what it's doing before I can trust what it does. So now to think that this thing is ready
[00:07:56] to go out and do something on our behalf and find things on our behalf or take actions on our behalf, which is what agents really do. I don't think we're there unless they're going to show something
[00:08:07] that's just amazing. That presumes that it has an understanding of right and wrong: oh, that's the wrong thing to find, that's the right thing to find against an assignment. That it has an understanding of meaning, an understanding of understanding. Right. And I don't
[00:08:24] so far, I don't know, even though I think, you know, generative AI can do a lot of neat things, I don't see the evidence that it's there yet. Yeah, yeah. Yeah, the right from wrong. That's
[00:08:34] definitely a huge hurdle that we haven't really seen AI, in this capacity anyways, being very competent at, you know, or capable of doing. And we'll talk a little bit more about
[00:08:49] kind of like the levels of AI in a second. But I did want to point out, before we move on, that this model, likely the one that OpenAI was testing internally anyways... I think there
[00:09:03] was a little bit of guesswork on the part of Reuters as far as this part, but it tested internally on the MATH dataset at more than 90%. And the MATH dataset is a pretty big benchmark when it comes
[00:09:19] to, let's say, championship math problems. Competitors to OpenAI have reached between 57.8% and 76.6% at their highest. And so it's assumed that this thing they're working on, Strawberry, is probably the thing that other sources have said, you know, they have a model internally that tested more
[00:09:39] than 90% on the MATH dataset. It's probably this, if we had to guess. And so, you know, that doesn't say anything about what you were talking about, Jeff, as far as knowing right from wrong and everything. But it does.
[00:09:53] And it wasn't long ago. I mean, I haven't tested it recently, but last I checked, it couldn't do basic arithmetic. It wasn't built to do that. Right. It's a word predictor. And that's a different skill set. If they found a way to combine
[00:10:05] those things, I don't think that's out of reach to imagine. Yeah. But generative AI, as a word-prediction machine, was crappy at this. So it'll be really interesting to
[00:10:17] see demonstrations of that. Of course, I won't know, since I did really badly at this kind of stuff on the SAT. I hated it. That's why I had to go into journalism. The SATs... I mean, they hardly even need the SATs anymore. That's where we're at,
[00:10:33] at least here in California. Someone was telling me that at schools here, the SAT is like a thing of the past. I don't know whether that's true or not. But anyways, speaking of OpenAI, part deux:
[00:10:47] The company, according to Bloomberg, believes that it is approaching the second of the five levels it has identified on its path to AGI. At a company meeting last week, the company told staff about its metrics here, and they created five different levels.
[00:11:09] There's the chatbots, and here I can throw it up on the screen for video viewers. Chatbots are level one: AI with conversational language, which we're seeing a lot of right now. Then there's reasoners, level two. This is human-
[00:11:27] level problem solving, where OpenAI actually currently believes that it's really on the cusp of this, if not there. Whether you agree or not, that's what OpenAI seems to think. Level three, agentic, or agents: these are systems that can take actions, and I would say
[00:11:46] this is probably where that Strawberry project would potentially sit, at least according to Mike Elgan and what he was writing. Level four, innovators: that's AI that can aid in invention, creating new things entirely. And then level five, organizations,
[00:12:05] and this is really AIs that can do the work of an entire organization. Don't know what this actually means about autonomy or autonomous systems. I suppose that's probably part of agents or I don't know, is it? Well, but it seems to be a mix
[00:12:20] of levels two reasoning and three agent. And so the strawberry seems to already mix the two. Yeah, yeah, that's true. Where it can do human level reasoning and again, let's see the demonstration and then can go off as an agent and do things on your behalf. That's autonomy.
[00:12:39] That's where you're trusting and doing it. And then to innovate is to I guess, this is aid in innovation and invention, but I guess, well, in a sense, you could do that now in the sense of getting brainstorming going. Yeah, totally. Yeah, what does that mean?
[00:12:50] I'm not sure what that means. Yeah, I think when I read that my assumption was that that meant beyond the brainstorming, it is well, no, I think you're right though. How does it know
[00:13:05] what it's creating now versus what it would create when it's at this level four thing? How would it know the difference between the two things? What would be the difference? The screenshot from the Bloomberg report we're looking at is under a post from
[00:13:21] Benedict Evans, who I quote often because I think he's a really good analyst, formerly of Andreessen Horowitz. And Ben says this is all very well as a thought experiment, but is there any prior reason why we know that these are the steps and the order?
[00:13:35] No. And I think as usual, Ben is quite right there. A, I don't subscribe to Bloomberg so I couldn't see the full reporting. So I'm going on this, but I need to see more definition of all these. Yeah. Well, and I realize I'm pulling from a
[00:13:55] comment that's on his Twitter thread, but I think it sounds pretty darn accurate. The response to the question that Ben Evans posed was because this is aligned to their road map so that it seems like their products are leveling up. It's like this is the confirmation
[00:14:12] that we have for us about our own systems and so we'll make it confirmation for everyone. And yeah, that doesn't matter. Or it's their hope. Or it's their hope. Yeah, exactly. It's kind of their North Star of sorts. Yeah. And we could be going,
[00:14:26] I think a few weeks ago we had stories with, we had Bill Gates, as I remember saying that we got a few more iterations where we're going now. And then the next progress is going to be something different. It's going to be some
[00:14:42] other view of some leap in a different way. And I think that's probably true. Gee, we've wowed the world with the talking machine, the literate machine. That's amazing. And there's a lot of things we can do with that, but we clearly know
[00:14:55] its limits, and I think its limits are probably insurmountable at that model. So is there reasoning? Is that an entirely different model? How would you teach that? You don't teach that with just a whole mess of words. What works for that? How do you teach math?
[00:15:13] How do you teach right and wrong? I don't know. It's all about teaching machines. It's all about learning. It's all about being able to do more than a programmer
[00:15:22] can explicitly tell it to do. But I don't know that we yet know where that is. So to look at the chart like this, I think makes a huge number of presumptions about where those
[00:15:34] leaps are going to be and if we reach them. Yeah, indeed. I would agree. This next bit of OpenAI news is interesting. I'm super curious to hear your thoughts on this. This is OpenAI and Thrive Global. Thrive Global is founded by Arianna Huffington, by the way.
[00:15:58] Announced Thrive AI Health. This is an AI-powered health coaching startup that's trained on, quote, "the best peer-reviewed science." Also trained on, you know, if you are a user of this, your biometric data, your lab results, your own personal medical data that you share
[00:16:16] with it. So essentially the idea is like feeding this agent, this AI system, all of the pertinent information about your health, and then being able to, I don't know, use the bot to
[00:16:32] converse and understand? You just use the bot? Well, Arianna has always been giving you advice about how you should sleep better and take care of yourself better and be
[00:16:41] mindful and so on and so forth. I say that with a mocking tone; I shouldn't. But from Arianna, it has a certain sense. Okay. And I suppose at that level it's okay, but I would not.
[00:16:55] Dr. Google gets replaced with Dr. Bot, and they're both limited, and dangerous even. Because, per our earlier discussion, they don't know right from wrong. They are just going to put in words that make sense at the moment. It's random enough that the next time you ask the
[00:17:10] question, they will answer it differently. There's no consistency. So if there were a health motivational coach that says, get off your lousy lazy ass and go out there and walk, okay, that's pretty easy to do. And Apple does that in your watch now. Yeah, current LLMs could
[00:17:27] do that very easily. Hey, motivate me to get off my couch. Go. Right. Or give me somebody new to ignore. Yeah. But beyond that, I don't know. You know, it's interesting, Jason, one of the
[00:17:40] stories I didn't put in the rundown, because I didn't think it fit in. But now that I think about it, it does: the New York Times said that doctors are using AI in their war with insurance
[00:17:53] companies. Oh, right. I saw this headline. I did not get a chance to read this, though. Tell me a little bit about it. Because the insurance companies are constantly turning them down for things. There was a man that was making an appeal for a prosthetic leg,
[00:18:05] and they turned him down. Why would you do that? Right. A stroke survivor who was re-hospitalized after a fall, and the insurer determined his care could have been done at home. The doctor found these stories increasingly common: lists of treatments needing
[00:18:21] preapproval. So what does he do? He turns to AI to argue with their AI. And so, in a sense, our health is already in the hands of this technology. Because it's not transparent. It's being done in this kind of circular way. You hope you have a doctor who's
[00:18:41] going to fight for you because that's what it takes. And if the doctor can use these tools to fight more effectively, good. But you know that the insurance companies are using the exact same things to deny. And so we're stuck in this really weird vortex here.
[00:18:57] We hope the doctor's AI is better than their AI. And if you want to start a new startup for AI, start one around this to empower doctors to deal with evil insurance companies. So is it that far off from Ariana saying,
[00:19:12] this is, darling... and she never says "darling," actually; I don't think she does, with the accent. Here's what you should be doing for your sleep. Okay, but maybe you have sleep apnea. And
[00:19:22] maybe there's you need to go see a doctor and you need to understand what's going on with it. And the AI can't know this because it doesn't have the evidence. It doesn't have the data.
[00:19:30] That particular data. Yeah. So on the other hand, one year I had something wrong, a vascular thing. And I went to a series of four doctors, and it was classic blind men and the elephant, right? I'm a pulmonologist, so I'm going to look at pulmonology things.
[00:19:46] I'm going to think it's pulmonology, even if it's not that. Right. And each one in their way. AI, I think, can broaden perspectives, give other possibilities, as long as it's close to being under human control here. So we'll see what Arianna does. I think basically half of AI right now,
[00:20:04] half of OpenAI, is their PR company. They're doing things to get them stories exactly like this. Right. Yeah. These kinds of stories are great for a company like OpenAI. Yeah. And from a marketing and awareness perspective, it's great for OpenAI because
[00:20:24] it seems like they're doing the right thing, whatever the right thing is. Everybody can get behind keeping people or making people have healthier lives. And especially here in the U.S., we've all experienced how difficult and frustrating the medical system is here.
[00:20:47] And so I think we're thirsty, we're hungry for things to shake up a little bit or to become more effective or more inclusive and fair. And so on its surface, something like this really does seem like, oh, well that could maybe level the playing field. That could maybe
[00:21:06] empower me as a receiver of health practices to feel like I'm being taken care of and being noticed for what brought me there in the first place when the previous system doesn't. But then the flip
[00:21:24] side of that is: yeah, but is it actually providing the appropriate care or appropriate information, and that sort of stuff? Another thing that comes to mind is, what does it look
[00:21:36] for to find out if it's appropriate? This is what doctors do. I had two uncles who were doctors, and they were diagnosticians. Diagnosis is a logical process of knowledge and reasoning.
[00:21:50] So maybe you got there but it ain't there now. Yeah. And so is this any different? And I think it probably is to a certain degree, but I'm curious your take on this. Is this any
[00:22:00] different than like a publication putting its archive into an AI agent and then allowing people to interact with the information there? Is it elevated because, you know, like the risk is elevated here because it actually has something to do with people's health versus just like
[00:22:17] information and knowledge. I think that if a publication puts their stuff in, as we talked about with Schibsted, you know, show number two, and says that's the only source we're using, and it cites it, then it's a new kind of search, really. It's a way to get
[00:22:36] to information that is in a form we understand and know and has been vetted by human beings as best we can do it. I think that feels better. What about you? Yeah, I mean,
[00:22:49] I do think, like I really do think, that when it comes to publications putting their information into a chatbot that I can interact with, like we've had enough conversations, with the Schibsted interview and all, that it makes sense to me. I think it's really just,
[00:23:10] and so I guess where I'm getting at is if it works over there, why wouldn't it work over here? And they seem like similar things, just different industries, different approaches. And I think possibly the difference is that the stakes are just maybe a little higher
[00:23:24] when you're talking about someone's health versus talking about an information source that you're searching because you want to review a product. You want to know what publications take on a
[00:23:35] product, you know what I mean? I guess that's kind of where my mind is at on this right now that I'm kind of struggling with. Yeah, like I could see how it could be unsafe for someone
[00:23:49] to trust the AI with their health data and because the consequences are so much greater there if it gets it wrong. And I don't know that I have an answer for you, I guess. It's just
[00:24:04] kind of two sides of the coin that I see similarly but differently, and I don't know why. I guess that's what it is. And actually, along those lines, I'm just skipping ahead a little bit:
[00:24:18] the Washington Post is doing exactly what we were just talking about: launching its own AI chatbot, integrating Washington Post articles about the climate as the data source. Then users can ask questions about the climate, and those answers are pulled from that data pool.
[00:24:40] Climate Answers is actually the name of this. It's been in the works for six months or so, and they're working with different AI services like ChatGPT and Meta's Llama, so they've got open source in there. And really, the goal is that the chatbot answers
[00:25:00] that are generated here can be backed up by their journalism. That's their stated goal. Yeah, it's interesting. It has a whole bunch of questions you might ask: Does sunscreen hurt coral reefs? Can you recycle pizza boxes? Does recycling really work?
[00:25:22] And it's fine. The answer comes back and gives you what they have from their reporting, and that's all fine. The thing I don't know is whether this question medium will be the one that people want to use. Right.
[00:25:37] They may well. I think it's only going to come out in the data, to see whether people make use of this, and whether they find it provocative and useful and interesting and informative, so that they keep going with other questions and more questions like a three-
[00:25:48] year-old would? I don't know. But it comes back to this: as we found, here's a paragraph, and we found these three articles that address this. I'm eager to see what the data comes back with. Yeah. Yeah, it kind of looks like they're playing around with it
[00:26:04] in a way that allows them to kind of again kind of dip their toes into what it's like to integrate their own information into a chat bot like this and provide answers. Not going so wide that it's
[00:26:16] like, here's our entire corpus of information, you know, kind of keeping it limited. It's a good test case, I guess, for the Washington Post. And they're not using it to try to create more content to
[00:26:27] feed onto the web, to junk up the web; they're trying to use it to get more utility out of the content they already have. Which I think is much smarter. And actually, this isn't the first
[00:26:39] AI-driven feature that they've had. Last month, apparently, they started rolling out a summary product, a summarization product, on about 10% of the Post's stories. So you could interact, you know, with the AI to generate a summary of the article, I guess, that you are reading.
[00:26:58] So they're playing around with that as well. Yeah, reporters... I heard this from Schibsted too, at the event I had in New York: reporters hate writing those summaries. Yeah. Yeah. And academics have to write abstracts, and they hate writing them. But they're useful, because they're a way into a
[00:27:13] larger article and they're good to have. So that's a case of labor slash irritation saving that may be useful if it's good. If it's good. Yeah. But if I just wrote the story,
[00:27:25] I can then judge whether the summary that it gives for that purpose is good or not. And then I think that's fine. Yeah. I mean as a reader of sites, yeah sometimes I really appreciate
[00:27:36] there being that summary. So I'm on the other side of it. Although, you know, a lot of the prep that I do for these shows is summarization. So I get it, you know; sometimes
[00:27:46] summarization can be a little bit of a slog. But I guess at the end of the day, as a reader of a site, when you read something that has been generated like that, is it good enough, or does it have that
[00:27:58] like AI glaze to it that you're like, oh... Look at the way Schibsted did it: they were requiring the writers to do it. Now here's a tool that helps you do it, but the writer is still held
[00:28:29] accountable for whether or not it's good. Yeah. Yeah. Interesting stuff. And then finally, before we take a quick break here: Microsoft and Apple are giving up their OpenAI board seats. Regulatory scrutiny is to blame, of course, with antitrust attention being paid to big tech companies
[00:28:50] and their partnerships with AI startups, looking closely at those partnerships. Microsoft has admitted that it has, you know, done a reversal in this regard in an open letter to OpenAI, after getting a non-voting role post the big Sam Altman dust-up that happened last year,
[00:29:10] which seems like an eternity ago at this point. Apple was expected to take an observer role post iPhone integration of OpenAI, but sources are saying that it's now backed out. Apple isn't actually commenting on this publicly currently, but such is the case.
[00:29:10] So, all right, well, we are going to take a really quick break and then when we come back, we're going to talk about a few other stories that caught our attention. Some of them actually relating to the conversation that we just had about Washington Post here
[00:29:26] to a certain degree that's coming up in a sec.
[00:30:01] All right. Before the rest of the news, we get to talk a little bit about Perplexity, my AI crush. And I honestly don't know how much I care about this, you know, from a personal perspective,
[00:30:39] because it's not like this would benefit me in any way, shape, or form. But apparently Perplexity is going to start a revenue-sharing program with web publishers beginning next month. They plan to run ads alongside their search queries. And one of the quotes I saw is: if
[00:30:58] they, so that would be web publishers, if they are contributing a source input for an answer and we're monetizing that answer with advertising, we're going to share that revenue with those publishers that contributed to that. What do you think about that?
[00:31:12] I don't know. I guess I'm not opposed to it. I mean, sure. Honestly, when I saw this, I wasn't entirely certain how I felt about it, but I was super curious to know how
[00:31:26] you feel about it. Because, you know, you'll have an opinion. I knew that you'd have an opinion on this one, so I put it in there. I didn't until this very second, but I actually think this is
[00:31:35] a good model. It's better than the "Here, Rupert, here's a bucket of money so you don't sue me and lobby against me" approach, paying for things that I don't necessarily need or want. This is on merit.
[00:31:44] And this says if we used this in the output, they're honest about doing so. And I think they need to be because they need to cite things. Then there's a way to know
[00:31:54] that that had an impact. And if they make money on the advertising, then to share that on some equitable basis is a model we know from search and other things. And I think that that
[00:32:06] makes a lot of sense. I guess it's a far smarter way to do it than what OpenAI has been doing, which is basically a payoff, voluntary or otherwise: making the big deals whether you like it or not.
[00:32:23] Yeah. And this, by the way, would not just apply to one Perplexity product and not the other. So they're saying this includes the standard Perplexity product, not just Perplexity Pro, and not just media organizations either; also WordPress sites, newsletters, etc. So,
[00:32:44] yeah, Perplexity continues to be an interesting service, definitely the one that I use the most. Actually, related to that, I should also mention I did end up getting... You got your Rabbit. I got my Rabbit R1. It's a funny device. That's all I can really say about
[00:33:01] it at this point because I haven't used it an insane amount since I got it. The problem with this device or just where I'm at with it right now is again what, you know, and it's by no
[00:33:11] means an original take, but it's really difficult to in the moment say to myself, this is a perfect opportunity for me to use the rabbit R1 instead of using the phone that I always
[00:33:23] have with me. Like it's, it really truly is an extra device you have to remember to bring with you. And then when you have it and when I have it with me, then I'm, I feel like I'm constantly
[00:33:33] like on guard. Like I have to, I have to justify that I brought it with me and I have to look for the things to use it for. And then it does okay. Like it doesn't even, you know,
[00:33:46] really do great in a clutch situation. So yeah, but it's a unique-looking device, you know. The orange is very eye-catching, and the scroll wheel, as somewhat useless as it is, is kind of,
[00:34:02] yeah, I mean, it's a perplexing product. I have to say it actually makes sense that they partnered with Perplexity, because it's very perplexing. Yeah. Do you feel like you're missing out, Jeff?
[00:34:19] Well, you know, two months ago, yes; with all of the blowback, less so. But you've gotten Perplexity. I'm glad that you convinced yourself all along it didn't matter how well the Rabbit works, because, on reflection... That's exactly it. Like, like I really see and I
[00:34:35] keep, I keep threatening to do a video about this just because I'm sure just from a, from a controversial standpoint it might actually do well is just the thought that like, yeah, this is, this is not a great product. Like as far as tech products are concerned,
[00:34:50] this thing does not live up to its promises, to its expectations. It's, it's really kind of confusing and a lot of the choices don't make any sense. So as a tech product, it's horrible.
[00:35:01] And I'm perfectly satisfied with my decision to buy it, because it was my entryway to actually, like, really buying into an AI platform like Perplexity. And I didn't realize it at the time.
[00:35:15] I was like, oh, I'm curious. I'll check it out. And now I've grown to really use perplexity a lot for a lot of things that at least in my current incarnation of this business come in really handy and really do save me time.
[00:35:27] You know, it's weird in a way. It should have been a business model for Perplexity as a company: hey, subscribe to our thing and we're going to send you this little free thing. Totally. Yeah. Yeah. Right. Have fun with it. Yeah.
[00:35:38] And then the pressure wouldn't have been nearly as much. You're absolutely right. I totally agree. That's a great point. But Perplexity, I don't know what the deal was there. I'm sure Perplexity made money
[00:35:48] off Rabbit. Yeah, for sure. Well, and I think some of the answers on the Rabbit are derived from Perplexity, I believe. Yeah, I've got to look into that a little bit more. But yeah, I think you're right.
[00:36:00] Do you see any other AI services so far that are ad-supported, or is that kind of news in and of itself, that Perplexity is planning to be? Yeah. That's a really great question. I have not engaged with any AI
[00:36:15] services that are ad-supported, at least that I've noticed. No, like, that's part of what caught my attention on this: I don't know that I had really seen that before. And not just running ads, but running ads and then sharing the revenue with people. It just seems like a really interesting
[00:36:33] approach to go for with this. So I'll be curious to see, once they start rolling it out, how that surfaces. Will I see ads even though I'm a Perplexity Pro user, or is that isolated
[00:36:48] to just the free users? That'll be a thing, I think. But yeah, we'll see. Yeah. Interesting stuff. You put an article here about selective indexing. I thought this was a really interesting read and
[00:37:03] kind of sad. You may wonder what this has to do with an AI show, but I think it's exactly about the kind of implications and aftermath of AI and content creation. Yeah. So what's his first name? I'm suddenly forgetting. Vincent Schmalbach. It is an eponymous blog,
[00:37:21] VincentSchmalbach.com, which says that Google now defaults to not indexing your content, that Google is doing selective indexing. I didn't do it in time, but I DMed Danny Sullivan at Google to see whether they would say this is true or not. Yeah.
[00:37:36] But Schmalbach is an SEO expert and consultant. So he said that, from his experience, Google now seems to operate on a default to not index. It only includes content in its index when it perceives a genuine need. And this is his speculation: this decision appears to be based
[00:37:53] on various factors. One, extreme content uniqueness. Two, perceived authority. Three, brand recognition. And four, temporary indexing and deindexing; sometimes Google can index things very quickly to avoid missing out on breaking news or something, but then may deindex it because
[00:38:14] they think it's ephemeral. All of this, to me, makes complete sense, because the web is being loaded with crap, with junk. We did that as human beings. Before we had generative AI,
[00:38:29] we know it's going to get a lot worse with generative AI. We know it's being used for all these fake reviews and crap. And so for Google to become selective to improve search... everybody's saying Google has gone to crap. Well, that's because the web has gone to crap.
[00:38:45] I stopped saying it was crap. Google has gone to hell because the web has gone to hell. Well, which is the chicken and which is the egg there? Who is to blame for that? I think that flooding the web with junk is the essence of the problem. Now,
[00:39:03] what's frightening about this is, you want to start a new product? Hi, Google, I'm here. Well, I don't know you. You get into a catch-22: until you're decent and popular, I'm not going to index you. Well, how do I get popular if nobody can find me?
[00:39:15] At the same time, Facebook is turning hostile to links, and not just news. The other day, I wrote a blog post on BuzzMachine and I wrote a paragraph summarizing it. And I said,
[00:39:29] this is what I wrote. And I linked to it. And then Facebook took it down, saying that I was spamming. Oh, really? So they've turned really hostile to links of many sorts, not just news. When
[00:39:44] the news bill, C-18, passed in Canada, Meta took news down. Now you cannot put news from news sites on Facebook or Instagram in Canada. A publisher I know there said, how do I start a new
[00:39:57] site? The way I got traffic was to have social exposure. That's gone now. So if you can't get links out of social, and you can't get links out of search, and you're something new like,
[00:40:13] oh, I don't know, something called AI Inside, how the hell do you get discovered? So it's understandable where this goes, because people were spamming it; this is why we can't have nice things. But the implications, I think, are potentially disturbing. Yeah. So I think
[00:40:32] it's an AI story and impact to me. For sure. I mean, yeah, like you said, the AI systems that exist right now... right now, not very long since the beginning of this kind of current
[00:40:45] trend of LLMs and, you know, AI being authors and all that kind of stuff, right now it's already putting out an insane amount of junk. Isn't there a word for it? I can't remember. I feel like it came up on the show not too...
[00:41:01] What's the word for what they call it when you feed sharks? Slop, or chum, or something like that. Yeah. Sometimes it's called chum. That's also Taboola and Outbrain and that junk. Yeah. Well, that's true. I mean, and that is a really fair point that you make: it's not
[00:41:16] like LLMs and this current moment in AI is the beginning of articles like what you see on Taboola, articles that, sure, were written by a human, but it's kind of the slop of the slop, you know? Right.
[00:41:34] And it's everywhere because they know that they can generate a lot of eyeballs and potentially come up on search if they fill the web up with junk. And now AI suddenly is very, very good at writing unlimited amounts of it very quickly. And so Google, yeah, was forced,
[00:41:56] seems to be forced into a position where it has to make those decisions. Vincent Schmalbach, who wrote this piece, calls Google's search product an exclusive catalog rather than a comprehensive search product as a result. And that's unfortunate. But he also does point
[00:42:16] out earlier in the article and I think you alluded to this as well that, you know, even from the beginning, sure, the mission was to organize the world's information and make it universally accessible. But, you know, along the way people, humans, not AIs, humans learned
[00:42:34] how to game the system and learned different ways to make their content appear at the top of Google's algorithms and its search product. And so Google had to make changes even then. It's
[00:42:49] just now it's to a severe degree. Yeah. That's what led to Panda, which was the first major Google search algorithm update. I go back many years: when The New York Times bought About.com, they brought me in to consult, to try to make it more journalistic.
[00:43:08] And it was a brilliant model. People asked questions on Google. They got sent to About.com. About.com had the answer. Half the ads that were displayed on About.com were Google ads. It's a beautiful business. But then everybody else discovered it: there was no barrier to entry,
[00:43:23] content farms emerged, all kinds of junk emerged. And Google had to come up with a rule set that degraded the content farms, and About.com was the dolphin caught in the tuna net. And it went down too. And eventually it was no longer About.com. It is now
[00:43:43] the company that bought Meredith. If I go to About.com, does it actually go to Dotdash? Dotdash. Thank you, Dotdash. And what is Dotdash? Dotdash is a content farm.
[00:43:54] And it owns all kinds of brands, like Entertainment Weekly, my old magazine, and People, and lots of other brands, Better Homes and Gardens, where all they do is create all this content to try to
[00:44:05] fill the web with this content to try to get the traffic. And that's what happens. Same thing happened with BuzzFeed: we discovered a new model, we know virality, we're going to prove it works, and then we're going to sell that skill to the advertisers. And then
[00:44:17] everybody else, with no barrier to entry, figured out how to do it. What happened? Facebook and company had to degrade that behavior, and down went BuzzFeed with it. So it's a really interesting problem coming forward in an AI age now.
[00:44:33] How do you stand out? And I think Schmalbach's list is not a bad list of things that one should be doing. Do you have extreme content uniqueness? Do you have authority? Do you have recognition?
[00:44:47] I would add one more thing: do you have value? And if you have those things, can you stand out in what we have now in search, in social, and next in agentic
[00:45:01] AI? We'll see. Yeah. So I thought it was a really interesting story as a result. Yeah. Super interesting. Yeah. It was a great read and also kind of sad. Yeah. Oh, the Google of old. Meanwhile, speaking of junk content going on the web, the next story
[00:45:20] I put in there on the rundown, which I found interesting: Mashable, Lifehacker, and PC Mag have a new contract. They're all part of Ziff Davis. Yeah. And the contract says that Ziff Davis,
[00:45:31] which is a good company (I know folks there), cannot lay off workers or decrease their salaries due to generative AI. Right. And also that AI can be used, must be used, at the direction of
[00:45:46] and with editorial review of human beings with editing responsibility. So humans must be involved when there is generative AI. Right. And you can't use it to make layoffs. So I think this is
[00:45:58] a very good step forward. Yeah, absolutely. It still leaves a lot of latitude for Ziff Davis to figure out how they want to use it. But if they just use it to get rid of people,
[00:46:06] we've got a union saying no, you can't do that. I think that's a good outcome. It doesn't control how the generative AI is used in its editorial process, necessarily. They're going to form an AI subcommittee to discuss their plans with union members. I think that kind of collaboration is only
[00:46:20] going to be to their benefit. It requires disclosure. The goal, yep, is to create a space, legally binding in the contract, for union members to be part of the discussion. Good. Cool. So hats off to Ziff Davis for what I think sounds like a
[00:46:40] model for others. Yeah, indeed, at a time where, boy, it's rough out there for people working in this industry. You know, and the CEO is Vivek Shah, a very smart guy, very forward-thinking,
[00:46:55] smart business executive. So hats off. Yeah, yeah, I think that's great. A couple of months ago at Google I/O, Google showed off, or teased anyway, as they do, and then you've got to wait months
[00:47:11] and months and months. And if you're an enterprise subscriber, you might never get it. Although I think in this case you probably will, Jeff. I'm guessing; I have faith that you'll get a
[00:47:21] hold of this. Well, don't jinx me, man. It's the Vids product we're talking about, announced back in April at Google I/O. Now it's being tested out in Workspace Labs, so you can opt into
[00:47:35] it and basically what this is is this allows you as a user to, you know, feed it your docs, your slides, voiceovers, video recordings, and it lays it out on the timeline and creates
[00:47:49] a presentation out of those pieces. So you can actually do things like use Gemini to generate stock footage, generate a script with AI voiceovers, and it creates video, not, you know, Sora-type stuff necessarily, but it really creates a presentation video
[00:48:11] that you can then go in, edit it down if you see any points that you need to edit and show it at your next Google Meet meeting, I suppose. So it says coming soon to Gemini for
[00:48:25] Google Workspace. Oh, this is the other problem: you have to pay for Gemini for Google Workspace. I pay for Workspace, but I don't pay for Gemini. Yes, yeah. It's that extra level, and I'm not paying it. Sorry, Google. Yeah. Wow. Wow. But that's kind of neat.
[00:48:41] I'm sure, you know, I would be curious to play around with that product just to kind of see how it does in, you know, chopping up these things and putting them into a sequential order. And
[00:48:52] you know, again, it's probably like having not even played with the product at all, I'm guessing it's probably the kind of thing that, you know, as long as you don't look at it as a
[00:49:03] replacement for your need to do anything, you'll probably be fine. If you'd look at it as a great starting point, I'm guessing it'll be a great tool for you, you know, it's probably not
[00:49:11] going to get 100% of the things right, but it might get things going and save you some time. That's neat. So just FYI, to get the Gemini Business level as an add-on to Workspace: it's $14 a month,
[00:49:25] on sale from $24, per user per month. But I have two friends who still use email on my account. And so I'd have to pay for three. Oh, so, oh,
[00:49:37] it requires you to pay for the number of users you have. Yeah. Yeah. Gemini Enterprise is $30 per user per month, AI Meetings and Messaging is $10 per user per month, and AI Security (to raise security posture with automated data protection; I'm not sure what that does) is $10 per user
[00:49:56] per month. A lot extra if you want to use AI with your Google products. Yep. This is the way it goes. And then finally, Disney teamed up with a startup called AudioShake, and they are going
[00:50:13] to allow their music catalog to be deconstructed by AI. The stems, they say, can be used for things like remixes and sampling; you could separate the vocals from the music, etc. Specifically, it's really useful on recordings where the
[00:50:27] multitracks don't exist. So when we're talking about, like, those ancient, you know, Mickey Mouse cartoons from many decades ago, all that you have at this point is just the
[00:50:45] final, you know, film or whatever. And tools like these, now that they are capable of doing this, allow them to break those apart. That might be useful for things like sync licensing, for remastering, for new formats like lyric videos, and other things. So anyways,
[00:51:03] interesting that Disney is getting in on the separating-the-stems-of-music AI train, I suppose. I don't know. It's interesting to me. I like audio. Yeah, you're a music guy.
[00:51:16] But I do think, you know, this is just another one of those examples of a company that has a massive back catalog of things, and tools like this suddenly
[00:51:28] opened the doors for new things that you couldn't do with that back catalog before because they were just so old, you know, they did things differently back then versus now. And now, you know, tools
[00:51:39] like these really, really make new things possible. And I'm curious to see what comes out of that. There's a lot of effort. I went to a BDMI, a Bertelsmann investment thing, six, nine months ago. And what you can hear from the entertainment companies is: we have this huge
[00:51:58] asset, we want to try to extract more value from it. Yeah. Yeah. And so they're constantly looking for ways to pull stuff out, which is okay. I understand that, though I wish they'd also worry about, you know, future-thinking creativity more, instead of trying to take
[00:52:12] existing franchises and existing property and milking it, which is the way they think first, because it's cheap. I get it. There's less risk. They already own it. But it's, I would say, only so creative. Yeah. Interesting stuff. All right. Well, we have reached the end of this
[00:52:30] episode. We've talked about a lot of news. You are of course going to talk about some of this stuff, I'm sure, and some of the stuff that we didn't get to that you put in the
[00:52:38] rundown on your next show a little bit later on today, This Week in Google. Who knows? I'm not in charge. It's not, it's not a democracy. You put it in there. Here it is. Here it is. That's right.
[00:52:49] I specifically said this morning, anything you really, really, really want in there, move it up, and you did. And I appreciate that. Jason said, I've got to go out on errands. So, you know, while the cat's away... No, and that goes for any week, man. Anything you
[00:53:04] absolutely want. Put it up and we will talk about it. This is inside, you know, this is inside AI inside here. Yeah. Jason, you have full license to say, why the hell did he put that here? See, and erase it. See, but I probably wouldn't even do that
[00:53:17] because I go, if you put it in there, it's in there for a good enough reason. I'm curious to know why it's in there. No guarantee. No guarantee. Anyways, this is AI inside where
[00:53:26] it is in fact a democracy. Jeff Jarvis, always a pleasure to get the chance to hang out with you and talk a little bit about this topic that is so endlessly fascinating to me. GutenbergParenthesis.com is the place where people can go to find your books, Magazine and The
[00:53:41] Gutenberg Parenthesis. And I keep looking for the coming soon, but it's not there. I've got to get around to making the page. I just got a discount code, so I'll figure out how to add that to the page. The Web We Weave. Right.
[00:53:54] The Web We Weave. Yeah. Right on. Good stuff. Good to have you back. Great to talk with you about it. Great to be back, friend. And totally related to the topic of this show, I actually did a video
[00:54:05] yesterday, released it yesterday, on the Techsploder YouTube channel. So, Udio, Udio, I really truly do not know how to pronounce the name of the company, but they are one of the
[00:54:18] many kind of AI music generation startups out there. And I, as a musician, look at a tool like that, and I don't want to create songs from start to finish with those things the way a lot of people
[00:54:30] do. I look at it as an idea machine for my own music. And so I created a video, which was like: if I start an idea the way I normally do, and then I run it through Udio and I say, Udio, come up
[00:54:43] with like five different ideas or directions I can take my song. And it gave me all of these, this stuff back. And then I show you, you know, me integrating it into my workflow and
[00:54:53] everything. And it was a lot of fun. It was really cool. It was like the AI wasn't, you know, there to replace anybody. It was just there to offer suggestions kind of, you know what I mean?
[00:55:03] And that's what I love about the AI tools: it's the possibility partner. Exactly. That is truly a help. I went to a BDMI, a Bertelsmann investment conference, the bigger conference, and moderated a couple of discussions about a month ago.
[00:55:17] And there was a tool there. I don't think we talked about it here. I got to remember the name of the tool where an illustrator, kind of a comic book and anime illustrator
[00:55:27] got up and explained that he could feed it all of his work, and then it would know his style, and then he could instruct it to do things. One thing I hate doing is backgrounds. It's just tedious.
[00:55:38] Yeah, the mountains. So it could present a background to him, and then he could modify it. He even took photos and said, okay, do this photo in my background style. This is what
[00:55:50] I want to have here. I want to have this seashore, but it would redraw it. Then he could put in a rough sketch of a character and it would give it back to him in a certain way. So, okay,
[00:55:58] but he could modify it to his heart's content. And so if you could say, this is me as an artist, and help me do stuff, come up with that bass track. I don't like doing bass tracks.
[00:56:11] Right. And you're in control of it and you can modify it and you can create more. I think that's pretty cool. Yeah. Yeah, I think it is too. Yeah. I will say like this is not the only video that
[00:56:21] I've done along this line. So I did another one that's actually done really well. It's kind of the best video, the most watched video on my YouTube channel so far. And what happens when
[00:56:31] you get to that point is you get a lot of new people, which is amazing, you know, seeing your content, but then you get a lot of comments from people. And it's been interesting to read some
[00:56:40] of the comments that there's definitely the line of people that are like, thank you. This is how I see AI tools too. It's a collaborative partner. It's an idea machine. It's like,
[00:56:51] and then there's the other people that are like, you're making a deal with the devil and you, you know, you're the reason that everything is going to go to hell and blah, blah. How dare you use a typewriter instead of a quill?
[00:57:03] I know. It's like, do you feel this way about Photoshop? Like literally it's just a tool that I'm using to like get ideas. I'm not even like using the audio that it's giving me. I'm like
[00:57:13] recreating it myself. So anyways. I'm writing the typewriter chapter in my book right now, about the rise of the typewriter. And when the typewriter came out, people really resented it; they thought it was junk mail if people used it for a personal letter. One great thing in
[00:57:32] one of the books I read is, you know: you're insulting my intelligence, you're implying I can't read handwriting, how dare you do this to me? You know, and this is the adjustment we make. Yeah, it's an adjustment. Right.
[00:57:45] Any, anytime there's something new, yeah, there's that. Here's my wonky trivia for the day. I didn't know this: until the 1820s and '30s, everything that was written was still written with goose quills. Steel-nibbed pens were not made at scale until the 1820s
[00:58:04] and '30s. Interesting. So geese were farmed for their quills. Ouch. Wow. Yeah, think of all the geese we saved when we moved to steel. Now they're all flying around and crapping, I
[00:58:16] used that word again, on everybody's golf courses, but fine. That's okay. The way it should be. Yeah, the way it should be. Oh, that's fascinating. Anyways, this random factoid brought to you by Jeff
[00:58:30] Jarvis. AI Inside, which you can watch every Wednesday. We do this show 11am Pacific, 2pm Eastern, again on the Techsploder YouTube channel, youtube.com/@techsploder, all one word.
[00:58:44] We do publish this show of course to the podcast feed later that day. So if you don't catch the video version, that's fine. You don't have to. You'll get it in your podcast.
[00:58:54] Do be sure please to like, rate, review, subscribe, whatever you can do for this show. It goes so far to bringing in new people to the show and we would really appreciate it. Tweet about it.
[00:59:07] Yeah, share it. X about it. Share it out. Anything, any little tidbit, anything you disagree with. Yeah. It's provocative. Please, it helps. For sure. Yeah. We'll hit you back, you know, if it is something that you disagreed with and you want to spark a conversation about it,
[00:59:23] like, we're there to, we'll... And don't forget to go to the site and fill out the profile. Yeah, the survey: aiinside.show/survey. Go there. Let us know a little
[00:59:37] bit about yourselves, and it's really going to help us out. And then finally, patreon.com/aiinsideshow, to participate in helping us. We've, you know, got some exclusive content. Actually, that Udio video, I posted it early for certain tiers on the AI Inside Patreon
[00:59:56] before it went live on YouTube and you can become an executive producer like the amazing Dr. Do, the amazing Jeffrey Maricini and the amazing WPVM 103.7 in Asheville, North Carolina. Y'all are awesome. Thank you so much for watching and listening and for learning with us each and
[01:00:15] every week. We will see you next time on another episode of AI inside. Bye everybody.



