Jason Howell and Jeff Jarvis discuss the latest AI news, including OpenAI's new Sora video generator, legal issues for an Air Canada chatbot, missteps in publishing AI art in a medical journal, and Jason gives a hands-on demo of Perplexity Pro including a solid example of why it's necessary to scrutinize the output of LLMs.
NEWS
- Release of OpenAI's Sora for video generation
- Debate over whether advanced AI capabilities indicate nearing AGI
- Meta's new AI model V-JEPA learns from video like LLMs learn from text
- Effect of Sora release on Adobe stock valuation
- Air Canada chatbot provides misleading refund info, company gets sued
- Scientific journal publishes AI-generated rat diagram from Midjourney
- British painter Harold Cohen spent 40+ years refining an image-generating robot
PERPLEXITY PRO DEMONSTRATION
- Integrates multiple models like GPT-4, Claude 2.1, and Stable Diffusion XL
- Capability to search across internet sources with up-to-date results
- The need to verify its output: it got its own capabilities wrong by claiming it uses a model called "Sora"
Hosted on Acast. See acast.com/privacy for more information.
This is AI Inside Episode 5, recorded Saturday, February 17th for Wednesday, February 21st, 2024, The Chatbot Effect. This episode of AI Inside is made possible by our wonderful patrons at patreon.com/aiinsideshow. If you like what you hear, hop on over and support us directly.
And thank you for making independent podcasting possible.
Well, hey, hello, everybody. Welcome to AI Inside. I'm Jason Howell, sitting here, actually pre-recording this on a Saturday, just a little bit of insight. We did have to record this a little bit earlier, but you're still going to get this podcast at the time that you're used to getting it on Wednesday.
I'm skiing on the slopes of Park City, probably at this very moment that you're watching or listening to this. So that's why we had to do it early. So I'm here. I'm happy to have you here. And I'm happy that Jeff Jarvis was able to join me a little on the early side to make up for my vacation.
And I'll tell you why I can't ski, which we talked about in the last episode. I was walking on a sidewalk with a slight incline and they over-salted it with stuff. It was like walking on marbles. What did I do?
Boom. Fell on the wrist that I injured the last time I was on ice. So I am too damn clumsy to go up above my height in any way and slip and slide. Not doing it. Yeah. Not doing it.
See, the salt's supposed to prevent that.
That's the idea. It's just unfortunate. I was visiting a university and I said this. I said I was going to sue the university, and every other person there said, oh, it happened to me too. I said, we've got a class action suit. Let's do it.
Darn. Yeah, damn you, salt. We need the snow so we can be slippery like naturally without any sort of human intervention. Well, I hope you never fall on snow or salt again, Jeff.
And I hope you come back. It's no fun. Everybody in the family comes back in one piece.
Me too. So far, so good. Real quick, thank you to everyone who is subscribing, everyone who is telling your friends, everyone who is leaving a review or even just a rating in Apple Podcasts. I'm loving seeing those coming through and I really do appreciate it.
We really appreciate it. And then of course, if you are so inclined and you wish to support us directly, why you can, there is a way to do that. It's patreon.com/AIinsideshow. That's the place where you can go and support us directly. It really helps this independent podcast out if you do that.
So thank you so much for listening to that. And with that, this week, we do not have a guest. It's kind of similar to last week's episode. And I'm actually really enjoying this, by the way. I love being able to kind of rattle through some of the news and everything.
I feel like I'm learning a lot just by doing this. And this week, like in the short period of time between our last episode and when we're recording this one, there's like a mountain of news anyways, including some really big stuff that we're going to talk about today.
So do give us feedback on which formats you like and which things work in the show. It's brand new and we're figuring it out. By all means, let us know. Totally.
I mean, this is a work in progress and we want to create a show that you want to listen to and you want to watch. And if there are formats that you appreciate more than others, or maybe it's a mixture of them or whatever, contact@AIinside.show is an email that will come directly to me. And I can forward that feedback on to Jeff. But let us know what you're thinking. It's really important. You know, we want you to feel ownership in this as much as we do.
And I think it's most successful when that happens. So thank you. OK, so let's let's just dive right into the news. And actually, this seemingly came out of nowhere. It's not the first time that we've seen, you know, video generation artificial intelligence. I've certainly talked about it, you know, from time to time services like Pika, which is one that I've used. But when OpenAI gets into a particular, you know, angle or aspect of AI that others are doing, it becomes a big deal. And actually, in this case, it really is kind of moving the goalposts forward as far as generative video AI is concerned.
The service is called Sora. And OpenAI says, you know, create realistic and imaginative scenes from text instructions, which is kind of what we expect. But these videos can be up to one minute long, which is, by far, much longer than any of the other competitors that have been out there. So that's an impressive duration, an impressive length of output. But also, just the quality and the fidelity of a lot of the output is really a sight to behold. Have you taken a look at some of this stuff? And while you talk about it, I'm going to pull some up for video watchers. We'll narrate this as we go. Yes. Yeah. But I mean, what have you thought?
I kind of agree. I think Sora is pretty amazing. And the video says, the videos you are about to watch are not real. So we're seeing a dog, and this is going to be commented on in a second, moving from one window to the next, defying, yes, behavior, gravity and physics. Now we have a chipmunk scene, but that's obviously just fun, so it can do whatever it wants to.
Yeah, it's a cartoon. It's very cartoon-like, you know, looks like a dream or like a Pixar movie.
And now we have a close shot of a chameleon, which is striking, with beautiful colors. The background is pretty real. A dog with a selfie camera and the ocean waves in the background and a bird flying by at a very low altitude, I might add.
And it does look like the selfie camera is growing out of the dog. But yeah, two weird little things like that.
Yeah, walking through the woods. We'll see whether it's going to kill any animal here. Will it? No, it ends before that. An aerial view of a scene. There's another scene in this video of a gold rush town.
Here's kind of a Bob the Builder view of construction. So all of these are intricate, busy, fairly realistic. Yeah, they're visually compelling. Right. And so one cannot help but be impressed by them. Now, of course, already I saw a headline that somebody is terrified by this because it'll be used in deepfakes and it'll fool the world.
And it's like, calm down. I come back, Jason, to the idea that I've had for quite some time now, which we've discussed before, which is that I wish all of this had been presented originally as a creativity machine. Because this is phenomenal. People can use it in amazing ways to tell their stories and to invent things and create things. And it brings the power of creativity to more people. I love all of that. But it's going to be used for deepfakes.
It can't be used for news. All that's true. I think the discussion about generative AI just started off completely wrong. That's what I think about those videos.
Yeah, I tend to agree with you, Jeff, as far as especially kind of like the tool with which these systems can be utilized if looked at through that perspective, you know, by a creative or by a creator or whatever.
And I can also completely understand. You know, OK, so to take a step back really briefly: I've been working on some YouTube content, like trying to kind of get the YouTube channel together and do some review stuff. And, you know, there are all of these services that cater to creators; Envato Elements is one example. There's a ton of these sites, and, you know, Adobe Creative Cloud has lots of stock imagery, stock sound effects, all this kind of stuff, so that when you are creating something, if you have a need for, say, a video of, you know, sharks flying through the air.
I only say that because this apparently has it. If you have a need for sharks flying through the air, you know, you can go on to these services, you can find it, and they're licensed to you because you're paying a fee to use the service. And someone created those things. And now we're kind of facing a situation where, you know, potentially, eventually, these kinds of generation systems will be good enough that, just using this as one example, a creator can go here and instead just say, this is what I need, give it to me.
And I know, as the creator, that no one else has used this video before, at least not the way that it is right now, because it was created in real time for me. And so I can understand the worry and the concern that some people have about, like, well, wait a minute, this really changes things. I don't know that necessarily that worry means that it shouldn't happen, though. I think it just kind of changes our skill set.
Hopefully it changes how we approach these things and opens up new opportunities, new doors, new kind of supercharged abilities for us to be even more creative than we could before potentially.
Yeah, I was talking to an English professor at Montclair State University here in New Jersey yesterday. And it was wonderful to watch her talk about it. One of the videos is, I think, a couple of mammoths going through the snow. And she dropped what she was teaching for the day and said, let's concentrate on this and let's look at the prompt that did this. And then she assigned the students to write prompts for what they would want to make. And she said they got terribly excited. And she said to them in the end, you know, this is English, you have to express yourself well, you have to say what you want to get. And that's about the skills of learning to write.
And it was a great moment, I think, in seeing that. So we have the one pole, people saying, I'm terrified because of the election disinformation. At the other pole, we saw people saying, well, now, honest to God, AGI is only months away.
And in the middle, we are... Yeah, I keep seeing that. Why? Why is that? Why is it just video generation? So impressed with what they've made. And fine, it's impressive, but don't overdo it, boys and their toys. So in the middle, you have that English professor who says, this is a cool tool that we can use to learn things with. Then you have people like Gary Marcus, who has been working in AI for decades and is properly skeptical of its powers on a regular basis. So Gary went through some of the videos and noted, for example, the dog going from one window to the next in a way that a dog wouldn't or couldn't. And he says that
this does not show that AI is capable of creating a model of the world and a model of the world is necessary for what they call AGI, artificial general intelligence, which I'm also terribly skeptical about. The idea that AI could just take on something that we would otherwise do because it's a generalist and it can understand the world and do that. This shows that it can't.
It doesn't solve for, as he said, space, time or causality. I saw one of the videos had some puppies playing and then they kind of morph into each other. There'd be three of them or five of them or six of them or two of them. And it made no sense. I saw some other things where, as you said, Jason, the selfie camera was growing out of the dog. These are things which to us as humans don't make any sense. We know that, but the AI has no such reference. It can figure out pixel to pixel.
It can figure out frame to frame, but it can't figure out the larger context. And that, I think, and I agree with Marcus here, militates against this idea that we're almost at AGI. But nonetheless, it's still wonderful. It's still a great tool.
It's still amazing. Why can't these AI boys just settle for that rather than thinking that they're going to take over the world? It's not about power, boys. It's about tools, and tools in human hands.
Yeah, yeah, it is really interesting. I kept seeing the reference to AGI and trying to understand, like, what it is about, you know, video generation that suddenly draws the correlation, you know, suddenly draws that line directly from one to the other.
But yet I kept seeing it come up. Which, you know, possibly to a certain degree stems from just the general impressive quality of what we're seeing here, once again, when we look at how far things have come in a single year. Another thing that I'm reminded of is, you know, a year ago when we were seeing image generation, or even just a couple of years ago before the big explosion of OpenAI and, you know, their LLM, ChatGPT, and everything. It was not very long ago that we were seeing images coming from Google, probably for like a decade, where it was showing off DeepMind, and the imagery had the really funky interpretations that we're seeing in some of these videos. And I feel like modern, like now, image generation systems aren't doing as much of the seven-fingers-on-a-hand thing.
Or, you know, three limbs growing out of the neck for some weird reason that can't really be explained. Yet some of these videos do exhibit some of those behaviors. And, you know, I think it's indicative of the fact that it will not be very long before that is solved. And the people who are concerned about this and drawing the line from this to AGI, I don't think anything tells me that this quiets them down. I think it only perpetuates as these things get better. That becomes more and more the talking point.
I just put Sora and AGI into a Google search and came across a Reddit post in r/OpenAI that is interesting, because it says, yeah, it'll accelerate AGI. I disagree. It also says it will accelerate the metaverse. And as I think about it in these 30 seconds, that makes sense. And I start to maybe see why Zuckerberg invested so heavily in AI and generative AI, because of his metaverse fetish. If you can make the images you see in the fake world all that better, all that more realistic and believable, and use the technology to do that, I kind of get that.
I still don't think people are going to want to strap a television set to their eyes and their forehead for very long, if at all. But I start to understand that connection.
I think you're spot on. I think the two things make a lot of sense together if you jump, and I don't know how far you have to jump forward in time to get to that point. But if we've got systems that can dynamically create these ultra-realistic things in real time in front of us, at some point, I'm sure that's where we get; that destination is somewhere in the future. And this metaverse experience that takes you inside of this as it's generated in real time, that's just mind-blowing.
That's otherworldly and, you know, probably will be a really impressive feat once we get there. And I guarantee you the AGI thing will still be, you know, the major kind of correlation drawn between them, because it's getting better and better and better all the time. And what you mentioned is a perfect kind of segue, you know, between video and AGI and Meta. Apparently, Meta has an AI model that learns from video the way LLMs learn from words.
Personally, I would have assumed that's how systems like Sora are trained already, but apparently not in the case of this system. Instead of mimicking what it sees, it's kind of forcing it to fill in the gaps through some sort of, quote, and I put those in there, quotes, understanding, at least according to what I read here in this article from Fast Company. By the way, V-JEPA is the name of the model: Video Joint Embedding Predictive Architecture.
Fast Company basically says that, you know, in LLM training they often employ a method of masking certain words in an effort to force the model to locate the best words to fill those spaces and kind of, air quotes, learn over time, that sort of thing. This system does that, but with video footage. And so it's not a generative model necessarily. It's more like a conceptual model, where the effort is really to detect and understand, as they say, highly detailed interactions between objects, and ultimately make video models better, I imagine.
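To make that masking idea a little more concrete, here is a minimal sketch of the general technique being described: hide part of the input and train a predictor to fill in the hidden part, in a learned feature space rather than in raw pixels. To be clear, this is not Meta's actual V-JEPA code; the patch count, feature dimension, and the tiny linear encoder are all invented for illustration.

```python
# Toy sketch of masked prediction in feature space (not Meta's V-JEPA code).
# Idea: hide some "patches" of a clip and train a predictor to guess the
# features of the hidden patches from the features of the visible ones.
import torch
import torch.nn as nn

torch.manual_seed(0)
NUM_PATCHES, DIM = 16, 32      # pretend each clip is 16 patches of 32-dim features

encoder = nn.Linear(DIM, DIM)  # stand-in for a real video patch encoder
predictor = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, DIM))
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3
)

for step in range(200):
    clip = torch.randn(NUM_PATCHES, DIM)               # fake "video" patch features
    mask = torch.zeros(NUM_PATCHES, dtype=torch.bool)
    mask[: NUM_PATCHES // 2] = True                    # hide half of the patches

    with torch.no_grad():
        target = encoder(clip[mask])                   # features to be reconstructed

    context = encoder(clip[~mask]).mean(dim=0)         # summary of the visible patches
    pred = predictor(context).expand_as(target)        # guess the hidden features

    loss = nn.functional.mse_loss(pred, target)        # "fill in the gaps" objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The real model uses a far bigger network and more careful machinery, but the fill-in-the-blank training signal is the same shape as the masked-word trick used for text.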
And what's interesting to me is, so this comes out of Yann LeCun's labs. Yes. And LeCun often equates the abilities of AI to the age of a child. And he often talks about how a three-year-old learns language.
We had a story, or we've talked about it, where I think somebody put cameras on kids, a helmet on infants, so they can see what they see and how children learn. So rather than making the model the adult brain, the interesting thing here is that they want to see how, and he makes this point, or they make this point, in their press release: even an infant or a cat can intuit, after knocking several items off a table and observing the results, that what goes up must come down.
You don't need hours of instruction or to read thousands of books to arrive at that result. And so it's an interesting way to look at what LeCun has been arguing: smaller models, different models for learning. And that's what I think this points toward. And beyond that, I will confess that I can't read the paper because I won't understand it.
Yeah, it's definitely out beyond my understanding too. But I was a little fascinated by what the difference I read about insinuates about how these video systems are actually learning to do what they do, and how is this different than the way it's happening already? And I don't know, it seems to all point to these systems just getting better and better the way that LLMs have. I mean, these large language models are not perfect. As a user, they're not perfect. But, dang, they have really improved in the past year. And that will continue, yeah, because of things like this.
Well, I mean, analyzing protein folds is more impressive than any of this.
It's something we couldn't do. What got to us is that it starts to speak in our languages, our text and audio and visual. And so now that's what makes us understand this progress and fear it as well. Yes. Right, because we're not, as humans, we're not comfortable with anything being as human as us if it's not already a human.
I also think it's important as we discussed today how important it is to keep in mind that these are tools. They're not taking over the world. They're not doing anything to us that we don't want done. People do things with them. We talked last week, last show about responsibility and where that lies with the model, with the application, with the user. Where does the power lie? And these tools put more power in people's hands. But I think it's also worth noting one of our stories in the rundown is that it also affects other tool makers.
I was fascinated to see that Adobe's stock took a dive on the announcement of Sora. That's right. I think it was like 7%. It was quite a bit.
That's a big drop. Yeah. And this really touches back on kind of what I was talking about maybe 10 minutes ago as far as tools like Sora being potentially a replacement, at least in some people's eyes, a replacement for things like Adobe Creative Cloud and their stock library and everything. And again, Adobe is going to have its own video generation product. Right?
It's going to happen. And that's going to be part of their Creative Cloud suite at some point. I mean, it's not like I'm talking with knowledge of what the company is doing. I'm just saying like that's, it behooves them to do that because that's where the market is shifting. And so, I don't see these things as immediate. Oh, well, there goes that industry off a cliff.
Never going to need it again. I just see it as another aspect, another angle. I do think that at the end of the day, we as humans, the output of the machine creating something is going to become more and more convincing, more and more of a choice for what we're looking for in the moment. But we will also always have the respect and desire to use the output of humans. And I don't think that goes away, even though these things can do that.
No. And we have one story in the rundown on this, but I'm going to mention I was at an event in Washington on Thursday with the CNTI, the Center for News, Technology and Innovation, which is trying to bring together technology people with media people so they don't all go nut-eyed on it.
And it was under the Chatham House Rule, but I don't think there's any problem with quoting this directly. Gina Chua, who is a top editorial executive at Semafor, a brilliant editor and executive formerly of Reuters, is really making creative, productive uses of AI in collaboration with the machine. And I hope we can get her on the show at some point soon. So one thing she... One of the people who was there praised Semafor for, I think it was the beginning of the Ukraine war, maybe it was Gaza.
I don't know which one. They used AI to make images, making it clear this was made up, but as a way to illustrate the story and video. They're not doing that so much anymore, but it was a creative use. And then Gina talked about using it to be able to categorize huge sets of data.
I think there's just all kinds of ways in which it is a tool to do what we want to do and do more of it. So there's a story in the rundown about an artist who's been working for years to make robot art: British painter Harold Cohen spent over four decades refining his collaborator, an image-generating robot. So it's not generative AI, it's a physical machine. I'm a big fan of the machine that will draw, but it draws what he tries to get it to draw. And I think that there's tons of really interesting work that's going to happen with artists being able to do things that they couldn't necessarily have done before and to see what the impact of the machine is on their work.
Yeah, this is really cool. And to understand and consider some of this imagery: the artist here, he passed away not too long ago, right? The British painter, Harold Cohen. Yeah, you mentioned it. Anyways, he started back in the early 70s to kind of work with computers. And back then, the output was really kind of line art that used this robotic plotter-and-pen kind of system to put it onto paper.
And then he applied his own enhancements with color by hand and everything. And yeah, it's cool. I can't help, when I look at it, thinking of something like MS Paint or whatever, which I think is totally not giving it enough respect. But that's what immediately comes to mind. And I think that's the aesthetic that's kind of endearing about some of this stuff. Because you can really see that the system that created this, this was a long time ago; this was not made in the modern sense of art. And so it encapsulates a certain moment in time of technology while still having that human kind of interaction, that human involvement, in bringing it to the walls of the, where is it on display?
I know it's at the Whitney Museum in New York through May 19. Yeah, right, right.
Yeah, super cool.
I love it. I hadn't heard about this. And I think giving respect to it and making a major gallery display, I watched a panel some months ago where a poet was working with AI to do her work, and she wasn't allowed to submit her work because it wasn't made just by her. And I think it's going to be interesting to watch how we adjust our norms on tools and when we consider the tool in our control, but also as a collaborator.
Yeah, yeah, is there enough human in this tool? Right. And is the requirement of a human's interaction or involvement, like is it truly necessary? In some cases, it won't be.
In some cases, people are going to stick to it and say, no, that has to be part of the soup. So yeah, super cool stuff. I love the art. I would love to see that exhibit just to kind of see that art blown up and understanding kind of the starting points and how long and just being a fan of technology.
It's kind of a confluence of a lot of things that I love about this world. And then you had included in here a couple of things called bad AI. Yeah, bad AI, bad. Starting with Air Canada, this is really interesting, paying compensation to a customer who was misled by its chatbot.
And I have to imagine we're going to see lots of situations that are, you know, similar to this. Essentially, the chatbot informed this customer that he could apply for a refund, quote, within 90 days of the date your ticket was issued, by completing a form, for a bereavement fare. Yeah. Yeah, right. Exactly.
Bereavement fare. And when he did that, Air Canada then told him that their rules state that they could not issue the refund in that case, even though the chatbot said that they could. They said that the rules are on the site, that the chatbot was, quote, responsible for its own actions. That's just bizarre to me. I can't believe that that was the direction that they were coming from. But apparently they did. And Air Canada did admit that the chatbot was using misleading words. I mean, it's your chatbot. You're a business and it's your chatbot.
I'm sorry. It's like it's their employee, as if it were a human employee. So the guy, and he had the chat evidence, he had the receipts, the customer took them to court and won, as well he should have. And as well he should have. $650 Canadian.
The equivalent of what he would have paid if the chatbot had been right. But it's the corporate idiocy of this. It's a Streisand effect, right? It's now the chatbot effect. You allow your technology to do something stupid. You don't take responsibility for it. Guess what's going to happen now? So take that, Air Canada. Not one of my favorite airlines, by the way.
Is it leg room? Because I know you're tall.
It's not just leg room. It's also, so last time I was in Toronto, I was coming back and they were doing the shove your bag in this thing or else you can't take it on.
Oh, yeah. And I was flying United and I got to walk right by. Thank you very much. Oh, yeah. Prove to us that you're following the rules. Yes. Yeah. I think when the chatbot is an agent for a company, I'm sorry to say, Air Canada, but you've kind of got to have a better chatbot. Have a better agent. It's your responsibility.
So I'll let you decide how to express this one, Jason.
Okay. Yeah. So, fair warning on this next one: things get a little anatomical. So if that makes you a little nervous, maybe skip ahead. But the scientific journal Frontiers in Cell and Developmental Biology published research that, I can't believe this, used output imagery from Midjourney. There's a rat diagram, which I'm, like, wary to show, that shows the internal structure, or a structure rather, of rat testes. And when you look at this image, I think I'll just let people go to the thing and see it if they want to see it.
But it's filled with strange, you know, the strange AI-ish gibberish-type words that kind of look like words, but aren't really, as labels for body parts. It has a very enlarged rat penis. Much taller than the rat. It goes off the image.
It's not just large. It's larger than anything. It's such a weird thing to, like, make it through. And, you know, this is research. Research that hits a journal, at least according to what I read here, is reviewed by many people before publication, right? The journal has its own policies and ethics. I'm imagining that should be the case.
But you know more about this than I. Well, so the authors of the paper said that it was generated by Midjourney. Number one, why didn't they question it? Why would they include that in that weird way? Why didn't the journal do something? Or maybe it's actually educational, but I can't imagine that it is when you have an endless penis. So, once again, the moral of the story is: people need to take responsibility for the technology they use.
Yeah, it's just so weird. I don't understand how this happens, but apparently we haven't seen the weirdest of it yet. There will be more, don't you worry.
So I'm looking forward to your demo.
Yeah, so when we were kind of leading up to this, it was like, you know, I've been using Perplexity Pro a little bit, and I did buy the Rabbit R1. I put in my preorder for that probably, like, a month ago, shortly after they announced that Perplexity and Rabbit were working together, so that when we get the Rabbit R1, which is like a little, you know, personal AI assistant piece of hardware, a really unique thing that I'm super curious to get here in a couple of months, Perplexity is going to be part of it.
It's going to be part of the AI model that's running on the device. And they included a year's worth of Perplexity Pro, which is actually surprisingly hard to say three times in a row, with the R1, and the R1 costs $200. So essentially, depending on how you look at it, you bought a piece of hardware and you got a year of Perplexity Pro for free, or you just bought a year's subscription to Perplexity Pro like you were gonna, and they're giving you the Rabbit R1 for free.
So depends on how you want to look at it. But anyway, so I've had, I've been and been using for a little while anyways, Perplexity Pro. And, you know, this is an LLM system. I think what's like I said, the pro for one year costs $200. It integrates Microsoft Co-Pilot, chat GPT four, it also has Claude 2.1 integrated in there. That's just for the LLM functions. And then if you want to do image generation, it can do that too. It has stable diffusion XL, it has Dolly 3 for image generation. And, you know, outside of that, it's very similar to, you know, some of the other ones that you've probably used. You can see, let me see if maybe I can make this just a little bit bigger. So it's a little bit better for video viewers.
Wait, let me ask a double question first. Does perplexity have its own model, or is it a gateway to all of those models?
It's, my understanding is, it seems to be a gateway to all those other models. Having said that, I think if I jumped into settings, which I don't want to do because it shows some personal stuff in there. But it has some language, and I guess I need to read up more about it, but it has some language about how you can expand the capabilities of Perplexity with ChatGPT and with Copilot, which, actually, now that I'm saying that out loud, kind of leads me to believe that they've got their own secret sauce working at the baseline.
And you can choose among those different models.
You don't use them all at once, right? Right. Copilot right now, I see on the screen. So, like, I've been using it in auto mode, which is essentially, you know, it can kind of take my queries and make the best determination on what it thinks is the best direction to go. You can go into settings and tell it, like, I want to just use the GPT-4 model or, you know, any of those things. With the image generation, you can say, just DALL-E 3 for me.
So you can make some of those changes. As you can see, if you're watching the video version, apologies to audio listeners, we'll try and do our best to describe this as we go along. But the main input area does have a little switch, a little blue switch for Copilot, if you want to activate that. And that does some really interesting things.
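As an aside, that "gateway to several models" setup can be pictured as a thin routing layer in front of multiple backends. Here's a rough, hypothetical sketch of that pattern; it is not Perplexity's code or API, and the backend functions and the "auto" heuristic are placeholders invented for the example.

```python
# Hypothetical sketch of a multi-model gateway: one front end, several backends.
# The backends here are stubs, not real API calls, and none of this is Perplexity's code.
from typing import Callable, Dict

def gpt4_backend(prompt: str) -> str:       # placeholder for a GPT-4 call
    return f"[GPT-4 would answer: {prompt!r}]"

def claude_backend(prompt: str) -> str:     # placeholder for a Claude 2.1 call
    return f"[Claude 2.1 would answer: {prompt!r}]"

BACKENDS: Dict[str, Callable[[str], str]] = {
    "gpt-4": gpt4_backend,
    "claude-2.1": claude_backend,
}

def ask(prompt: str, model: str = "auto") -> str:
    """Route a prompt to the chosen backend; 'auto' picks one heuristically."""
    if model == "auto":
        # Trivial stand-in for whatever routing logic a real product might use.
        model = "gpt-4" if len(prompt) > 80 else "claude-2.1"
    return BACKENDS[model](prompt)

print(ask("Does Perplexity have its own model?"))
```

Whether the product also runs its own model underneath that routing layer is exactly the question the demo goes on to ask.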
So I'm going to turn that off. And what do we want to ask? Oh, tell me the difference between, oh, actually, how about this? Tell me whether perplexity has its own AI model behind the scenes, or if it relies entirely on other AI models to do its work?
Okay, so I'm sending this with no Copilot on. There's this focus area. If I pull this open, right now I have it set to All, which searches across the entire internet. And that is one of the foundational kind of cool things about Perplexity: a lot of the other LLMs, they have outdated information. That's key.
That's the key to it. Yeah, this is key. This is really key. Think of Perplexity, in my use, as another place you can go instead of going to Google to do a search, not because you're trying to find a website. But sometimes I go to Google search because I want to find the difference between this and that. Or, for example, I went to Perplexity when I first heard about Sora and I didn't have a whole lot of time to go finding sources and stuff, like I was in the middle of things when I heard about Sora. I went to Perplexity and I said, tell me what you know about Sora.
And you know, this was fresh information. And it was able, in like 10 seconds, to kind of pull some information together. It gives me the sources, the links that it pulls from. And it just gave me a short little summary. So I'd be like, okay, I can move on with my day.
But at least now I kind of have a general sense of what Sora is, and I'm going to look into it later. So it's very up to date. It also taps into Wolfram Alpha. You can narrow it to just published academic papers, or, you know, just standard generation of text without searching the web.
So, kind of just using it as, I was about to say a dumb LLM, but that's pretty funny. Or searching Reddit for discussions and opinions on a certain thing, or even YouTube if you just want to drill down on YouTube content. How did they get access to Reddit? Reddit, there's a story just out. Somebody paid Reddit $60 million. That's interesting. That is interesting. Yes.
So yeah, there must be something going on there, obviously. So I said, tell me whether Perplexity has its own AI model behind the scenes or if it relies entirely on other AI models to do its work. It kind of gives you a list of some of the sources that it used up here.
Like, I can see everything. It pulled back a video on YouTube, How to Use Perplexity AI, so I think it actually has access to the transcript there that it can kind of get some understanding from. A few other websites, as well as Perplexity itself.
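For a rough mental model of what's happening here, the "search the web, then answer with numbered sources" flow can be sketched like this. This is only an assumption about the general pattern, not Perplexity's actual implementation; web_search is a hypothetical placeholder for a real search backend, and the OpenAI chat call is just one way to do the summarization step.

```python
# Rough sketch of retrieval-plus-citation, NOT Perplexity's actual code.
# web_search() is a hypothetical placeholder; swap in any real search API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def web_search(query: str) -> list[dict]:
    """Hypothetical stand-in returning dicts with 'title', 'url', and 'snippet'."""
    raise NotImplementedError("plug a real search backend in here")

def answer_with_citations(question: str) -> str:
    results = web_search(question)
    # Number each source so the model can cite it inline, footnote-style.
    sources = "\n".join(
        f"[{i + 1}] {r['title']} ({r['url']}): {r['snippet']}"
        for i, r in enumerate(results)
    )
    prompt = (
        "Answer the question using only the numbered sources below, "
        "and cite them inline like [1].\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content
```

Those numbered sources are what make the click-through fact-checking a few moments later possible.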
The more things are cited in this AI world, the better.
Absolutely. And as you're reading through here, you kind of get these little callouts that tell you like, here's where this point comes from. We call them footnotes in my world, Jason. Footnotes.
There we go. So I tapped on one that takes me to Discover. But wait, is that actually so? Perplexity AI has its own AI model called Sora? Nope. That's funny. That's really funny. Okay, so that comes later. It says Perplexity AI relies on OpenAI's GPT-3.5 language model. Okay. So the foundation would be 3.5, according to this.
If we can even trust it, because the following sentence is: Perplexity AI has its own AI model called Sora, which is a diffusion model that uses a transformer architecture similar to GPT models to create videos from text instructions. That is hilarious. Okay, you got that wrong, Perplexity. Now, I did not have Copilot on.
I wonder if I had Copilot on, if that would give me a different answer. So let's take a look. So I'm going to go here. I'm going to go ahead and copy my same query.
If this isn't an indication that no AI system is impervious to getting things wrong, I don't know what is. It's talking about itself and it got it completely wrong. So I've switched on Copilot. You can see when I hover over this, 595 left today. So it's not unlimited, but still, that's pretty solid. Now, Copilot is pretty cool because when I submit this, if it does what I think it's going to do... no, it didn't really ask me for any expanded information. Often when I have Copilot set, then it will come back and it'll say, what is your purpose for drilling into this information? Is it this, this, or this?
It'll ask you to refine with some more instructions to help it get its focus. Okay. So now with Copilot, we've got a different set of output. I'm really hoping there's no more Sora. Yes, it's still included in there. Wow. How is it including Sora? Oh, that's just a curiosity.
Can you have it use, specifically, GPT-4?
Yes. So let me see here. Give me one second. I can activate that. I'm just going into my settings and I'm just turning off the video for a second because it does show some other stuff.
So okay, so I have now activated GPT-4. Let's see here. So we'll go home. I'll go ahead and add this back to the stage. Let's see here. Finally, GPT-4 going behind the scenes, at least. So I've fired it off.
It's looking like it's coming back with some pretty similar stuff. 13 sources as well as another two. Okay.
Very limited output. Perplexity relied on other AI models initially, OpenAI's GPT-3.5 and Microsoft Bing, as it was a wrapper of other companies' models. However, Perplexity has since pivoted to open source large language models, including Mistral 7B, and has been fine-tuning them for their specific use cases. So this is definitely a different output. And you can see down here.
Is there a link to the source for this answer?
No. Let's see here. There is, number two: More than an OpenAI wrapper, Perplexity pivots to open source, as of January 12, 2024. And so this is a relatively recent story, right? This was a little more... We just have to check, people. You have to check. I mean, that's just hilarious, that we were talking about Sora earlier.
And you know, we think that the skill people are going to have to learn is prompting the AI. The skill they're really going to have to learn is fact-checking the AI.
Absolutely. 100%. Every time I use these things, I click through. I never take something at face value without doing some sort of a pass. Because in my mind, it's not meant... Again, at least where we're at right now, it's not meant to be a replacement. It's meant to be an inspirational kind of injection of some sort, or a good starting point or whatever. You know, for me, it limits the blank page syndrome. It's like, all right, give me something to start with.
That gets my mind rolling. Like you can see, I'm looking at my library, which is kind of like my past searches. I've created a collection called Vacation Planning. And like we are going, you know, to Italy this summer, we're going to Park City. I said, you're taking a family trip to Park City, Utah. I want to go out for a fun dinner one night with a large group.
Please select the five best rated restaurants for the group. And it gave me some ideas. So when we're there in a couple of days, you know, maybe we'll check out. Yes, that's true. Probably have to call them beforehand to make sure that they're actually...
Don't have a big reservation for you. I don't think it'll be reliable. Yeah. One last question, Jason. Sure. Since I haven't yet ponied up, which I should do, because I'm in a show called AI Inside, I've got to buy into one. I've got to pay up 20 bucks a month for one of these things. Yeah. So if you were me, would you do Perplexity or OpenAI or Google?
Man, it is such a hard question to answer. Part of the reason that I pulled the trigger on the Rabbit R1 was because I was in the same boat as you. I was like, all right, I've got to put my money where my mouth is. If I'm going to be doing an AI show, I need an AI service that I can use on the regular and get really comfortable with.
Not that I wasn't using those services before, but they all have their paid tiers that are an expanded opportunity or expanded feature set and everything. And I knew that I needed to be somewhere. And I really just kind of, like, on a whim was like, all right, let's just go with this one because it's different and everything. I don't know what the right direction is for you, Jeff. I do know that you're very firmly implanted in the Google universe, as am I. Yeah,
I might just do that. Yeah. And it makes a lot of sense to me, even for myself, in light of having this Perplexity subscription, I'm like, at some point, I'm going to actually have to consider, like, do I also do that? Because that would be really helpful to me. And I would love to know
how these things work, yes, when it's a destination that I go to to use it. But I think the real utility for me is going to come from these things being embedded in the things I'm already using. And you get that with the Google approach.
If they don't screw me on my Google account, but that's true.
Very, very true. So anyways, thank you for that. Yeah. And I mean, obviously, not perfect. We ran into a very large speed bump with Perplexity. That's not going to keep me from using it. It's just a reminder that, you know, do take anything that you get back from these things with a grain of salt, because it's not 100%. It is not going to get everything correct. But it is going to get a lot of things pretty close to correct. And sometimes it gets things correct enough to satisfy kind of what your immediate need is. You know, I had to do some comparisons, some technology comparison for a purchase that I was going to make.
I can't remember off the top of my head what it was. But I was like, you know, compare these two things, tell me what people are saying are the pluses and minuses, and it came back with a nice little summary. And I felt like that was good enough for me to make a purchase decision. So yeah. So it's kind of neat. So there you go. Did you pony up for the R1, pony up for the Rabbit?
No, I didn't yet. I wanted to, actually. That's why I'm really happy you did this segment today, to help me decide. Yeah. Yeah.
Well, we'll see. And if you don't get one, I'll certainly be talking about it later this year. I think I get it... I think I'm scheduled to get it, who knows when it comes, but sometime in June. So it's a little further out. But we'll talk about it once that happens.
But anyways, that is Perplexity. And that is this week's episode of AI Inside. Jeff, thank you so much for coming on and doing a little bit of an early recording so that we don't miss a week.
I know what I'll do with my Wednesday. Yeah, exactly. I know what you'll do. I'll do this in Google. Yeah. Yes. And I'll think of you on the slopes. Enjoy.
Excellent. Yes, I absolutely cannot wait. I'm already there in my mind. Let's see here: gutenbergparenthesis.com for all your stuff. Yep. That's fine. Excellent. Everybody go there. As for me, I'm just pointing people to yellowgoldstudios.com right now, which just points to the YouTube channel, which is where you can watch AI Inside.
But the majority of you listen, and that is amazing. We publish every Wednesday. So, you know, you can go to aiinside.show. That's our actual web page that has all the links for subscribing to the podcast. Also, you know, I embed the video of each episode into each episode's page so you can kind of get everything there if you want to. Again, you can support us directly via Patreon. We really appreciate that. Patreon.com/aiinsideshow that helps us continue to do this show each and every week. And I didn't thank earlier our patron of the episode, Chris Huston. I think our second paid patron from last month when we kicked it off.
So thank you so much for your support, and to everyone who supports us each and every month on Patreon. And then you can find us on all the socials. Just look for @AIinsideshow, all one word, and you'll find us. And then finally, contact@aiinside.show. That's where you can send us an email and let us know what you think.
If you have any questions. Yeah, I mean, who knows, maybe we'll do a feedback episode if we get enough questions, enough cool stuff to talk about. This is our clay. We will mold it with your help. Thank you, everybody, for watching and listening. This is such a fun show to do, and I'm having a great time. I'm happy to have Jeff along with me and happy to have you along with us too. We'll see you next time on AI Inside. Bye.