Robert Hallock, VP and GM at Intel, joins us for a deep dive into the rise of AI PCs and why they’re more than just a buzzword. We unpack how new hardware accelerators are making smarter, faster, and more private computing possible, and why local, offline AI is about to become as essential as graphics in tomorrow’s laptops. Robert explains Intel’s ecosystem strategy, the real differences between CPUs, GPUs, and NPUs, and what it will take for AI features to reach everyone, not just creative pros but everyday users.
Support the show on Patreon! http://patreon.com/aiinsideshow
Subscribe to the YouTube channel! https://www.youtube.com/@aiinsideshow
Enjoying the AI Inside podcast? Please rate us ⭐⭐⭐⭐⭐ in your podcatcher of choice!
Note: Time codes subject to change depending on dynamic ad insertion by the distributor.
CHAPTERS:
0:00:00 - Podcast begins
0:01:41 - Defining the AI PC: What Makes It Different and Why Now?
0:03:47 - Architectural Shifts: How AI PCs Differ from Traditional PCs
0:05:29 - Intel’s Role in the AI Ecosystem: Hardware, Software, and Industry Enablement
0:08:20 - Lessons from the Past: The Intel Web Tablet and Driving Industry Change
0:09:32 - Hardware Evolution: What Needs to Change for AI PCs?
0:11:02 - Real-World AI PC Use Cases: Enterprise, Creative, and Consumer Adoption Waves
0:13:51 - Local vs. Cloud AI: Privacy, Personalization, and the Value of On-Device AI
0:16:50 - Trust and Branding: The Meaning of “AI Inside” for Intel
0:19:26 - Accessibility and User Personas: Who Benefits from AI PCs Today?
0:22:30 - The Graphics-AI Connection: Why GPUs Became Essential for AI Workloads
0:25:10 - The Evolution of GPUs: From Graphics to AI Powerhouses
0:26:56 - Gaming’s Role in Driving AI Adoption
0:28:00 - Historical Tech Drivers: Media, Typography, and Early AI Tools
0:29:37 - The Local AI Movement: Are We at an Inflection Point?
0:30:44 - AI Hardware Breakdown: CPUs, GPUs, and NPUs Explained
0:33:49 - Internal Challenges: Education and Customer Awareness at Intel
0:36:06 - Robert Hallock’s Role at Intel and Closing Thoughts
0:37:17 - Thank you to Robert Hallock and episode wrap-up
Learn more about your ad choices. Visit megaphone.fm/adchoices
[00:00:00] Robert Hallock of Intel reveals why AI PCs are more than just a buzzword, how new hardware accelerators are empowering smarter, faster, and more private computing, and what it will take for AI features to become as essential as graphics in tomorrow's laptops. We're going to discuss Intel's ecosystem strategy, the real differences between CPUs, GPUs, and NPUs, and the long road to mainstream AI adoption. That's all coming up next on the AI Inside podcast.
[00:00:29] This is AI Inside, Episode 70, recorded April 30th for Saturday, May 17th, 2025. Intel's Robert Hallock on the Rise of AI PCs. This episode of AI Inside is made possible by our wonderful patrons at patreon.com slash AI Inside Show. If you like what you hear, head on over and support us directly, and thank you for making independent podcasting possible.
[00:00:56] Welcome to another episode of AI Inside. I'm one of your hosts, Jason Howell. Jeff Jarvis is here for the interview anyways. He'll be here in a moment. Before we get started, real quick, it's a cool interview that we have coming up. But before we get there, patrons are awesome. I don't say it enough. I probably need to say it more. Patreon.com slash AI Inside Show. Rick Schrowers is one of our amazing patrons. Thank you so much for your support. It really does enable the health of this show going forward.
[00:01:24] Today's topic is AI PCs. What else? What the heck are they for? Why do we need them? What will it mean when we all have them? Well, Intel's Robert Hallock is our guest, and he's going to help us cover all this ground and a whole lot more. Let's jump right to it right now. All right. Thrilled to welcome to the show, Robert Hallock, who's VP and GM of the Enthusiast Channels segment at Intel. Robert, it's a pleasure to meet you and have you on AI Inside today. Thank you.
[00:01:51] Thank you for making the time. Appreciate the opportunity. What's on your mind today? Yeah, yeah. Well, as you well know, AI is really shaping how we work, how we create, how we compute probably more than anything.
[00:02:07] And Intel really seems to be seeing AI PCs as a major inflection point in bringing kind of what we've in the last couple of years seen out there in the cloud and on these massive cloud-based servers and everything more into our everyday devices. And that's what I think we're really kind of interested in talking about today is this kind of movement to bringing these AI models onto our machine.
[00:02:34] So let's start with the AI PC, I guess, at its foundation on a fundamental level. Maybe give us an overview of what Intel means when the company talks about an AI PC, what it is, and why right now really matters when we're talking about a shift to AI PCs. Yeah, so I think there's two things that people need to know to kind of answer that question.
[00:02:59] The first thing is when we say AI PC, what we mean is a computer that does AI stuff offline. No internet connection, right? It's much more personal, much more private. It's your stuff on your computer. Okay, so that's one. The other thing that's important to know is that running that AI stuff doesn't strictly require special hardware, but it's much faster if you have it.
[00:03:24] And so when Intel says AI PC, we're talking about these offline applications using this local accelerating hardware. And those PCs may be 20 or 30 times faster at running an AI-based workload. Okay, so I guess maybe we could talk a little bit about the architectural differences and what makes them different and unique. Because often when I think of like an AI PC in my mind, I'm like, okay, well, it can do all these things.
[00:03:53] But it's also kind of a normal PC as well, right? Like it's not just specific to AI. It's just far more capable of handling AI workloads than a standard PC. What makes it so different from an architectural perspective? Well, I think it's important to acknowledge and accept that AI PC is a buzzword, right? And it's a buzzword because it helps us communicate that something is new and different here.
[00:04:20] But if you peel back the layers, what Intel and all the other industry players are really saying here is that we all believe and Intel believes that AI is going to be a huge part of what it is to use your computer on a daily basis. And when each individual person encounters that reality, that's going to depend on how you use your computer and what apps you use. But this is like inevitable. Everybody is going to have this, right?
[00:04:49] And so we're trying to set the foundation now because when this reaches the full maturation, maybe 2027, 2028, everybody collectively needs a really robust, stable foundation for all of this stuff to live on. And so you have to start that work three to five years in advance. And that's kind of where we are now in the journey, two to three more years ahead of this before it's like mainstream and everywhere.
[00:05:17] But, you know, at the end of the day, it's the next way that companies will deliver performance, battery life, energy efficiency, like all the things we care about as fundamentals on a PC. AI is just the next one. What's Intel's role going to be in that stack? Obviously, the chip and the fundamental hardware. As NVIDIA has CUDA and has software, does Intel play in that? And then does Intel also play potentially, as it used to, and I'll show you something in a minute,
[00:05:49] in branded Intel hardware? Where does the plan likely take you along that? And not just PCs, by the way, but also phones and other things. Sure, sure. So I can speak for the PC piece only; I can't talk about phones or tablets. I don't think Intel has plans there. But when we talk about where Intel is involved and what we're helping with... I'm going to say three things now, and I may correct myself as I talk. But I'm going to say there's three components.
[00:06:19] There's this huge industry enabling piece, which is really invisible to most people. Running an AI piece of software is a very complicated software stack. It actually resembles 3D graphics more than not. And as we all know, that requires drivers and frameworks and APIs and runtimes and libraries. And AI is totally the same. And so that means AI is a complex place with a lot of middleware.
[00:06:49] And so that's thing one that Intel is working on: making sure that all these frameworks and runtimes that people might want to grab off the shelf, in addition to our own OpenVINO framework, all run on Intel hardware, on all the engines capable of running AI. So that's thing one. Thing two is we have this massive network of ISVs. Intel has, if nothing else, amazing inter-business relationships.
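To make "run on all the engines" concrete, here is a minimal sketch of what targeting those engines looks like through OpenVINO's public Python API. The model file name is a hypothetical placeholder, and actual device availability depends on the hardware and drivers:

```python
# A minimal sketch, assuming OpenVINO is installed (pip install openvino)
# and that "model.xml" is a placeholder for a model already converted to
# OpenVINO's IR format.
import openvino as ov

core = ov.Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU'] on a recent AI PC

model = core.read_model("model.xml")         # hypothetical model file
compiled = core.compile_model(model, "NPU")  # or "CPU" / "GPU"
# `compiled` now runs inference entirely on-device, offline, which is the
# local, private AI being described above.
```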
[00:07:19] And we collect an enormous amount of information from them on where their software is headed, what they want to do with it, what engines it's going to use, what APIs, frameworks. And so we're using that to steer our roadmap. So engaging with ISVs to help them build a feature, market a feature, get it to market, make sure it's optimized and fast. That's thing two that we do.
[00:07:41] And the third thing is making sure that these AI experiences are communicated broadly to people. And that could be end users. It could be yourselves. That's why we're talking today. It could be to retail partners, distributors. But there's a massive amount of education that has to happen. And that too is only partially visible to the extent that you can run into it in a podcast or a web page.
[00:08:09] But those are the big three things that Intel is working on. So it's one part software, one part education, one part ecosystem building. And we view the hardware as, like, baseline table stakes. So as a prop, I went down to the basement right before we got on. And I worked on this in the 90s. Something called the Intel web tablet. It never came to market. I have a very rare version of this. Oh, wow. It never came to market. No.
[00:08:39] Look at that. This was the actual tablet. And I was working for Advance Publications, and we were a media partner in this. So the idea was you could sit on your couch instead of having to sit at the huge computer that you had, because laptops weren't really big yet, and you could surf from your couch within 150 feet. Whoa. It didn't use Wi-Fi. It didn't use Bluetooth. It had an Intel proprietary thing. It was pretty cool to work on. And it didn't come to market. But then we have, you know, foldable devices, right? Which are awesome and scratch the same itch.
[00:09:08] But, you know, that proves that this need to have computing everywhere kind of has always been there in just different form factors. So Intel's role at the time was interesting, because part of what I hear you doing is also helping to drive the industry. Absolutely. In this case, this was more than a prototype, but that's basically what it was, to say, this idea of a tablet could be a thing long before Apple said that.
[00:09:34] And so what kind of working with hardware manufacturers then, to get them ready for this change, to get them to change, where does Intel play in that kind of long-term development? From a hardware perspective, actually, somewhat blissfully, not a lot has to change at sort of the system level. We have some work to do, obviously, on the CPUs.
[00:09:56] But from a system level, actually, AI models are getting – we've kind of reached this point about two, three years on in the life of local AI where things are actually starting to get smaller. There was a period of time where I thought maybe these systems would need maybe more memory, more memory bandwidth. But we've seen development of LLMs that are much smaller, much more capable on modest hardware. Same thing for image generation.
[00:10:25] That's getting better all the time. Text in general doesn't take a whole lot of compute power. So these days, I tend to believe the amount of performance we're throwing at the AI problem, the amount of performance required, has started to level off a bit. But it'll continue to grow in perpetuity, like graphics does.
[00:10:49] And so all of that leads me to the place where I don't think, outside of a wise CPU choice, you need to worry too much about the rest of the hardware and the computer anymore. Although I wouldn't have had that same answer a year ago. So you're talking a little bit about what the AI PCs can be used for, obviously language, image generation. And I guess sometimes when I think of these machines, like I realize there's a lot of open models out there.
[00:11:18] We were just talking on the news podcast version of what Meta is doing with Llama. And it comes up time and time again, this idea that we can take these models and we can put them on our machines and everything. And I guess where I'm coming from is a standard traditional PC. You know, I've got a Mac Studio that I operate my podcasts off of. And I'm kind of curious, like I can do some of that on my machine. And then there is the AI PC, the PC that is really driven entirely to handle this.
[00:11:48] And I'm curious if from a tangible perspective, what are the differences for, say, a consumer or an individual user that's interested in going into an AI PC? Like why would they choose to do that when they have a machine that can do some of this model work? What exactly are they getting out of an AI PC that they couldn't get out of their machine specifically? If you have an example of that. Yeah. Okay. So I think this is going to come in waves.
[00:12:12] And already like the enterprise or commercial space is really interested in AI and actively adopting it, adopting it very aggressively. Because a lot of the tools to manage thousands of endpoints, collect intelligence on them, maybe do proactive maintenance, virus and threat protection. All of these are transitioning to AI and they're getting a lot faster and a lot smarter, which is cool.
[00:12:42] So enterprise is like well down the path. Creative is getting there. A lot of the AI models in the market today focus on creative output of some kind. You know, object recognition, categorizing something to remove it from a picture or detecting an object to add it to a picture. You know, these are the kind of multimedia modifying models that are in the market today.
[00:13:07] The last wave, like the big broad adoption wave, is actually going to be somewhat quiet, which is when it tips over to things like operating systems or office tools or web browsers having AI built in. And these may not even be actively advertised as AI. They're just using an AI model because it is the most performant and energy efficient way to achieve the thing that they're looking to do.
[00:13:36] And that will probably be without any big pomp and circumstance. There will be this moment where we all go, damn, actually, AI is here. My grandma is using it. My mom is using it. And we're not there yet, but we will be. So you made a point of saying that you're not involved in phones or anything else. But we talk on the show about this notion of AI-specific hardware. And Jason has a Rabbit, which is sitting on a shelf gathering dust. Somewhere.
[00:14:06] And there's the Humane Pin. And there's AI in phones. And I think at a generic level, I'd like to hear a little bit about what will matter locally. And you said at the very beginning that it's about being disconnected from the internet. But that's just a way to say it's local. It doesn't mean it has to be disconnected, right? There will be things that... is it privacy that will matter most? Is it personalization that will matter most?
[00:14:31] What are the things that make localized AI so powerful? That's the first part of the question. Second part of the question: do you think, not at Intel but anywhere, there may be development of AI-specific gadgets of various sorts in the future? Or was that kind of just a blip? So I'll take the second one first.
[00:14:58] Because from an enthusiast CPU perspective, like the market that I help run at Intel, AI will just be an ingredient of the part. We have no plans to sell a standalone AI whiz-bang into the market because there's not a huge need for it. We do believe that the accelerators built into the CPU are going to be able to comfortably handle what's happening in the software roadmap. Now, your first question – refresh my memory, Jeff.
[00:15:27] What was it? It was the benefits of AI being localized. Yeah. I mean, privacy and security is the big one. When I ask people what they think about AI, that is usually the first one that comes up. Because you really don't know what happens when your data goes in the cloud. And there's the old saying, of course, that if the service is free, you are the product. And you can sidestep all of that with this offline AI thing, right?
[00:15:58] And so if you're actively, consciously privacy-conscious, this is an awesome development. Because now you have these really cool tools available to you, and you don't need to put your stuff on the internet to use them. So I think that's a big and obvious one. But for us and for our direct business partners, the major benefit is probably what we call performance-power-area, or PPA.
[00:16:25] Like how much performance can you give to the customer for a certain size of CPU? And that directly reflects your engineering prowess, your ability to meet certain price points, your customer satisfaction in the market. And so if we get the hardware right, that lines us up really nicely with what people are looking for in the market.
[00:16:50] So if both the hardware manufacturers and software and model makers and everybody is going to try to tout these benefits, it makes me wonder whether we see a revision of the famous phrase Intel Inside to become Intel AI Inside. Do you start emphasizing that because it's inside, you get all these benefits, and it's Intel and you trust Intel? I mean, that's kind of the way it's actually worked out in literal objective reality.
[00:17:20] So if we look at the entire stack of software that could be available to a user or developer, right? You first have to support the framework both for the developer who wants to use it and for the user who's running that app. You, the processor vendor, have to support that. So we support more than any other company. In fact, we support more frameworks for AI offline than AMD and Qualcomm combined.
[00:17:47] We have over 400, I think 450, AI features up and running, which is, again, more than Qualcomm and AMD combined. And you can say the same thing for runtimes, middleware, libraries, tools, quantizers; at every step of the chain, we have more of it, because building ecosystems like that is a key objective for us.
[00:18:09] And so when I look at the landscape of what's going on in the AI market and you talked about trust, if you're going to make an AI purchase, obviously you want to trust that thing because you're going to hold it for three to five years. Most people do. And I think Intel's the safe bet. I mean, yeah, I'm biased.
[00:18:29] Clearly I work here, but if you go look at the numbers of what's available in the market and how it performs and what options you have in AI software, Intel is overwhelmingly leading. Overwhelmingly. And so I think that does confer a certain amount of trust and I hope people see it, but that is a never ending job and we'll keep working at it. That's true. Yeah, it's never quite over. Jeff, you mentioned AI inside of Intel.
[00:18:57] And it just reminded me when I was at Mobile World Congress in Barcelona a couple of months ago. There it is. I passed the AI booth. I don't know. Were you there for Mobile World Congress? No, that was a commercial launch, which was my counterpart. Yeah, yeah. But anyways, I passed this by. And of course, the podcast is called AI Inside, so the second I saw that... We're suing for trademark violation. We're coming after you. All right. Okay. I'll let my lawyers know. Immediately, I felt validated. I did.
[00:19:26] Anyways, earlier a thought came to mind around what you were talking about with the three prongs of where you're going with the AI PC, the last one being the features themselves, with AI as a terminology stepping out of the way, and in its place being this thing just does useful things, or this is just a useful feature,
[00:19:53] and AI becoming a little less necessary in that conversation. And it has me wondering, from an accessibility standpoint, when you're talking about AI PCs, like there are a lot of people that are more casual AI users, let's say, or they're learning. They're early in the stages of understanding what this technology is valuable for and beneficial for.
[00:20:15] And they might not be, you know, they might not be the absolute kind of avatar of the customer who might purchase an AI PC. Or am I wrong? Like, are these the kinds of things that could appeal to the general kind of new to AI still getting an understanding? Like, is that power, is that capability lost on a person in that perspective?
[00:20:41] It gets into user personas, which is this kind of inside baseball of marketing land at every company. It's like, what kind of user would this appeal to? Right. And it is helpful to think about it this way because it improves your ability to reach those people. So it's like a constructive two-way conversation, but we often don't talk about it. That said... You just create the tools.
[00:21:10] And expect that that's going to find the people who find use out of it. Yeah. And for that matter, I don't think the average user demographic, the one who's just paying bills and sending emails... I still think they're two to three years away. Right. And that's okay. Right. Like this is a technology journey and it's going to take a long time to get there. You know, I actually think a lot about graphics. Yeah.
[00:21:40] If we think back, guys, graphics inside a processor was once ridiculed. Ridiculed, right? Like, what can this do? Why is this here? You can't play a game on it. Waste of space. Why am I paying for this? Blah, blah, blah. And now it is an essential part of having a modern CPU. You can't run the Windows interface or a web browser without it. Right. You've got to have it.
[00:22:04] And I think that's probably more like where AI will land: as this revolutionary but quiet addition to the processor that is never going to go away, and we'll be able to look back and see this fork in the road. But here and now, you know, John Q. Public looking for a $500 laptop? AI is probably not going to grab their attention, and that's okay. Because it's going to come in waves. Yeah.
[00:22:32] You mentioned graphics a couple of times, and I'm going to ask a really stupid question, but one that keeps haunting me that I can't get past. I think you may be able to explain it. Okay. It's whether the connection of AI chips and graphics was purposeful or accidental. By that I mean, it's kind of a chicken-and-egg question. I never really fully understood how, to mention the competition, NVIDIA ended up in this position because it was making graphics chips. It was making graphics stuff.
[00:23:02] But the way you just explained it to me starts to make some sense, and I wonder if you can make that connection for me of what tied graphics capabilities in the hardware to AI. Why did that marriage happen, and not something else? Right. So, you know, as an indulgence for the audience, I suppose, if we think back to elementary math, right, we had matrices.
[00:23:30] And, you know, they were simple at the time, two by two, right, a little matrix. But we blow that up into the doctoral-thesis version, and that becomes AI, artificial intelligence, which runs on matrix math. And what do I mean when I say matrix math? It means that we're assigning meaning to words. Like if I say the word chip, right, is that a potato chip, a microchip, a wood chip? Who knows?
[00:23:59] And we use these number matrices to categorize these words in the context of the sentence they're in. Okay. So this is all matrix math, a categorizing, predictable workload. GPUs love that stuff. That is exactly what a graphics card is good at. Why? Why is a graphics card good at that? What made the graphics job require that? That's an awesome question. Okay.
[00:24:26] It's because matrix math problems, just like any other big math problem, can be broken down into multiple constituent steps, and GPUs are really wide. Like they can process a ton of information in parallel, at the same time. So that makes the GPU uniquely suited for these matrix math problems that are both big and easily spread across the hardware, versus a CPU that might have, I don't know, 24 or 32 threads at most, compared to hundreds or thousands in a GPU.
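To put a toy version of that in code: the sketch below uses random stand-in vectors, since a real model learns its embeddings, but it shows how comparing the word "chip" against candidate meanings reduces to one matrix multiply, exactly the kind of wide, parallel arithmetic a GPU is built for:

```python
# A toy illustration of the "matrix math" described above, using NumPy.
# The embeddings are random placeholders, so the scores themselves are
# meaningless; a trained model would learn vectors where related words
# score high. The point is the shape of the computation.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["chip", "potato", "silicon", "wood"]
embed = {word: rng.standard_normal(64) for word in vocab}

# Comparing "chip" against every candidate context at once is a single
# matrix-vector multiply: many independent multiply-accumulate steps
# that a wide GPU can execute in parallel.
context = np.stack([embed[w] for w in ["potato", "silicon", "wood"]])
scores = context @ embed["chip"]
print(dict(zip(["potato", "silicon", "wood"], scores.round(2))))
```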
[00:24:49] Professor, this is really helping me at last. I've been wondering about this question time and time again. One more tie onto it; it's kind of a chicken-and-egg again.
[00:25:15] When graphics processing existed, did the people who made graphics processors and used them think, these are graphics processors? Or did they think, these are matrix processors and they're going to have many uses; people think they're graphics processors, but we don't think of them that way? Or did they say, we made graphics processors, and then, wow, they're also useful for this new stuff? Which came first, chicken or egg, in that case? I think it's the classic story of invention: it's actually genius by baby steps.
[00:25:43] First we had, you know, a GPU with a fixed pipeline that could only do a certain set of things. Cool. And then we go to a programmable pipeline, still only for graphics, but now you can make the GPU do things that you can imagine instead of picking from a list. Cool. And then we go into the async shader era, where compute on the GPU starts to pick up speed because it's being used in games. There's a lot of work in academia going on at this time.
[00:26:12] This is circa 2010-ish. And academia was using, you know, some combination of whatever PC was lying around with a GPU inside of it, some combination of server work, some combination of local PC work. But there was this aha moment in academia that we could use the GPU for
[00:26:39] more, but it wasn't like we went from zero to a hundred instantly. There was just a lot that happened along the way in gaming to finally unlock all this other stuff. And so it was gaming, then data center, and then AI, really. That's really helpful, honestly, because as Jason knows, I've asked this question a dozen times. And what's really fascinating is, who would have predicted way back when that gaming would be the bridge to this incredible
[00:27:08] world-changing next step in technology. That's really helpful. Thank you. You know, I find gaming often drives the world-changing technologies in PC. You know, they've said computers have died many times over the last 50 years, and the PC never seems to be dead. It always seems to be picking up new capabilities. And most importantly, gaming seems to be driving a lot of what people think about a PC or write about a PC.
[00:27:37] And I still think that's true. And I think a large part of what will take AI to the next level is a credible, impressive implementation in a game. That one moment could unlock the next huge wave of people who go, you know what, actually this AI thing is useful for me. I see the value now. So it's all related. I hope this won't offend you, but in media, we tend to acknowledge that oftentimes porn is ahead of us when it comes to business.
[00:28:07] And also, I'm writing about early typographical changes in a next book. And there was a big leap from lead letters, to photographically shot letters, to using raster processing, realizing that this technology was at hand from television: oh, gee, we could use it to draw letters onto photographic paper.
[00:28:33] And then that led to a huge revolution in media as well. So I think it does come down to what technology is at hand for an idea you have: I can use that, I can adapt it, I can kluge it into this other use. And I will say, Intel has actually, we've had production-ready AI tools and workflows in server since like 2016. That was 10 years ago, right? And we've had our AI framework even earlier than that.
[00:29:03] And our first consumer CPU to ship with any AI extension at all, I think, was 2018 for consumer. That was a flavor of the Skylake product. So we've been doing this a while, but the big innovations have been in the models, being able to squeeze a ton more performance out of them. And that has happened a couple of times in the past couple of years, but that's
[00:29:29] been the big thing that took this from kind of a cloud pet project to, hey, maybe we can do this on a computer. Right. And there's a ton of news about that happening right now. Actually, just this morning I saw Mark Zuckerberg and Satya Nadella at LlamaCon talking, you know, largely about how these gigantic models are fitting pretty comfortably on laptops. And of course, PCs and AI PCs now.
[00:29:53] And I'm sure Intel sees that local AI movement as the next, like, is that the next major step for AI from your perspective? Let's talk about something we don't talk about enough: what happens to all the data we share with AI platforms like ChatGPT or Claude? Every question we ask, every idea we brainstorm, it's all being collected and tied back to us as individuals. But then what? Does it get sold to advertisers, corporations, maybe even governments?
[00:30:22] We've also grown accustomed to social media companies selling our data over the last decade. And I'd like to think that maybe we've learned a thing or two, so we don't make the same mistakes again. That's why I've been using Venice.ai who's sponsoring today's episode. Venice.ai is private and permissionless using leading open source models for text code and image generation, and it's all running directly in your browser. So there's no downloads, no installs.
[00:30:49] In fact, your chats and history live entirely inside your browser. They don't even get stored on Venice's servers. Their pro plan is where things get really interesting though. You can upload PDFs to get insights and summaries. You get a user controllable safe mode for deactivating restrictions on image generation. You can customize how the AI interacts by modifying its system prompt directly. And finally, you get unlimited text queries along with high image limits that I couldn't even hit if I tried.
[00:31:18] We talk often on the podcast about the benefits of open source AI, and that's exactly what Venice.ai is using. If you care about privacy like I do, or you just want an uncensored and truly open AI experience, Venice.ai is worth checking out. Go to my sponsor link, Venice.ai slash AI inside. Make sure to use the code AI inside to enjoy private uncensored AI. Use my code and you'll get 20% off a pro plan.
[00:31:46] That's Venice.ai slash AI inside with code AI inside for 20% off the pro plan. And we thank Venice.ai for sponsoring the AI inside podcast. While single agents can handle specific tasks, the real power comes when specialized agents collaborate to solve complex problems. But there's an important fundamental gap there. We have no standardized infrastructure for these agents to discover, communicate with,
[00:32:15] and ultimately work alongside each other. That's where Agency, A-G-N-T-C-Y, comes in. The Agency is an open source collective building the Internet of Agents, a global collaboration layer where AI agents can work together. It'll connect systems across vendors and frameworks, solving the biggest problems of discovery, interoperability, and scalability for enterprises.
[00:32:41] With contributors like Cisco, Crew AI, Langchain, and MongoDB, Agency is breaking down silos and building the future of interoperable AI. Shape the future of enterprise innovation. Visit agency.org to explore use cases now. That's A-G-N-T-C-Y dot O-R-G. And we thank them for their support of the AI Inside podcast. This episode of the AI Inside podcast is sponsored by BetterHelp.
[00:33:10] I've noticed a big shift in recent years towards taking mental health seriously. And I welcome that change because I recognize firsthand the benefits of taking care of my own mental health. Therapy can be a transformative experience. And it definitely has been for me. But no question, it can be pricey. Traditional in-person therapy can run anywhere between $100 to $250 per session. And that adds up. And it really should not stand in the way of getting the help that's needed when it counts.
[00:33:40] BetterHelp is online therapy that can save you on average up to 50% per session. With BetterHelp, you pay a flat fee for each weekly session. And that adds up to big cost savings over time. And not only that, BetterHelp is much easier to access than traditional therapy because it's an online experience that meets you where you are at with quality care from more than 30,000 therapists at a price that makes sense. You just click a button to join.
[00:34:08] Your therapist is there from wherever you happen to be. You can get support with anything from anxiety to relationships to everyday stress. And if you just aren't feeling it with your current therapist, you can easily switch to another at any time. It's mental health within reach. And it's totally worth it. I know firsthand I used BetterHelp a few years ago myself. It was incredibly convenient and more importantly, impactful to my life. I felt heard and supported. And that's what I really needed.
[00:34:38] Your well-being is worth it. Visit BetterHelp.com slash AI Inside today to get 10% off your first month. That's BetterHelp, H-E-L-P dot com slash AI Inside. And we thank BetterHelp for their support of the AI Inside podcast. I think the next major AI wave is probably going to come in a couple of steps, right?
[00:35:07] Security and enterprise: looking pretty good. Creative: has some headroom, looking pretty good. General office, productivity, entry: getting there, room to go. And then gaming: probably about two years out. And then widespread acceptance: maybe 2028-ish. But we think that about 70 or 80% of the computers by that time will have actual AI accelerators inside, not just the ability to launch the executable. Right.
[00:35:37] And that should help. Okay. And when you say that compute inside, we were just talking at length about GPUs and how efficient and well fit they are for the task. What about neural processing units? How does that tie into these devices as well? Is it a mixture of both? Is one better than the other? This is just always a question that I've had. Yeah.
[00:36:03] So earlier in the show, we talked about that review board of software developers that we talk to. And there's about 100 software developers that sit on that board. And it's just a roadmap-sharing effort. We need to figure out how to build our parts relative to what they're going to do. And so I don't have the numbers immediately handy, so I'm going to round them off.
[00:36:25] But roughly, I want to say roughly 30% of the workloads coming in 2025, 26, roughly 30% of those will go to the NPU, 40% to the GPU, and the remainder to the CPU. So while this NPU thing is new, it's still taking the minority of the workloads. Okay, now why? Why? Right?
[00:36:48] The GPU is incredibly fast; for getting something done as quickly as possible, the GPU is a great choice. And so that's why content creation companies are 100% on the GPU. But what about the other stuff? Stuff like camera effects, or a text spell-checking assistant, or real-time translation? Stuff that's kind of always on, but could use a lot of power for being on.
[00:37:17] That stuff goes to the NPU. So NPUs don't actually need to be all that capable and powerful because they're designed to handle stuff that would just like drain battery life or waste power rather than the high-performance stuff that you would send to a CPU or GPU. That's kind of the breakdown. Now, what is an NPU? It is an accelerator that only handles matrix math. GPUs are more general-purpose.
[00:37:46] That's the GP, general-purpose unit. Saying that for the audience, I know you guys know. But that general-purposeness increases power draw. Right? So you get speed, but you lose efficiency. And so the NPU gives up speed to gain efficiency. And it allows the processor to sidestep these weird cases where you might not have exactly the right accelerator for the job that's happening.
[00:38:13] So our view is you really need all three of these, CPU, NPU, and GPU, all with AI extensions because that's what software makers are going to do. And I don't know anything more motivating than that when your partners tell you, this is what we're doing, so you better get on board. And that's why we have all three in our CPU. Yeah, working in tandem. Fascinating. Thank you for that. That clarifies a lot because I've had that question come up for me quite often.
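Sketching that division of labor in code, here is a minimal example using OpenVINO's public device names. The model files and the workload-to-engine mapping below are illustrative assumptions, not Intel's actual scheduling policy:

```python
# A minimal sketch of routing workloads to different engines, assuming
# OpenVINO is installed and the model files are hypothetical pre-converted
# placeholders. The mapping mirrors the rough split described above.
import openvino as ov

core = ov.Core()

# Always-on, efficiency-sensitive work (camera effects, live translation,
# spell checking) goes to the NPU so it doesn't drain the battery.
always_on = core.compile_model(core.read_model("translate.xml"), "NPU")

# Bursty, latency-sensitive work (content creation) goes to the GPU,
# which trades efficiency for raw parallel speed.
burst = core.compile_model(core.read_model("upscale.xml"), "GPU")

# Everything else can fall back to the CPU, or OpenVINO's "AUTO" plugin
# can pick an available engine on the user's behalf.
general = core.compile_model(core.read_model("classify.xml"), "AUTO")
```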
[00:38:43] What do you see as the biggest internal challenge that Intel faces today when it comes to AI? I mean, for better or worse, I see the conversation sometimes come up, like, what's Intel doing? I think the same is pointed at Google, versus some of the other players like NVIDIA that really steal a lot of the oxygen in the air.
[00:39:08] And I'm kind of curious to know, from the company perspective, what are some of those challenges that Intel faces right now when it comes to the AI efforts? Customer, just like customer awareness is the big one. Education, right? Anytime you have this major new technology, even if it's industry-wide, there's still a ton of education that needs to happen. And it's really slow. It's so slow, right?
[00:39:34] I think of some regions of the world where there might be 25,000 of them. I'll just take the PRC as an example: the PRC has thousands of internet-connected iCafes, and many people in China go play games exclusively in these iCafes. Okay, so let's say AI finally takes off in the gaming market, and these iCafes make a big upgrade to take on this hardware.
[00:40:03] So upgrading all these systems, that's a big effort. But still, are the people who are coming to play, do they know about it? Okay. If no, and the answer is probably no, then you have to send people or training to every one of these thousands of iCafes to do that training. Now you have to repeat that for Best Buy and Dixon's and every other major retailer on the planet.
[00:40:33] It's a huge job. And you don't always have the right software to communicate with every single person because this is a multi-year effort. So, you know, it's reach that is the problem. We're having no challenges getting software up and running. We are leading the industry in that regard. Performance is great, super robust. I'm extremely happy with it. But you need to teach people about it, and that takes time. That's the big one.
[00:41:00] Well, as you've demonstrated in this brief time, you are really good at educating because I've learned a lot from this. And I wonder just one last question is you have a new job title. I do. So describe what your job is now at Intel. Yeah. So I run what we call the channel business, which most people would know as enthusiast boxed CPUs like you can buy from Newegg or Amazon. But it's a little bit wider than that.
[00:41:28] Systems sold like an Asus NUC, systems that look like that; that flows through the channel. Notebooks you can buy directly; you know, that flows through the channel. So my team and I, we oversee that. Well, thank you so very much. You really have been very helpful to me and, I'm sure, the audience. And to me, yes. And, you know, this show is really about educating a lot. Both Jeff and I say time and time again, we're learning through the show.
[00:41:56] We are not inherently, like, experts in AI. But every time we do an episode and we talk to experts like yourself, we learn even more. Robert, thank you so much for being with us today. It's my absolute pleasure, guys. Thank you. These were awesome questions, and I appreciate your time. Thanks again to our guest, Robert Hallock from Intel. It was a wonderful conversation. Also, big thanks to Jeff Jarvis, jeffjarvis.com if you want to check out his books. Everything you need to know about the show can be found at our site, aiinside.show.
[00:42:24] And if you happen to love this show a lot, if you leave us a review or a star rating, wherever you get your podcasts, it really does help. And finally, if you want to help us on a deeper level, support us on Patreon. That's patreon.com slash AIinsideshow. Get ad-free shows, Discord community access. Get an AIinside t-shirt if you become an executive producer. And it must be popular because there's a lot of them. Dr. Dew, Jeffrey Maricini, WPBM 103.7 in Asheville, North Carolina.
[00:42:50] Dante St. James, Bono Derrick, Jason Neifer, Jason Brady, and Anthony Downs, our most recent member at the executive producer level. So thank you all so very much. Thank you for watching. We'll see you next Wednesday on a news episode of AI Inside. Bye, everybody.