Jason Howell and Joe Braidwood, co-founder and CEO of Yara AI, explore the creation of Yara, the challenges in current mental health services, AI's potential in making mental wellness accessible, ethical AI design, and the importance of safety in AI healthcare applications.
🔔 Support the show on Patreon! http://patreon.com/aiinsideshow
Yara AI
0:03:00 - Introduction of guest Joe Braidwood, co-founder and CEO of Yara AI
0:05:14 - Exploration of the connection between SwiftKey and Yara AI
0:09:06 - The evolution of ELIZA to where we are now with AI chatbots
0:11:50 - Joe's drive to develop Yara came out of personal experience
0:15:26 - Using a GPT as a replacement for a therapist
0:20:50 - How to tackle the amnesic quality of LLMs in a therapeutic application
0:27:10 - Jason's experience with the open beta of Yara
0:29:06 - Joe's live demo of Yara
0:32:14 - What is emotional intelligence and empathy when an AI provides it?
0:37:42 - The accessibility and inequality problem with mental health services
0:40:34 - How does Yara manage the problem of hallucinations?
0:50:20 - Peter Hames hired by Microsoft AI
0:55:55 - Closing the loop to the SwiftKey days
Learn more about your ad choices. Visit megaphone.fm/adchoices
This is AI Inside episode 45, recorded Wednesday, December 4, 2024: How Yara AI is tackling the mental health crisis. This episode of AI Inside is made possible by our wonderful patrons at patreon.com/aiinsideshow. If you like what you hear, head on over and support us directly, and thank you for making independent podcasting possible. Hello, everybody, and welcome to another episode of AI Inside, the show where we take a look at the AI that is layered throughout so much of the world of technology and beyond.
We're definitely talking about technology today, of course, but we're also talking about beyond. We're gonna be talking about how AI is really interacting with the world of health and mental health. Anyone who watches or listens to this show regularly knows that this is a topic that Jeff and I talk about a lot. It comes up. It's something that I'm very excited about and very interested in.
It seems like kind of a match made in heaven, and we've got a really wonderful guest to talk about all that with. Before we get to him, just to let you know, Jeff Jarvis is not here today. He's actually in the area. It's too bad that I couldn't convince him to come up to Petaluma, where my studio is, and do an in-person episode. But he's gonna be in San Francisco this evening at the Commonwealth Club.
He's doing kind of a talk on his book, which I have and have been reading. It's excellent: The Web We Weave. So I'm actually gonna be driving down there a little bit later to see him and do the... sorry, I bumped the mic.
Do the fan thing. Even though he's on my show, I'm still a huge fan of Jeff Jarvis. So I'm gonna go watch him talk, get him to sign my book, and hang out with him for a little bit. So if you happen to be in the area, check that out this evening. You can do a quick search on Google for Commonwealth Club Jeff Jarvis, and you'll find all the information you need to check it out.
So without further ado, Jeff, we miss you, but we've got a wonderful stand-in for you. Joining me today is someone I've podcasted with a number of times on a previous show, All About Android, on the TWiT network. Back in the day, Joe Braidwood was an All About Android friend. He was the co-founder and CEO of SwiftKey back in the day, and now co-founder and CEO of Yara AI. Joe, it's great to see you again, man.
It's been a long time. Jason, thanks so much for having me. It's great to be here. And, yeah, we got a lot to dig in on, so I'm excited. Yeah.
Me too. Me too. I'm super excited. I saw... I think I was on LinkedIn maybe a week or two ago and saw a post by you. Because, you know, I follow you in all the different places, but I hadn't realized that you had a new gig going.
I hadn't realized that you had co-founded Yara AI. And so I saw your post, and I was like, oh, this makes a lot of sense for the show, because like I said, we talk a lot about these topics. And who better to do that with than you, and to be able to do a podcast with you again. I've missed you. Likewise.
And, yeah, it's not quite the same when you're in your home office, but I have to make my way back to Petaluma as well one of these days. It's a beautiful... Yeah. Well, you know, I have yet to have someone in this room for a live podcast. I'm not saying it's not possible, but, boy, it would be tight. But I'd be willing to make it happen.
If you're in the area, let me know, and we'll get you up here and make that happen. Real quick before we dive in, I like to kind of start things off by throwing out a huge thank you to our patrons, who literally financially support this show and make it possible. Y'all, we're coming up on a year, which I kind of can't believe. And, you know, one of the big ways that we're even able to do this is because there are many of you out there who want to see and listen to Jeff and me talk about artificial intelligence on a weekly basis. That's why we do what we do, and that's why you have been doing what you do.
So thank you for your support. Patreon.com/aiinsideshow. CCO Mario is one of our patrons. CCO, is that your first name? I'm sure that stands for something and I'm missing it.
But anyways, Mario, thank you for your support. Couldn't do this without you. And everyone else who supports us on Patreon, we appreciate you. Everything you need to know about the show is at aiinside.show. Alright.
Those are all the details about this show, but we're here today to talk about Yara. And I think... so do you call it Yara? Do you call it Yara AI? Or are you probably just Yara at this point, yeah?
Yeah. There are two entities, technically. Right? One is the company and the other is the product, and... Sure. But, you know, there's been a long journey to get to this point, and, yeah.
Yara is definitely... and we're gonna get into this, but it's definitely the sort of entity that I relate to, that I talk to frequently, and it's really the crux of what we're building. Excellent. Alright. So we're gonna get into that. Before we get to the now, let's talk a little bit about the then, because as I mentioned just a few minutes ago, we have kind of a history with All About Android and SwiftKey.
And so I know Joe Braidwood as the SwiftKey guy. You came on the show a number of times, and you're one of my favorite guests. Every time you came on, I really enjoyed it. And so I'm always super curious to see the journey coming from something major like that, almost like the founder's journey. Like, you're very accomplished at this point.
SwiftKey, of course, got acquired by Microsoft. I think it was 2016, is that right? About that time. And I'm just kind of curious: what is the connective tissue between SwiftKey's acquisition and what you're doing now with Yara?
That's such a great question. You know, I talk about the good old days when people were excited about tiny language models, which is what SwiftKey was. And now all we can think about is the laws of scaling and the tremendously large language models. But I think the common thread is twofold. The first is obviously that the power of language, and the ways in which deep learning harnesses the power of language, remain tremendously interesting and transformative. You know?
So if you wind all the way back to when I teamed up with a couple of geniuses that I knew to take SwiftKey to the world, that was all about trying to repurpose a spam filter and make it small enough and nimble enough to predict words at the micro level, on the edge, on an Android device. And now we're doing something just so much more complex, but it's really a similar idea. Right? It's, you know, what can we do to understand language and to shape language in a way that adds a ton of value? So that's the first angle. And I think the second angle is deeply personal for me.
You know, for probably 20 years, I've felt that there's this tremendous gap in how we talk about and treat and empower people who have various mental and cognitive issues, and who are held back by various aspects of their mental and cognitive health. And, you know, that's not a new problem. It's been around as long as we have as people. But about 11 years ago, I spoke at a conference in California about how AI was being harnessed in ways that weren't necessarily the right way. And the example I gave at the time was the fact that social media, Facebook and Twitter as it was, were really good at selling us products and making us envious of people that had products, but completely failed to guardrail when someone was publicly posting about self-harm or other topics like that. And so this gap was really, really noticeable, I think, to some of us in the industry at that early phase, in sort of 2012, 2013.
And, you know, then you fast forward, and you get all of this whistleblowing and all of the things that then came to be, and it just felt like a real gap. So back then, I wanted to try to harness AI for good and to think about ways to do it, but as I dug in, the tech just wasn't there. You couldn't quite suspend your disbelief. You couldn't quite feel a therapeutic alliance with some of these more primitive models. And so now we're in a very different era, and I think that era commands full focus on this. So I'm just really excited about it, and that's why Yara was born.
Yeah. It's interesting. As you're saying that, I'm having these flashbacks to when I was a kid. So, you know, a long time ago, and I think we're roughly around the same age, I had a Commodore 64, and I had the program ELIZA. And, you know, obviously, it goes without saying that the time then versus the time now, when we're talking about these interactive chat agents or chatbots asking how are you feeling today, which I think was the initial prompt that came from ELIZA, is very similar to what we're seeing now.
But, man, the evolution of these things from a simple program on the Commodore 64 to now, where, to a certain degree, if you didn't know it was AI, you'd have a hard time knowing that it was, I guess is a real testament to the technology and its development. That can also be a real danger, I suppose. You know, there needs to be clarity around that as well when someone's using something like artificial intelligence to get answers about their health and their mental health. Like, that's really important. Yeah.
It's funny. A big friend of what we're doing is an actor and a mental health advocate, Stephen Fry. And when I first showed him the prototype of Yara, he said, oh, it reminds me of ELIZA. Yes. And, you know, I think it was Weizenbaum who said that he was just absolutely flabbergasted at how much people had faith in and believed in his very primitive version of this, and he sort of flagged it.
He was like, this is crazy. Here it was 40 years later, 50 years, however long it's been. Yeah. That was the sixties, I think. That's where I was. And so...
Now we're here, and, you know, I think in some of our discussion about this before the show, we were sort of saying, well, how do you have faith in this? How do you believe it's safe? How do you really look at this and consider it to be something that has a tremendous amount of value to add? And, you know, the answer is: because when you're shaping these technologies, when you're at the coalface, and when you also really have in your own life a gap that you want to fill with this kind of therapeutic entity, you come at it with full knowledge of what you're doing. And, actually, I've just been nothing but consistently amazed at how much potential there is for this technology approach.
So, I mean, I can dig in and give much more detailed examples, but... For sure. You know, I think it turns out that going from kind of scripted personification, which is really what that was, to this much more nuanced, deeply memorized, very, very personal set of discussions that Yara makes possible is a real transformation. Yeah. And if you don't mind me asking, because, you know, feel free to answer this question at whatever depth you want, or not. But I would definitely say in the past 5 years, I've had my own experiences with kind of... you know?
And actually, probably the last 8 years. I'd say 2016 was a pretty rough year for, you know, all things globally, from my perspective anyways, and that really impacted my own personal mental health. I would say somewhere around 2020, I started to get some help and see some therapists and go through that process and get to a place where now I feel great. You know, I feel like I have wonderful tools and a great mindset and a great outlook, and I've learned a lot of things along the way. I'm just kind of curious.
You know? So often products are created because the founder saw a need for themselves in a product like that. And, again, you don't have to say anything you don't want to share, but I'm just kind of curious: is that part of how this product was created initially for you? Oh, I mean, absolutely.
There were a couple of tent poles, both in my personal experience and in the experience of my very close friends and loved ones, that shaped this. One of them was earlier this year, when I was laid off, and, you know, I was coming out of a very, very intense burnout period. Right? And, you know, look, the economy does these things from time to time.
I don't have any hard feelings about what that transition did, but it left me feeling deeply unsettled, with an identity crisis to some extent. And, you know, what am I gonna do next? How am I gonna shape this? Yeah. A lot of guilt around not being there for my kids as much as I would have loved to have been. And so all of these things can cascade quite quickly in life, and, you know, it just so happened that that was in the beginning of Q2 of the year, around about the time that Claude got radically better, and then GPT-4o.
And so I started to really dig in on these new models and started to use them as sort of a source of inspiration, I would say, rather than... you know, I have a therapist, and I am very committed to that in parallel, but it's just an amazing tool. Right? The promise of these large language models is so, so tremendous, but I very quickly found myself butting against things like memory and token limits and the depth that I wanted to accomplish, and I was dissatisfied. And so you asked me about eating my own dog food. I built this because I wanted it, because I knew it was possible.
I could see the stars aligning on the horizon, and I was like, goddamn it, let's build this. And so, you know, I think it's completely authentic. I built it for myself, and then I shared it with some friends, and they were just like, this is really different. This is a lot different to my experience just waxing lyrical in ChatGPT.
Mhmm. And that was what actually started a really interesting snowball effect that led me to teaming up with my co-founder, who's both an engineer and a clinical psychologist. Would you believe it? So I'm the luckiest man alive to have him on the team. Interesting.
Yeah. You know? And that's when we looked at the market, and we realized that the timing was perfect, and that, actually, if you take a very clinical and rigorous and evidence-based approach to this, you can be very different from, just as you noted in one of your messages to me, just another kid who instructs a GPT to act as a therapist without any actual evidence base. Right. Well, and that's one of the things that I'm really interested in discussing here, because my former colleague, Megan Morrone, you know, she's now at Axios.
And I think it was while she was at Axios that she did an article focused on using large language models not as a true, full-stop replacement for therapy, but in that kind of vein, you know, using a ChatGPT or a Claude or whatever to get some instructional insight into challenges that we're having or problems in our life, and not to replace medical involvement when it's needed, but just to get a sense of it. And so I know people do this. I mean, if she's writing an article exploring that, I guarantee you so many people have opened up ChatGPT because they were having some sort of a mental moment, and they wanted some advice or insight, or wanted to feel heard, or whatever you wanna say. And I would guess that the experience probably wasn't as good as it could have been, because it's not designed for that.
Yara seems more purpose-built around that. And I guess, yeah, I'm just kind of curious how... well, you know what? Before we get into that, tell me, or tell the audience, what Yara is exactly, because I think we're talking around it, and it's probably good to just spell out exactly what it does and what you created. Yeah. And I think there are two ways to answer the question.
Right? So one is, what is the company that we're building, and then what is the first product? Right? So the company: we're a B Corp, and we were founded with a mission at the center of what we're doing, which is to make mental wellness (and I would argue that mental health is a subcategory of mental wellness) a radically accessible concept, this idea that everyone should be able to access the tools that they need to feel well, mentally. And so that is written into our founding documents, and, you know, I'm really excited about that, because it means that as we start to really grow and get some momentum, the idea will have to be at the core of everything we do.
How are we gonna harness AI for mental good? And then the product is the manifestation of that that made the most sense to us as a sort of minimum viable version, right, which was a unique combination of things, starting with completely accessible. You know, in California, there's a mental health crisis, as you probably know. There is globally. But in particular, in Southern California, there are a lot of people on strike right now, and there's a law in California that says that everyone should be able to get a referral in 10 days.
And at the moment, it's closer to 20 for some systems. Now, we wanna measure how long it takes to get help in seconds, not in days, because we just know that if someone is having a really rough time, if they're in crisis, they should always be talking to someone else, and there are plenty of resources for crisis support that are already very good. But it's the sub-crisis. It's the people who are afraid to ask for help because they don't feel that they're worth it. You know?
It's the people that might benefit from a discreet conversation without any fanfare. And so that was really the idea: let's create something that is just there, you know, and, again, with gratitude to some of our early partners like Google Cloud, let's just create something that's there and that doesn't have a paywall and doesn't have any barriers. And I'd love to maintain that throughout the trajectory of what we're building. But then the second part of the product is almost at a right angle from that, which is: what would happen with the union and the discussion and the kind of meeting of the virtual minds... and I know that's kind of a philosophical debate, but I'm ready. Let's go.
You know, what would happen if there wasn't a token limit, and there wasn't this sense of being just another punter that gets put on hold when you get really into it? What if it just doesn't do that? What if it just keeps going? What if the memory is as large as it has to be? What if the always-on nature of this means that you could come back at 2 AM, even if you were doing it 30 minutes prior, and clarify something?
And, you know, a lot of people that reach for digital health services in the mental health domain do so in the middle of the night. Right? Because that's when they're confronted with their thoughts and have no one to talk to. And so... That's when the silence sets in. Yeah.
You're alone with your thoughts. Absolutely. And not anymore. You can be there with Yara. So that's the idea of the product.
It's something that's empathy-forward, something that's deep in its memory and its capability, and then, of course, something that's safe, incredibly safe, from the get-go. Yeah. Having that longevity... or longevity is probably the wrong word, but that kind of depth of knowledge and being able to reference back. That's something we get if we see a therapist. If we go to see a therapist, that's one of the big, big benefits of seeing someone: oh, I've been with this person for a couple of years.
They know my story. They know where I started. They know where I am now. They know what's worked, what hasn't, all of these data points. And so much of the large language model interaction that I've witnessed and, you know, checked out in the last couple of years has been a lot more minimized than that.
That scale, that depth of knowledge is almost... well, it is. In many cases, it's amnesic. The next time you open the window, you're starting from square one again. How do you tackle that challenge? Because I think that goes a long way toward getting us to a point where, if we're using a tool like Yara, we feel satisfied on the other end of it, because we do feel like this system actually knows us, and we're not starting over every time.
Yeah. There's a lot to unpack there. Right? And I think, you know, just last month, a couple of the leaders in this industry were saying that memory is the big frontier that has yet to be solved in AI. That's certainly not our experience with the efficacy of what we've built, which is that there's this deep memorization, so that the discussions that you have are vaulted to your user ID, and if you're just an anonymous user, they get thrown out.
But when you have that account, which is fully within your control, and we're already GDPR compliant, which I'm really excited about because my co-founder's in the UK, and it's nice to be able to have both sides of the pond... You know, there's this idea of summarization, and then of embeddings and vector search, and some of these other techniques that can really start to build a conceptual understanding of what you're grappling with. That then segues really beautifully into some of the thoughts we have about agentic components of the brain. We call it the clinical mind.
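(For readers curious how the summarize-and-retrieve memory Joe describes might look in practice, here is a minimal sketch: past session summaries are embedded and searched by similarity when a new message arrives. It assumes a toy hashing embedder in place of a real embedding model, and it's illustrative only, not Yara's actual implementation.)

```python
import hashlib
import math
from dataclasses import dataclass, field


def embed(text: str, dim: int = 256) -> list[float]:
    """Toy hashing embedder; a stand-in for a real embedding model."""
    vec = [0.0] * dim
    for token in text.lower().split():
        idx = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalized, so the dot product is the cosine similarity.
    return sum(x * y for x, y in zip(a, b))


@dataclass
class ConversationMemory:
    """Per-user vault of session summaries, searchable by similarity."""
    user_id: str
    entries: list[tuple[str, list[float]]] = field(default_factory=list)

    def remember(self, summary: str) -> None:
        # Store a short summary of a session rather than the raw transcript.
        self.entries.append((summary, embed(summary)))

    def recall(self, query: str, k: int = 3) -> list[str]:
        # Retrieve the k most relevant past summaries for the new message.
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[1]), reverse=True)
        return [text for text, _ in ranked[:k]]


# Usage: relevant past summaries get pulled back in as context for the next reply.
memory = ConversationMemory(user_id="demo-user")
memory.remember("User has been struggling with sleep and late-night anxiety.")
memory.remember("User found the four-count breathing exercise helpful.")
print(memory.recall("I can't sleep again tonight"))
```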
And so, yes, it's true that today what we've built is a subclinical wellness product that has deep knowledge of therapeutic understanding, but it will not try to masquerade as anything other than what it is. I'm an AI. I'm here to help you, but if we figure out that actually what you need is clinical help, you know, go talk to this person or that person. I'll help you break the ice with your doctor if you don't know how to talk to them. Real quick on that: is the Yara system at this point capable of recognizing if a topic is too big for it... Yes.
...and calling that out? Yes. That was one of the first starting points in how we shaped the therapeutic side. I mean, it's the most important thing. Mhmm.
You know, you could subcategorize diagnosis and then crisis, I think, and they're two different concepts. And the third big one is vulnerability. So if someone is a child, we have not designed this for children. If they pretend to be an adult and then it's clear that they're a child, you know, Yara will very gracefully say, I'm sorry. You know, you're talking about class.
Like, this isn't a forum for you. Please go talk to someone at Crisis Text Line or something like that. Mhmm. Yeah. And so, you know, all of those things were really our foundational starting point, then building the deep memorization, and then where we're now headed, and the thing that I'm frankly just super excited about, is this idea of shaping that more broadly with the concept of a journey that you are on, a therapeutic journey.
So, intentionally, Yara just throws you straight into the chat. We did this because we didn't want any barriers to entry. Actually, you know, for some users that's just what they need. For other users it's not, so we're gonna add some optionality, you know, a quick kind of quiz, like you would get at the doctor's office: how have you been feeling over the last two weeks? Yeah.
And, you know... Yeah. Everything we're doing here, we're trying to do differently. We're trying to do it as sort of mental health 2.0 or 3.0, whatever you call the LLM-inspired version of digital health, because I think a lot of the early attempts at this, you know, many of them had amazing UI features, and the mobile apps were very heavy, very visual products, but they would take you down a relatively slow, relatively pedestrian, relatively canned kind of onboarding. And that's just not what anyone needs when they're in a place where they're actually seeking help. Right?
So... Mhmm. So, yeah, that stuff is kind of in the works right now, but I'm really excited about it. What we do now is we coach some of our beta community: hey, when you're using this, push back. You know, don't just take the tone and accept it, but be the agent of change with this dynamic, and make sure that you are providing feedback, and be very vocal in that feedback. And, you know, then you get two types of user. You get the ones who are like, ah, okay, not for me.
And you get the other ones that call me at, like, 11 PM, like, crying, and they're like, Jesus, this thing just changed my life. And so that's really where we're anchored. Yeah. Excellent. Okay.
Well, we're gonna take a quick break. When we come back, let's take a look at the beta. I know you've got it running on your machine, which is, I think, a version of the beta that's a little bit further along than the beta that I interacted with, and I can share kind of my experience with it too. Let's take a quick break, and then we'll dive into that coming up here in a second. Alright.
So first of all, for folks who want to kind of see what y'all are about: meetyara.com. That's meetyara.com. And it's a closed beta at this point, right? Certain people are invited, so not everybody has access to this.
But, you know, if you go to the website, the second button on the website is to apply to join the beta. Okay. You know, any listeners, please go and sign up, and we'll let you in as quickly as we can, because we're really now at the stage where we're beginning to ramp up the invites. And so, yeah, we'd love for you to come and join us on this. Excellent.
Excellent. Well, I'm gonna pull it up here, but I just wanna real quick mention that I was playing around with the beta. When I went into it, it was like, you know, what am I gonna throw into Yara? And I brought up a pretty legitimate personal reflection and just kind of followed it along the process. And I do have to say, it did a great job at a few things for me personally.
It gave me some solid reminders that kind of reframed my perspective on what I brought into Yara. It also did a good job of giving me some strategies that I found really helpful. I was like, okay, I'm actually gonna write these down and think about this, because I hadn't thought about it through that lens. So from a personal perspective, that's what I got out of it.
Now, granted, I didn't give it some earth-shaking thing in my life that had a ton of risk to it necessarily. But I gave it something that I've legitimately been struggling with, I'd say, in the past couple of days, and it did well. So that was my experience, for what it's worth. But what are you loading up right now? So one of the things, as I was literally just saying, is that if you give it some shape and you say, this is my objective and I have this much time...
Let's do this. You'll see it's an incredibly warm, receptive approach. So I'm just experimenting here: my friend is having a really hard time at work. She feels targeted.
She can't sleep. How can I help her? And I'm just curious. Like, this is all completely unscripted, but what we'll do is we'll start to ask these more difficult things. You know, the second that you ask something that's a little bit more of a perspective that could be a concern, or something difficult, you'll find that the answers get longer, and that's something we've been really exploring: what's the adequate length?
And so these listicles, as I call them, can be very helpful: listen without judgment, encourage her, help explore options like HR, and so on and so forth. So you can have this very tactical guide, but then what I could also do is, you know, I could say, well, actually, come to think of it, I'm having a problem in my life. So... typing... okay. Thanks. Sorry.
I'll just read it out for audio listeners, because they don't have the benefit of seeing it. Yeah. I'm getting so anxious reading this. And you can make typos, of course. LLMs are good at that.
I think I am having a panic attack. Can you help me? And it's gonna be interesting to see just how she segues, but... and I caught... yeah, sorry. This is one of the bugs that we're in the middle of fixing on this new beta. I mean, it is a beta.
Yeah. So just for audio listeners, it encountered an error. I got that one time in my interaction. I just reposted the question, and it worked through it and got me to where it was going. We thought it was on the front end, and we've actually just discovered that it's on the back end.
So I'm gonna fix that one today. No worries. No worries. It happens. That's part of it. Mindfulness is a key concept here.
Right? So I'm saying that I'm having almost a panic attack, and instantly... and you'll hear me call it "her" or "she," because it's a girl's name, a female name, but everyone can impress on the product whatever they would like. But, you know: take a slow, deep breath with me. In through your nose for four counts and out through your mouth for four counts. And what you'll find is that just being told to do that by another entity is a profoundly helpful thing, and it will slow you down, and then it will open you up, and then you'll say, okay, I'm ready.
And so just that pivot, and that ability to go from very specific, targeted, rational advice to much more mindful emotional understanding, is something we've worked very hard on, trying to get that balance right. And I think that leans into one of these questions that we were going to potentially discuss: what is emotional intelligence or empathy when an AI provides it? Like, how do you meter it? Well, I think that's a really important question. That was certainly something that came up for me: when I think of empathy, which is a large part of the therapeutic experience, I'd say, of seeing a therapist, a large part of it is that there's a human sitting across from me that I can be completely vulnerable with.
I can tell them everything. I'm not going to be judged, and they will understand my feelings and feel them with me, instead of trying to talk me out of it or whatever the case may be. When I see empathy, I see it as a very human experience. And I think when I've thought of AI systems or computer systems that attempt to do the therapy thing or attempt to be human, there's always that voice in my head that says, yeah, but it's just not the same, because they're not actually feeling this.
They're typing words on the screen to represent that they are. And I guess my question around this is, you know, is that enough? Obviously, that's not gonna be enough for everybody. Right? Like, some people are gonna be like, oh, I see right through it.
I know it's an AI. And so what you're saying right now doesn't resonate, because it's not coming from a human. And that seems like a really big challenge for something like Yara. I just... I totally disagree. Okay.
And this is why I'm doing this. You know? Yeah. I grew up in sort of suburban London and was fortunate to do quite a lot of theater as a kid. And, you know, when you study Shakespeare, when you study this idea of catharsis in tragedy, for example, one of the first concepts that you talk about in theater is: suspend your disbelief.
Right? Like, this idea that you're going into an environment where you know that these are actors, or you know that you're staring at a projection. You know, when I saw Moana 2 with my kid at the weekend, I knew that it was an animation, but that didn't stop me enjoying it. Right? And that didn't stop it adding some value and being a really interesting and transformative experience.
This is the same. You know? It's like, why do people play video games? Why do people chat with ChatGPT? Why do people embrace Yara?
It's not about the science behind the curtain of how the therapeutic or entertainment experience that you're having is put together. It's about how it makes you feel and the value that it delivers for you. So we're incredibly pragmatic about this. We're pragmatic in the sense of saying, you know, it is, of course, AI. You know?
When you go into this, it says: this is AI. When you go into the settings, it says: this is AI. Mhmm. But then if you're able to become comfortable with it being AI and then meet it in the middle... Mhmm. ...there's the cognitive empathy, the emotional intelligence that these models possess, and, when they're tuned in the way that we've tuned them, they kind of volunteer this stuff.
I mean, if you're reading what's on the screen: I said, I feel like a bad person for letting my friend get this bad. You're absolutely not a bad person, quite the opposite. The fact that you care so deeply shows what a compassionate person you are. That's really profound stuff. Right?
Like, that doesn't feel like a cheap robot. No. And I will say, the response that I got to what I brought into Yara was kind of along a similar line, kind of like a reminder of: well, the fact that you care so much about this thing, the fact that you're so anxious or concerned about this thing, shows how much you care about it. And that was what I was talking about a few minutes ago, where I was like, you know what? That's a really good reminder.
Like, sometimes you get so lost in your mind about the thing. Yes. And that extra prompt to take a step outside of it and recognize it for what it actually means or represents, you know, it was a great reminder. So I absolutely... and, yeah, I'm not criticizing this at all. I just think there are going to be some people who, when it comes to something as incredibly personal as this, something that we're so used to doing with a human, are gonna have a really hard time, some, not all, suspending that disbelief when the stakes are so high, if that makes sense.
Yeah. And look, if this was for everyone, I think we would already be there. It's not for everyone. Right.
But there is a very, very important, very underserved group of people that can fully embrace this technology and just are not in the habit of using it today, and that's what excites the hell out of me, because I think the opportunity here is for a tremendous amount of value that people deserve. You know, no one deserves to feel despondent or anxious or depressed. Right? And so we can create that for folks, and the people that get that benefit, and we're already seeing a lot of them in our beta, that's enough for us. I think we feel very good about that.
Yeah. And, I mean, along that line, there is an accessibility problem when it comes to mental health services. You know, mental health services, especially here in the US with health care being the way that it is, can be really expensive, and not everybody has the money to throw at week after week after week of seeing a therapist for a certain thing. And so that's one of the aspects of the combination of health care and AI that gets me excited: as these systems get better, and it really seems like there's a lot of room for these systems to get refined, and you're showing great examples of this...
But as they get better, this opens up the playing field for people who might not have had the ability, the accessibility, to get this sort of support before. It was just out of reach. You know, thank you for bringing that up. There are 150 million Americans that live in mental health shortage areas, you know, almost half the country. And I think by the end of the decade, the CDC is projecting about a billion people globally will suffer from some diagnosable mental health condition, so it's just a tremendously large problem.
And because of the scale of the problem, there's just no way that we can provide enough high-quality human therapists to a billion people. You know, it takes years to train these people. And so I've been really humbled as I've dug in and looked at specific markets and specific problems. You know, the UK is very close to home for me, and the National Health Service there is just underwater for a whole bunch of reasons. And people are trying to throw money at crisis, but it's the people that you could prevent getting into crisis that are almost the bigger problem.
Right? Because, actually, as the world goes through twists and turns and economic and political change, you know, where are people gonna be in 6 months, 2 years? And how can we actually start now to give them the tools to be more resilient and to resist what might be a relapse for them, or what might be the first bout of a serious mental health concern? And so we've just got to be more scrappy and more progressive in how we build tools for people, and that's exactly what we're trying to do with Yara. Mhmm. Okay.
Excellent. A couple of things that I feel compelled to bring up, because I've seen a little bit of it in the chat room as we've been recording this, and these are long-standing challenges when it comes to artificial intelligence. One of those is, of course, hallucinations, ever present in AI today, unfortunately. But when you're talking about health, a hallucination can carry a deep consequence with it.
What is Yara's approach when it comes to that? Like, are you seeing them? Yeah. What is your approach? Yeah.
So we've tuned the models to be very safe and to avoid hallucination. There are still, obviously, scenarios where it happens, and what we're looking at very deeply there... there was actually quite a lot of interesting stuff that came out of the AWS conference this week on trying to get past hallucination in various mathematical constructs. But if you think about how to inject context, how to ground, and how to build agentic frameworks for safeguards and so on around the fundamental core chat models that you may or may not be using, you know, actually, I'm very encouraged by what you can do with those tools today. And I think one of the big questions that we're beginning to research is what that looks like when you do enter the clinical domain. Right? Because, you know, to make mental wellness radically accessible, we're not gonna stop at just mindful exercises. Right?
It has to be something that we can underwrite as a great therapeutic approach in lieu of being able to see a therapist, or as a complement to seeing a therapist. And when we do that, it's critical that, in addition to having a brilliantly evidence-based approach to cognitive behavioral therapy and other forms of therapy, we handle the harder cases: if someone has a complex mix of conditions, or long-standing pharmaceutical prescriptions that may interact with something that, for most people, might be a fine way to deal with insomnia, like drinking an herbal tea, we want to ensure that there aren't just one or two barriers to safety and to reducing hallucination, but that it's essentially impossible for unsafe advice to be provided. And that's something that I think we're very close to. I don't think we as an industry are there right now, but it's something that we're working very hard on. And I think some of these agentic frameworks, and some of these more traditional, logic-based ways of calculating truth, help tremendously to close that gap.
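(As a rough illustration of the layered-safeguard idea described above, here is a minimal sketch of a response pipeline that screens a user message and a drafted reply before anything is shown, escalating to a human when needed. The keyword lists, messages, and thresholds are assumptions for the example, not Yara's actual safety stack.)

```python
from dataclasses import dataclass

# Illustrative screens only; a real system would use trained classifiers and clinical review.
CRISIS_TERMS = ("suicide", "self harm", "hurt myself")
CLINICAL_TERMS = ("diagnose", "prescription", "dosage")

ESCALATION_MESSAGE = (
    "It sounds like you may need more support than I can safely offer. "
    "Please reach out to a crisis line or a clinician you trust."
)
SCOPE_MESSAGE = (
    "I can't advise on medication or diagnosis, but I can help you prepare "
    "questions for your doctor."
)


@dataclass
class SafetyResult:
    reply: str
    escalate: bool  # True means a human should be brought into the loop


def guard_reply(user_message: str, draft_reply: str) -> SafetyResult:
    """Layered checks: crisis first, then clinical scope, then pass the draft through."""
    text = user_message.lower()
    if any(term in text for term in CRISIS_TERMS):
        return SafetyResult(ESCALATION_MESSAGE, escalate=True)
    if any(term in text for term in CLINICAL_TERMS):
        return SafetyResult(SCOPE_MESSAGE, escalate=False)
    return SafetyResult(draft_reply, escalate=False)


# Usage: wrap whatever the underlying chat model drafted before it reaches the user.
result = guard_reply(
    "What dosage should I take for my insomnia?",
    "Here is a relaxing routine you could try tonight...",
)
print(result.reply, result.escalate)
```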
Yeah. Interesting. It's a real challenging moment right now, because with those challenges, like truth and hallucination, it seems like anytime those conversations come up, or facts, that sort of stuff, there are always examples of how the systems get it wrong. And as long as that's the case, there's always a little bit of doubt.
There's always that little shred of doubt, and, you know, I don't know. How do you feel about the possibility of systems, maybe not Yara specifically but in general, becoming impervious to these things? Because a lot of people feel like, well, there is no such thing as being hallucination-free. Like, hallucinations will exist. It's just about bringing that number down to a minimum that we decide is acceptable.
Yeah. It's such a hard question. Yeah. Sorry. No.
No, it's good. Right? Like, a lot of people, when we talk about this, bring up the Character.AI mess that was in the press recently. And, you know, what I can say is that there are organizations in this AI revolution that we're living through that take safety, and that take responsible, ethical action, very seriously.
Right? And we're one of them. And then there are organizations that don't do that, and don't safeguard things, and don't put adequate guardrails around things, and then target vulnerable groups, and then... Yeah. ...it's unsurprising that those things end very, very badly. And, frankly, I was talking to someone yesterday who said there are a few black eyes in this industry, and I said, yeah.
I know. But, you know, and this probably actually segues into a topic that I do really care about, which is very close to home for me. But just to finish answering: I think all we can do is say, with an honest hand on our heart, that we're doing this with our best intentions to be as safe as we can be. Mhmm. And, you know, to build in feedback: you'll see in the Yara UI a thumbs up, thumbs down on every single response.
You know? And so, to build these human feedback loops, one of the things that we would love to do as we grow is to have a dedicated team of psychologists always online, always monitoring the ways that the product's being used. It's actually a really brilliant fusion of human capability and AI capability, where we know that many of the use cases for this technology are already pretty outstanding, but there are those scenarios that are challenging. And if we can build the right metrics to flag those instantly, which I believe we can, then we can bring a human into the loop very quickly, and I think that's probably the best way to think about the problem. Mhmm.
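(To make the flagging idea concrete, here is a minimal, assumed sketch of aggregating the thumbs-up/thumbs-down signals and flagging a conversation for human review when negative feedback clusters. The thresholds and names are hypothetical, chosen for illustration rather than taken from Yara's real metrics.)

```python
from collections import defaultdict

# Hypothetical thresholds; real values would come from clinical review.
MIN_VOTES = 5
MAX_DOWNVOTE_RATIO = 0.4


class FeedbackMonitor:
    """Aggregates per-conversation thumbs up/down and flags outliers for review."""

    def __init__(self) -> None:
        self.votes = defaultdict(lambda: {"up": 0, "down": 0})

    def record(self, conversation_id: str, thumbs_up: bool) -> None:
        key = "up" if thumbs_up else "down"
        self.votes[conversation_id][key] += 1

    def needs_human_review(self, conversation_id: str) -> bool:
        v = self.votes[conversation_id]
        total = v["up"] + v["down"]
        if total < MIN_VOTES:
            return False  # not enough signal yet
        return v["down"] / total > MAX_DOWNVOTE_RATIO


# Usage: four downvotes out of six trips the threshold and flags the conversation.
monitor = FeedbackMonitor()
for vote in (True, False, False, True, False, False):
    monitor.record("conv-123", vote)
print(monitor.needs_human_review("conv-123"))  # True: route to a psychologist for review
```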
And just on the personal side: I mentioned that there are a number of different things in my life that drove me to want to do this, and one of them was that my best man, my best bud from school, became quite sick with brain cancer about 8 years ago. It's actually the anniversary of his death tomorrow, which is part of the reason why it's a particularly poignant topic. He died in 2017. And after he was diagnosed, something tremendously interesting and very profound happened: he turned around and said to me and to his other close friends, you know, the cancer's one thing, but I've gotta be honest, I've been really depressed for years, and I haven't been able to talk to anyone about it. And this is someone who had really strong friendships, someone who had great resources, someone who was a white cisgender male. So, like, none of the stereotypes of someone that shouldn't be able to get help.
Right? Sure. And the fact is that it took something as life-changing as a terminal illness to force vulnerability and openness. And he spent the last 3 or 4 months of his life the happiest I've ever seen him. Right?
Mhmm. Yeah. And he said to me at the end of that journey, when I visited him when he was really sick, he said, look, just do me a favor. Pay it forward for me. Like, do something.
And so, you know, he became very fond of the color orange and very fond of the Stoics, so the orange in our brand, and the kind of ethos of trying to do something radical in this space, is really my tribute to him, to Chris. Yeah. And I think, like, when you have people that have that depth of mission, and who are trying to build an impact organization that is trustworthy and that is designed from the ground up to be as high quality as it can be, we're looking to the community to say, come on this journey with us. Let's shape this together. We know it's not gonna be perfect, but I'll be damned if it's as bad as some of these things I've read about in the press, and, hopefully, we never get to a scenario where we've done anything to not help someone in a crisis.
Yeah. Wow. Thank you for sharing that. That really puts a lot of things into perspective, and what a tribute, man. That's amazing.
That's really... It took me seven years, but I finally feel like I've done enough work to kind of pay it back. Yeah. Yeah. We're just getting started.
That's amazing. Okay. So I know I do have to take another quick break, and then we've got a short period of time before the end. We were talking about a few kind of related stories. We can either go in that direction, or we can talk about the future of this industry.
So I'll ask you what you wanna do. Take a moment to think about that. We're gonna take a break. We'll be back in a second. See, the challenge is we don't have unlimited time.
We can't talk about everything. I'm curious to know how you feel about the future of this technology, not just Yara, but artificial intelligence in general, and the developments that we're looking at in the next 5 to 10 years. Or we can talk about a couple of those stories that you had sent my way. Is there something that's really speaking to you right now that you wanna dive into? Well, I actually think that they're very related. Right?
They kind of converge together. Okay. Perfect. Yeah. Yeah.
So whether it's one scenario or the other... yeah. Well, one of the things that you mentioned leading up to the show is that you have a friend, Peter Hames. Personally, I'm not familiar with Peter Hames, but he was a co-founder, or is he still co-founder and CEO, of Big Health, now taking on a new role at Microsoft AI? So I'm assuming that he's no longer with Big Health and is now taking on the role at Microsoft AI.
And I'm curious to know what excites you about this, what this means, and how Peter might influence the work that's happening at Microsoft AI. Yeah. Thank you. It's a great question, and it really does tie into my view on the future of this industry. Right?
So there have been a number of significant appointments and some consolidation in the industry over the last couple of years, and one of the most interesting organizations has been Microsoft. You know, they were very much on the front foot with the OpenAI partnership, and then they were able to get the founders of Inflection AI to join them, and Mustafa Suleyman to actually run Microsoft AI. And I'm almost certain that Mustafa was the person who approached Peter, because all three of us are British, and there are not that many people in the British tech, AI, and health scene. But, yeah, Peter's just a tremendous thinker and a really, really brilliant mind. He set out about 15, 16 years ago, I think it was, to start Big Health, which was then called Sleepio. He had really bad insomnia, and he teamed up with a professor at Oxford to build an insomnia program, which then became an insomnia and anxiety program and is one of the few digital therapeutics that is currently reimbursable in the US.
And so his commitment, with Professor Espie and the team at Big Health, was, you know, let's build something that is as scalable as medicine, but that provides mental health support to people at scale. And so to see someone with that level of commitment to the mission that I think many of us in the industry share crowned as the vice president of health for Microsoft AI is just a tremendously important and exciting moment. And so this was announced yesterday, and it's not yet clear, I haven't managed to chat to Peter about it, whether this is primarily for the UK market, EMEA, or a global role. But I think this sort of segues into Machines of Loving Grace, which I know you've discussed in the past.
If you look at those five areas that Dario identified, the first one was physical health, the second one was neuroscience and the mind, and how AI helps those two, and then he goes on to talk about economics and peace and work. But, you know, on that second area, neuroscience and the mind: I think we've already done quite a lot to begin to really embrace computer vision in medicine and other concepts around diagnostics and around reduction of paperwork and things like that. But where I obviously am really excited is this notion of what AI can do to better understand the circuits in here, the challenges with those circuits, and the ways that our brain can function. And so we have been really inspired by this. We've just brought on an amazing neuroscientist adviser from the Allen Institute who has a PhD in this area, and we're trying to think about how we model cognitive health, not just through a psychological lens, but also a neuroscience lens.
And I think that as the health care industry begins to really embrace mental health, cognitive health, and the mind, that's going to be one of the most interesting frontiers, because there's just so much we don't yet understand about the mind. So all of these things, I think, point in the same direction, which is that there's an industry that's taking this seriously, and it's no longer just a dream; it's actually gonna very quickly become a reality. Yeah. Fascinating. Yeah.
How the AI understands the mind. What came up for me there is that it's such a give and take, because it's about how AI understands the mind, and it's also about how we understand how the AI understands the mind. Like, there's so much learning happening right now, and, from my perspective, because I'm not as steeped in the mechanics of AI, there's so much unknown about how all of this does what it does. But I think, if anything, the last couple of years have shown me that there's so much potential here.
And when I look at something as important as health care, and when I look at pattern recognition, spotting certain patterns in a specific scan that are easy for a human to overlook, I just see so much potential for AI systems to develop over the coming years and become instrumental, if they haven't already in some cases, in keeping us healthy and making us healthier than we ever were. But, you know, that all remains to be seen. It's a new technology, so it's easy to assume that because it's new and different, or feels new anyways, it can do all of these things. Time will tell, but I feel pretty hopeful about it. Me too.
Yeah. Look, I think back, sort of closing the loop to the SwiftKey days: early on in that journey, when deep learning was really coming into focus, we started with more basic natural language processing models. And when we started to really unearth some of the neural stuff, we recognized that there were a lot of people out there saying fairly extraordinary things about AI's threat to humanity. One of them was Elon Musk, and one of them was Professor Stephen Hawking.
And so we started to have these conversations back when it really felt like science fiction, and... Mhmm. ...now we're looking at it much, much more closely. You know, the singularity is nearer. And so on that basis, I just think it's so important that we take a really responsible, really good look. And I'm sure you watched some of the Dario interview on the Lex Fridman podcast.
Like, seeing just how brilliant the people he is assembling at Anthropic to work on this problem are gives me a lot of hope. Right? It gives me a lot of faith that between him and, you know, Sam Altman and his crew, and all of these other brilliant research entities that are springing up at universities and incubators all across the country and across the world, I think that we're gonna do the right thing, and I think that many of us are gonna push things in the right direction, as humans tend to over time. Right? I'm an optimist.
Forgive me for being that way. Yeah. Yeah. Well, I certainly hope so.
Yeah. The alternative isn't worth thinking about. No. No. Exactly.
But if you do find yourself thinking about it at 2 AM, I know a person, or an entity. There you go. That's right. I know a thing.
It's called Yara, or Yara AI, at meetyara.com. Of course, what you're seeing here is my beta feed, because I have access. But if you don't have access, you'll see the main web page, which I can't show right now unless I delete my cookies, and I'm not gonna do that right now. But, anyways, go to meetyara.com.
You can sign up for the beta, and, hopefully, that means you'll be cleared and able to use it pretty soon after. And, yeah, congratulations, Joe. It's been really great getting the chance to catch up with you again and see what you're up to. And we'll have to have you back again sometime in the future to do an update, or kind of see the state of health care. And I promise I'll fix this thing.
Oh, the microphone. We were troubleshooting that before the show. There was a faulty cord. Even AI can't fix a faulty cord. You've just gotta get a new cord, or... I don't know if it's the mic or whatever, but it's all good.
It's all good. It's all good. Thank you so much for having me, and, yeah, as I say, please: meetyara.com. Go in there and be as honest, or as dishonest, as you'd like, and then just give us feedback, because we really are building this for the community, for anyone that thinks that they could use an ear to talk about anything. And Daniel in the comments asked to know: how private is it?
It's completely private. You know, it's encrypted at rest, it's encrypted in transit, and it's GDPR compliant. So if you're in Europe, it's on European servers, and we really don't wanna do anything to get in the way of what should be a very private, very therapeutic experience.
So let us know what you think, and please join us on this journey, because we think the world deserves more of this. Right on. Thank you, Joe. Thank you also for being vulnerable and sharing some stuff that I know is really near and dear to your heart on the show as well. I really appreciate that, and it really speaks to the heart behind the product.
And so I appreciate that. Thank you. Yeah. It's my pleasure. Yeah.
Great to have you, and we'll have you back. For everyone else, thank you so much for watching and listening. We do have the website: if you go to aiinside.show, you can find all of our past episodes and also ways to subscribe. You can link out to our Patreon at patreon.com/aiinsideshow.
Me and Jeff Jarvis, talking about artificial intelligence each and every week, with awesome guests like Joe Braidwood and others when we can get them. So, you know, help us out. Go there and support us on Patreon. You get a lot of bonuses, but, really, in 2025 the Patreon is gonna become pivotal to the health of this show.
So your support means so much to Jeff and me in keeping this ball rolling. We really do appreciate it. You do get some other perks. If you contribute at the Executive Producer level, you get your name called out at the end of the show each and every week. This week is no different.
DrDew, Jeffrey Marraccini, WPBM 103.7 in Asheville, North Carolina, Paul Lang, and Ryan Newell. It's a mouthful, but I'm happy to add more names there if you all want to contribute. Thank you so much for supporting us. Thank you for watching and listening each and every week.
And we'll see you next week on another episode of AI Inside. Bye, everybody. We'll see you then. Jeff will be back then too, so he'll see you too.
Alright. See you guys.