HealthTech Insights: AI Psychiatry, News, and Matt reviews "Cabrini"


Today's Show Segments:

Conversations On Healthcare - AI in Psychiatry

Exploring the shift from traditional psychiatry to AI in treating mental health, Dr. Jodi Halpern addresses the benefits and ethical challenges of technology-based solutions for depression and anxiety. As a co-founder of UC Berkeley's Group for Ethics and Regulation of Innovative Technologies, she examines the industry's rapid adoption of AI and bots to support the 26% of Americans facing mental health issues annually, while scrutinizing the ethical implications of their marketing.

Dave Thompson News Review
News Director Dave Thompson talks about his new newsletter and reviews the news.

Matt Olien - "Cabrini" and the Oscars Summary
Matt reviews the movie "Cabrini" and the Academy Awards.

Highlights from Conversations on Health Care segment:

  1. Diverse Landscape of Mental Health Apps: Many mental health apps operate under wellness or companionship categories to avoid FDA scrutiny.
  2. AI Therapy Bots: Emerging trend in AI replacing human psychiatrists, raising questions about efficacy and ethical implications.
  3. Effectiveness of Digital Mental Health: Studies show promising outcomes in digital mental health, particularly when involving interaction with a genuine human.
  4. Youth Mental Health Concerns: High levels of loneliness among youth linked to extensive online engagement, raising questions about technology's impact on mental health.
  5. Ethical Considerations and Regulatory Gaps: Need for regulatory frameworks to address concerns such as addiction, suicide risk, and the influence of technology companies in mental health care.

Transcript from Conversations on Health Care segment:

Mental Health AI Revolution

Mark Masselli
Welcome to Conversations on Healthcare.

Dr. Jodi Halpern
Thank you.

We want our audience to have a better understanding of what apps and what kinds of technology we're talking about, and whether the research shows they work.

Dr. Jodi Halpern

First of all, in my work, I'm not just looking at apps that call themselves mental health apps, because in order to avoid the very expensive requirements of going through FDA approval, the majority of apps that people are using for mental health are not called mental health apps. They're called wellness apps or even companionship apps. So most of what we know comes from those apps rather than from the few more recent mental health companies.

And let me say a few things, because I know that your group has interviewed many people on digital health more generally. I'm specifically talking about therapy bots that replace psychiatrists and psychotherapists. There's a lot of work in digital health that you've covered on your show, such as the use of AI for medical records and the use of AI for diagnostics, and in mental health, as in other parts of medicine, that's a whole different thing that's well underway, quite advanced, and showing a lot of promise.

But the very new thing, in just the past few years, because of the loneliness and mental health crisis both in this country and the world, which is really something that needs attention, is that there are people who see this as a big business opportunity and have been developing AI-related services, especially bots. And now, increasingly, these are being used to provide mental health services; we'll talk about how they operate. What we know is that we have very little data on the empirical outcomes of using a bot, and no human, for your mental health care.

Now, as for AI-assisted care: first of all, digital mental health. The biggest study just came out in Nature last week; it actually used AI machine learning to analyze outcomes in digital mental health. That's when people basically send digital messages to a genuine human and do therapy through messaging.

And that, we have data showing, can be effective. So this study, which JAMA just put out, had 166,000 clients. And so, you know, if you have a real therapist that you contact in a digital way, just through text messaging, that may be helpful on a regular basis with anxiety and with mild to moderate depression.

The fact that there is another human there, and you know it, seems to make it effective. And of course you both also know from all your work that using Zoom for therapy during COVID, I mean, we have a lot of that data, and that has been effective as well. But I'm specifically targeting this big business investment, six and a half billion dollars in expected profits, in using bots to replace human therapists.

That's what I'm specifically writing a book about, engineering empathy, and focusing on and doing research on, because that's the new big area. I will say one other thing: I mentioned the mental health and loneliness crisis. In this country, we know from a Harvard study that came out in 2022 that 36% of all Americans suffer from extreme loneliness, as do 61% of young people and 51% of mothers with young children.

That's a huge amount of extreme loneliness. We know that there's been a worldwide 25% increase in serious mental health issues, serious depression and anxiety. And that kind of increase worldwide statistically is just unbelievable.

Like if we cured the three largest cancers in the US, we would only get a 2% change in mortality. So to have a worldwide increase of 25% in mental health diagnoses is astounding. So there is a huge reason that we care about getting to many more people.

And we also know, and you've covered, that access to care is very limited. So I understand very much the need to use technology. The question is how best to use it.

Margaret Flinter

I wonder if you could just speak maybe for a moment about the degree to which we're looking at this possibility of these alternative technologies with children and adolescents as well as with adults. Any particular concerns there? Is there as much research going on with children and adolescents as there is with adults?

Dr. Jodi Halpern

That's great. I'm glad you asked me that, because the area that got me to really become more of an advocate and get involved in doing press interviews is my concern about what is happening regarding children. So as I said, 61% of adolescents and young adults, we have that data now, suffer from extreme loneliness in the United States. That's nearly two thirds.

And the only country with data like that is South Korea. What do the US and South Korea have in common? Our children spend eight to 10 hours a day on online social activities.

This is in addition to doing their homework online. So they basically are either gaming or on social media, as far as I can tell, every minute they're awake. I mean, unless they're really not sleeping.

So- Also a problem. Well, no, it's related. It's extremely related, because a lot of my work, as a psychiatrist who's also studied neuroscience, has been showing the mechanisms that for-profit social media companies use, all the large companies. The seven largest media and digital companies, Google, Meta, et cetera: any two of them have money equivalent to the whole federal government.

And so their power to determine and keep us from having regulation is very, very large. So we're the most unregulated country compared to Europe and some of the Asian countries that have anything like our scope. And one of the things that the UK did years ago was regulate marketing to children for social media and gaming, and the technology that gets kids addicted, which is irregular rewards.

It's the same thing that gets us addicted. In fact, Instagram uses it, which most people don't know. Every time you put up something, a picture, Instagram doesn't just give you back your likes.

It saves them up in bundles and gives them to you irregularly, the way slot machines work in Las Vegas. And that technology, which we saw the patents for 10 years ago and started talking about then, develops dopamine surges that get you addicted. So the fact that we're all addicted to our phones, addicted to social media, and addicted to gaming is not accidental.

It's not accidental. It's the marketing model of those companies. And I am seeing the same marketing model in the relational support bots for children.

And so that is what concerns me because it's one thing to say we need to get mental health resources that use technology into the schools for kids. We do. We do need to help kids.

And kids are comfortable with technology. So if we have ways of them connecting with real humans or other kids, or using some online technologies that I'll describe that don't make you addicted in the same way, that would all be very useful. But we actually have a good series of reviews by business reviewers. For example, the Harvard Business School has a very in-depth letter from an engineering and business expert on one of the most important mental health companies that has gone through the FDA and has been held up as a role model, and on how their business model is giving the app free to kids in schools, an app that just has a bot.

You never talk to a human because it's quote, a sticky technology. And therefore children will get used to going to a bot rather than a human for mental health. And they'll grow up to be adults who are used to only using bots.

And so it's a very good business model for a company. So there's a for-profit motive in getting this adopted, even from very professionally endorsed companies, if they are for-profit companies in the mental health sphere.

Mark Masselli

To some degree or another, there's a group of people who think generative AI is a ray of innovation, but we're at an early stage, right? And it will get more sophisticated. It will get more user-centric.

And obviously it's not a magic wand. We have issues around data privacy, informed consent, bias mitigation, and the like. But what if it were nonprofits doing this? Are there any examples of nonprofits who've entered this field? I can't help but think there must be social entrepreneurs who are thinking about how to design this technology to do good.

So, anything on the horizon? I obviously understand your focus on the fact that there are some bad actors whose intention is to hook people on it. But it seems that there is a shortage of behavioral health providers. Also, we're looking at data sets showing that about half of behavioral health providers have some burnout associated with them.

So we have a paucity of providers, and we have this enormous number of people, you laid out the numbers very nicely, who are impacted, who need support. So I guess I'm trying to figure out, do you see any roadmap? I think you're trying to lay out the guardrails, hopefully in your book, and I wanna hear more about the book, but guardrails about, hey, be careful here.

But is there a pathway perhaps for using this technology in a progressive and supportive way?

Dr. Jodi Halpern

Definitely, definitely. And I did start out, in a way, with the concerns about the business models that we currently do not regulate. And really, by the way, I don't even think it's bad actors.

I think it's the way the United States compares to Europe in our lack of a regulatory standard, and in seeing regulation as negative as opposed to something that can help advance the field. We also started a Kavli Center at Berkeley for Science, Technology and Ethics, where we work with scientists and engineers and show how good regulation actually can further the good use and the further development and advancement of technology for good purposes.

So that's what my whole life is actually devoted to: finding out how to use the best cutting-edge science and technology to go forward. But I will say that in the UK, there's been excellent regulation, which has already allowed for better uses. And one of the main uses is in the UK. As we all know, large language models like ChatGPT only emerged onto the scene about a year ago, which is amazing because they've influenced so much already.

Machine learning had been influencing other such products for many years. But the National Health Service is the first to institute this: on the large volume of calls that come in for primary care appointments, they are using a large language model to listen in with the provider on the call. Because we all know, well, people in health know, that in about 75% of primary care visits, people might have a mental health need, and it's very often undetected, because people don't come right in and say, I'm depressed, or I'm suffering from really severe anxiety.

So, and as humans, because doctors and nurses and primary care providers are so busy, and because of burnout being so high, also over 50%, we miss a lot, we all miss a lot. I mean, the thing is there's nothing perfect about human providers to say the least. And so having AI listen in with and use large language models to detect depression and anxiety is a wonderful use potentially.

I just wanna say one thing about my opinions on the evidence that exists about what might be harmful or what might be good. The most important thing we need, and it's what I'll talk about before we finish today, is to require post-marketing research in all of these things. I'm very involved in the fields of gene editing and neurotechnology as well; those are the three arms of the institute we co-founded, neurotechnology, gene editing, and AI, and the histories of regulation in these different fields and helping advance them. And what we're seeing is a real lag in AI.

And for gene editing, we have post-marketing research requirements, because some of the things that can emerge with gene editing you just can't know in a one- or two-year trial. And that's definitely true for AI, because it's changing so rapidly. So I think we should have that kind of post-marketing research attached; it doesn't necessarily have to be only public entities that create these services. All kinds of private entities that do medical research in this country are regulated by Congress to have IRBs, which are these institutional review boards, and we should have the same for AI. We should have AIRBs, where, whether it's a private entity or a public entity, if they're doing something highly innovative and we don't know how it'll affect people, because it's changing rapidly and we are changing rapidly, we should have requirements for post-marketing safety research findings to be publicized.

So I don't think it's so much bad actors as that the US always lags compared to Europe in regulation. And I think everyone who goes to a doctor or a nurse these days knows they don't look us in the eyes, because they're so nervous about getting everything on the record. And I really care about burnout, especially after COVID.

Many of my dear medical students, I've been training the most empathic medical students, and I've worked with some nurse practitioners through the years who are the most empathic providers you could ever meet, and they are burned out. So if we could just use large language models to do the medical records, that alone would be a huge, huge positive change.

Margaret Flinter

You know, I wanna ask you about something that I think, you know, pops up and is bothersome: is it urban legend or is it real? You know, can bots go rogue and advise people to harm themselves or others? And I think maybe more of an issue, from my point of view, is can they detect that somebody is at risk of hurting themselves or hurting others through language, and is there a system to alert besides, you know, hang up and call 911, which is not always gonna be the right answer.

Tell us a little bit about that, and how much assurance do we have that this is something the tech companies are looking at and making sure they've got control of?

Dr. Jodi Halpern

Margaret, you and I are totally on the same wavelength. You just basically described the exact set of concerns I have, which is not the rogue examples, which I'll go over. There are some rogue examples, and there are times where the large language models have gone crazy and told people to kill themselves.

And one man in Belgium did, and his wife, they have young children, is suing that company. We've seen, there was an anorexia website where a large language model was put onto the site and immediately went rogue with certain folks and told them how to lose more weight. So large language models are still early enough.

And this was actually six months ago. They're young enough in their technological development that some of them have glitched and done these crazy things. But we also unfortunately know that in mental health there are some awful mental health providers, although at least they lose their license if they do terrible things.

But it's not so much the dramatic rare glitches, it's the everyday problem, which is incredible to me, which is this: none of the mental health services that I have reviewed, and I could definitely be missing some, but I'm talking about the ones that actually call themselves mental health providers and that are involved in FDA approvals, as soon as a person mentions suicidality, they basically sign off and tell you to call 911. And it drives me crazy, because we don't have the data yet, but there's gotta be high risk to people because of that.

So one of my biggest pet peeves is that those companies need to do something more like a warm handoff at that point. One thing, and this has to do with the marketing, which we can talk about more or less: they're marketed to people as your trusted, empathic companion. One reason I got involved is because I study empathy, and people ask me, are they really empathic?

No, they're not really empathic. They're machines that simulate language, but they feel useful to people and they're marketed as your most trusted companion, your therapist who you can really rely on. People do really rely on them.

And then they talk about suicidality and they're dumped. So my number one concern isn't the dramatic cases; there'll be lawsuits for those, but this is going to be a very big problem. That's why post-market research is so important.

We need to know what happens to those people. And I really can't feel comfortable with them not having actual human handoffs at those points.

Mark Masselli

Doctor, you're an expert in ethics. And I think most of us saw the highlights of the recent Senate hearing with social media executives. The accusations were really jarring.

They were accused of promoting pedophilia, among other things. You live and work in the Bay Area, which is the center of that tech industry. I'm wondering if you have a sense of the tech sector having a heart, and whether it has ethics it abides by.

Dr. Jodi Halpern

Well, first of all, let me say that those hearings were very painful, because there were families there who've lost their children, which, for any of us who have raised a child or have not, is just heartbreaking. I have not studied the extent of pedophilia and things like that. But the point I brought up much earlier about most of these sites being engineered to create addiction is a big cause of children becoming more isolated and suffering from worse depression.

So I do think that the social media and gaming sites, no matter whether they have terrible things like pedophilia or just everyday gaming and social media, they're using that dopamine-releasing irregular reward system that makes a child or adolescent addicted. And those addiction models then prevent the child from having enough real life time that might be helpful for their mental health. So that's the way I think about the risks of those companies, which are very serious, I think.

But in terms of a positive future of like, how do we make sure tech has empathy, has a heart, has ethics? That's what I'm devoting my life to at this point. So we have two things.

You mentioned one group that I've been involved in and co-founded at Berkeley, but the bigger thing we did is we created a center, an international center now, the Kavli Center for Ethics, Science and the Public. And we have these three areas of science and technology: neurotechnology, gene editing, and AI. And we have 10 fellows in our first cohort; they've been here about a year and a half now.

And these are postdoctoral researchers or senior graduate students, and several of them are in AI. They are the engineers that will develop the new technology, or they're in neurotech or gene editing. And they have chosen to do this fellowship with us so that they have built-in professional commitments to thinking about the societal, public, and ethical implications of their work.

And our idea is to create a cohort of people like this and to grow it and grow it and grow it. And that's what I'm devoting my time to now, is trying to develop people who from the very beginning of their professional formation as engineers have this as part of being a good engineer.

Mark Masselli

Can you tell us a little bit about the book you're working on right now?

Dr. Jodi Halpern

Well, I'm in the early stages of the book, Engineering Empathy, and I've covered some of the things that we're looking at. But I'm particularly interested, and some of it has to do with having studied empathy itself in clinical care for 30 years, so I'm very aware of what helps patients and what doesn't. And what I've come up with through the years, in various research studies, sometimes in collaboration with neuroscientists, sometimes in collaboration with medical educators or sociologists, is a very robust model of empathic curiosity in care.

That what patients really want is their doctor or nurse to listen to them and see them as an individual with particular and specific hopes and fears. And what's interesting to me is that AI can, with large language models, simulate some of that. So some of what I'm studying is what are the therapeutic benefits of having that simulated?

What are the limitations of it? Another thing that I'm studying, which is a little bit off topic for today but relevant, I think, to things that you cover, and relevant to burnout in clinicians, actually: we've done the research, we did a big analysis after COVID.

There were so many causes of burnout during COVID, but a huge cause of chronic burnout that was worsened is what we talked about, which is doctors and clinicians not getting to really make eye contact with patients. And what we've shown is that that actually makes practice less meaningful for the clinician, that they are burning out in part because they need to be empathic with patients to find their work meaningful. I mean, I say it like it's a huge discovery.

You're both nodding because you're both great in the clinical realm, so you know this is true. So one of the concerns I have is what's happening to the clinicians when they do other things but are not providers of empathy? And then a bigger concern is what happens to us as a society when empathy becomes something we can just easily outsource?

Does it become like Google Maps, where none of us have a sense of direction? Do we all lose our empathic curiosity about each other? So those are some of the topics that I'm interested in.

NOTE: The transcript and show descriptions have been generated with AI tools. The audio of the show is the official record.