Can AI be the new frontier for mental health support?
In recent weeks, OpenAI faced seven lawsuits alleging that ChatGPT contributed to suicides or mental health breakdowns. To spotlight the controversial relationship between AI and mental health, host Bob Safian is joined on stage at Innovation@Brown Showcase by Brown University’s Ellie Pavlick, director of a new institute dedicated to exploring AI and mental health, and Soraya Darabi of VC firm TMV, an early investor in mental health AI startups. Pavlick and Darabi weigh the pros and cons of applying AI to emotional well-being, from chatbot therapy to AI friends and romantic partners.
About Soraya
- Founder & Managing Partner of TMV, a global early-stage venture capital firm.
- Leads TMV with $200M AUM, investing in 100+ startups, including in the AI and mental health sectors (2025).
- Co-founded Foodspotting, named 'App of the Year' by Apple & Wired; acquired by OpenTable.
- Serves on boards of AI-focused companies: Resultid AI, Tali AI, and Bridge.
- Former digital innovation strategist at The New York Times, pioneering social partnerships.
About Ellie
- Director of ARIA, a $20M NSF-backed institute on AI & mental health at Brown University (2025).
- PhD in Computer & Information Science, University of Pennsylvania (2017).
- Expert in computational models of language semantics and pragmatics.
- Leads multidisciplinary research into safe, effective AI for mental health support.
Transcript:
Can AI be the new frontier for mental health support?
BOB SAFIAN: Hey everyone, Bob here. Today we have a special episode spotlighting the relationship between AI and mental health. Recently, OpenAI has faced several lawsuits alleging that its chatbot contributed to suicides or mental health breakdowns. We’re sharing a conversation that I had a few weeks before the suits hit with two on-the-ground experts: Brown University’s Ellie Pavlick, director of a new institute dedicated to exploring AI and mental health, and Soraya Darabi of VC firm TMV, an early investor in mental health and AI start-ups. Ellie and Soraya talk candidly about the pros and cons of applying AI to emotional well-being, from chatbot therapy to AI friendships and romance. Recorded live at the Innovation@Brown Showcase in Providence, Rhode Island, the conversation includes themes that are controversial and, for some listeners, potentially distressing, so please take care. I’m Bob Safian, and this is Rapid Response.
[THEME MUSIC]
I’m Bob Safian, live at the Innovation@Brown Showcase, and I’m here with Brown University’s Ellie Pavlick, director of a new institute on AI and mental health based at Brown, and Soraya Darabi, lead partner of VC firm TMV, which has $200 million under management and is a backer of mental health and well-being start-ups, among many other things. Ellie, Soraya, thanks for joining us.
ELLIE PAVLICK: Good to be here.
SORAYA DARABI: Nice to see you, Bob.
Where AI & mental health collide
SAFIAN: So among the many fields that have been disrupted by generative AI, mental health and the personal relationships we build with AI agents are among the most fraught. A recent study showed that one of the major uses of ChatGPT is mental health support, which makes a lot of people uneasy. Ellie, I want to start with you and the new institute that you direct, known as ARIA, which stands for, I have to read this to get this right—
PAVLICK: I always have to read it too.
SAFIAN: –AI Research Institute on Interaction for AI Assistants. It’s a consortium of experts from a bunch of universities, backed by $20 million in National Science Foundation funding. So what is the goal of ARIA? What are you hoping it delivers? Why is it here?
PAVLICK: Mental health is something that is very, I would say, I don’t even know if it’s polarizing. I think many people’s first reaction is negative, the concept of AI for mental health. As you can tell from the name, we didn’t actually start as a group that was trying to work on mental health. We were a group of researchers who were interested in the biggest, hardest problems with current AI technologies. What are the hardest things that people are trying to apply AI to that we don’t think the current technology is quite up for? Mental health came up, and it was actually originally taken off our list of things we wanted to work on, because it is so scary to think about how big the risks are if you get it wrong. And then we came back to it exactly because of this. We basically realized that this is happening, people are already using it. There are companies, start-ups, some of them probably doing a great job, some of them not.
The truth is we actually have a hard time even differentiating those right now. And then there are a ton of people just going to chatbots and using them as therapists. And so we’re like, the worst thing that could happen is we don’t actually have good scientific leadership around this. How do we decide what this technology can and can’t do? How do we evaluate these kinds of things? How do we build it safely, in a way that we can trust? There are questions like this, there’s a demand for answers, and the reality is most of them we just can’t answer right now. They depend on an understanding of the AI that we don’t yet have, an understanding of humans and mental health that we don’t yet have, a level of discourse that society isn’t up for. We don’t have the vocabulary, we don’t have the terms. There’s just a lot we can’t do yet to make this happen the right way. So that’s what ARIA is trying to provide: this public-sector, academic kind of voice to help lead this discussion.
SAFIAN: That’s right. Soraya, you’re not waiting for this data to come out, or for the final word from academia or this consortium. You’re already investing in companies that do this. I know you’re an early-stage investor in Slingshot AI, which delivers mental health support via the app Ash. Is Ash the kind of service that Ellie and her group should be wary about? What were you thinking about when you decided to make this investment?
DARABI: Well, actually, I’m not hearing that Ellie’s wary. I think she’s being really pragmatic and realistic. In broad brushstrokes, zooming back and talking about the sobering facts and the scale of this problem: 1 billion out of 8 billion people struggle with some sort of mental health issue. Fewer than 50% of people seek out treatment, and the people who do often find the cost prohibitive. That recent study you cited is probably the one from the Harvard Business Review, which came out in March of this year and studied use cases of ChatGPT. Their analysis showed that the number one, number four, and number seven of the top 10 use cases for foundational models broadly are therapy or mental health related. I mean, we’re talking about something that touches half of the planet. If you’re looking at investing with an ethical lens, there’s no greater TAM, total addressable market, than people who have a mental health disorder of some sort.
We’ve known the Slingshot AI team, which has built the largest foundation model for psychology, for over a decade. We’ve followed their careers. We think exceptionally highly of the advisory board and panel they put together. But what really led us down the rabbit hole of caring deeply enough about mental health and AI to, frankly, start a fund dedicated to it, which we did in December of last year, was going back to the fact that AI therapy is so stigmatized. People hear it and immediately jump to the wrong conclusions. They jump to the hyperbolic examples of suicide. And yes, it’s terrible. There have been incidents of deep co-dependence upon ChatGPT or otherwise, whereby young people in particular are susceptible to very scary things. And yet those salacious headlines don’t represent the vast number of folks who we think will be well served by these technologies.
What guardrails could look like for AI
SAFIAN: I mean, Ellie, these tools are moving and changing so fast. How do you think about what your research can do and what the impact can be? Or is it as much about creating guidelines that will help folks like Soraya and the folks who work at Slingshot to navigate?
PAVLICK: So there are a few lanes, I would say, in which we’re thinking about things. They’ll converge and inform each other, but you can think about them separately. One is things like guidelines: how to design systems and how to evaluate them. We’re really thinking about the development process for AI right now. What is the current process, and what about it is not well suited for mental health? I would say I was initially fundamentally skeptical, and one of the things that’s made me more optimistic about the use of AI in mental health is the people who work most closely on mental health: the physicians, the people who have spent time working on mental health technology. They see real potential here. The people who are most scared are the researchers who do basic science and have never thought about mental health.
I also hear from students all the time. I got an email earlier today from someone who’s like, “I was so excited that you work on this, because me and my friends have all been talking about what a great guide these chatbots have been.” So there is real potential, but part of the problem is that the technology is not being developed for that use case. We’re building these huge generalist systems, and the same system that’s going to be churning out new chemistry compounds is also going to be doing mental health support, and is also going to be helping you cheat on your homework, and is also going to be formatting Excel formulas. What? Maybe we’re in an innovative time, maybe that’s true, but we shouldn’t take for granted that that’s what this looks like. So I think one thing is just not getting into the exploit phase too soon, being like, “Look, we have a recipe for building big AI systems and it’s exciting, and therefore that’s what we’re doing and we’re just going to run with it.”
But actually opening the discussion and seeing: what do we want AI to do? What about our systems might produce that technology, and what won’t? From that perspective, we’re really thinking about these more participatory design kinds of processes, thinking about what we want the technology to look like. Let’s not assume it’s a chatbot. Maybe it is, maybe it’s not. Let’s actually think about this, because the stakes are too high to just say, “Let’s run with the first thing.” I think having companies trying to do stuff is part of that. We can’t just sit in a room and think about it forever. We need to prototype stuff. There’s also a basic science part: like I said, with these big generalist systems, the reality is we know so little about them. My own lab’s work, and a lot of our work, is on understanding how language models work, what’s happening inside. Because we kind of stumbled upon this.
So when people talk about making them safe or placing guardrails, what we’re really doing is guessing and checking. We don’t really know what’s happening, and we do need scientific leadership in conjunction with stuff being deployed. That needs to happen in real time, now. And then we also need to recognize this is not the end of it. This is not the AI, full stop, where we either have to make it work or scrap the whole thing. We’re at the start of things, so there needs to be a discussion about what else AI might look like: what other learning algorithms, what other architectures, what other models, what other interfaces? There are so many options there.
SAFIAN: You said this phrase, “we kind of stumbled upon this.” The number one, four, and seven uses for ChatGPT: it’s not what it was created for, and yet people love it for that.
DARABI: It makes me think about 20 years ago, when everybody was freaking out about the fact that kids were on video games all day, and now, because of that, we have Khan Academy and Duolingo. Fearmongering is good, actually, because it creates a precedent for the guardrails that I think are absolutely necessary to safeguard our children from anything that could be disastrous. But at the same time, if we run in fear, we’re just repeating history, and it’s probably time to just embrace the snowball, which will become an avalanche in mere seconds. AI is going to be omnipresent. Everything that we see and touch will in some way be supercharged by AI. So if we’re not understanding it to our deepest capabilities, we’re actually doing ourselves a great disservice.
PAVLICK: To this point of, yeah, people are drawn to AI for this particular use case: on our team in ARIA we have a lot of computer scientists who build AI systems, but a lot of our team does developmental psychology, core cognitive science, neuroscience. There are questions about the whys and the hows. What are people getting out of this? What need is it filling? I think this is a really important question to be asking soon. I think you’re completely right that fearmongering has a positive role to play. You don’t want to get too caught up in it, and you can point historically to examples where people freaked out and it turned out okay. There are also cases like social media, where maybe people didn’t freak out enough, and I would not say it turned out okay. People can agree to disagree, and there are pluses and minuses, but the point is, these are questions we’re now in a position to start asking.
You can’t do things perfectly, but you can run studies. You can say, “What is the process that’s happening? What is it like when someone’s talking to a chatbot? Is it similar to talking to a human? What is missing there? Is this going to be okay long-term? What about young people who are doing this in core developmental stages? What about somebody who’s in a state of acute psychological distress, as opposed to using it as a general maintenance thing? What about somebody who’s struggling with substance abuse?” These are all different questions, and they’re going to have different answers. Again, on the idea of one LLM that is just one interface for everything: a lot is unknown, but I feel very strongly, and I would bet, that that’s not going to be the final thing we want.
And I think we can do these things carefully if we’re having conversations in the open and if we’re having healthy disagreement: companies trying to move forward, people pushing back, regulators, a lot of people as part of the conversation. We can make this happen the right way at the right pace. We don’t want the discussion to break into what I think AI discourse commonly is now, the hypers and the naysayers, because that prevents any kind of reasonable middle-ground progress. You’re basically with us or you’re against us, and that’s not productive. But I think we’re at a point right now where we can have a really sober conversation and say, “We have an opportunity here, but let’s not move too fast, and let’s not let fear prevent us from doing anything.”
SAFIAN: Soraya, you mentioned kids. But adults are also using these tools. It’s not just kids.
DARABI: Well, in fact, it’s not just adults; it’s predominantly men. And this is super interesting. In addition to Slingshot, we’re also investors in a company called Daylight Health, which gives nurses and folks who assist doctors the ability to offer mental health services, which PCPs, primary care physicians, normally don’t have time to do. And we learned in our research and diligence for that particular investment that the vast majority of men feel too nervous to admit to anyone other than their primary care physician that they’re dealing with symptoms of depression and anxiety. The most beautiful part about AI therapy, if you use it well, or technologies like Daylight Health, is that it provides a certain amount of accessibility and acceptance.
AI friendships and romance pushing social boundaries
SAFIAN: I’m curious what you think about AI support in other forms. I’m thinking about the friendships and even romantic relationships that people have started to develop with AI agents. When OpenAI released GPT-5, there was this outcry because thousands of people lost their AI boyfriends and girlfriends. There are Reddit groups with thousands of people on them about virtual romantic partners. Is that progress? Is that an opportunity, allowing people to have the connection they crave and need, or is it just scary? Again, I know I’m going to the extremes here, but…
DARABI: I mean, maybe. I heard someone in Silicon Valley, someone high profile, say that gun violence is completely out of control in America, and maybe what we need is AI romantic companions. That’s an extreme example of a particular solve for a terrible epidemic, but we can’t immediately write off AI relationships as bad just because they’re not what we’re normally used to. I love the movie Lars and the Real Girl, because that movie addresses the thought of a fake companion so beautifully and so eloquently: we’re all just out there seeking community and companionship, and whatever form it comes in should be socially acceptable.
PAVLICK: I agree. I’m first and foremost a scientist, so there’s not a huge value in us speculating on whether it’s good or bad. There is an instinctive judgment that comes with it, like, “Oh, that’s weird.” But this is, again, the classic reaction to something new and different. Because so much of the discourse around AI treats it as kind of like a human, we think of it as replacing human companionship. But a pretty powerful analogy for what it may be more like right now is journaling. And that’s actually something it would be great if more people did: personal reflection, actually externalizing your thought process. I know some of the people who work on mental health within ARIA have been looking at these kinds of AI-assisted journals. A good therapist will do this, just get you to think through your own life experiences, to reflect, practice gratitude, think about your goals and ambitions, work through something you might be having trouble voicing to other people.
It’s a really valuable process to go through. So we shouldn’t just say right away that the process of somebody playing out a fantasy, working through an imaginary scenario, talking while alone in a room, is a bad thing. It could be a positive thing. If it’s replacing other things, it could be a negative thing. So again, there are questions to ask and research that needs to happen to say: what is happening? What are the risks we should be aware of? And start tracking it, so hopefully we can course correct if it’s going in the wrong direction.
SAFIAN: I wasn’t quite sure how Ellie and Soraya were going to respond to my questions about AI-based relationships. The idea makes so many people uncomfortable, but as they both say, if folks are using it, then we shouldn’t just dismiss it. Still, we do need to understand it. So how do we judge whether AI is helping or hurting our cognitive state? We’ll talk about that after the break. Stay with us.
[AD BREAK]
Before the break, we heard Brown’s Ellie Pavlick and TMV’s Soraya Darabi talk about the rise of AI-based mental health treatment. Now we explore how AI might change us as humans, the challenges in effectively evaluating AI mental health tools, and more. Let’s jump back in.
Can AI have empathy?
Yesterday I was at an event where Siddhartha Mukherjee, the author of The Emperor of All Maladies, was talking. He’s got an AI company called Manas AI, and he was saying how customer service representatives are now becoming AI agents, and that once you’re done with your customer service request, that agent may say to you, “So what else is going on? How else are you doing?” He’s saying it’s going to be like this: your customer service agent and your therapist are going to come together. And I don’t know, when you hear that, Ellie’s laughing. I don’t know whether Soraya’s thinking, “Is that a market I can…” But are we going to need to train all of the agents that are everywhere to be more understanding about the mental health impacts, because it’s so alluring for us to have that conversation?
DARABI: Oftentimes when people talk about AI therapy, some of the big concerns include: AI can’t be empathetic because it doesn’t have real lived experiences, and, no, I’m not interested in these add-on experiences, in making everyone a bot or everything a bot. And yet AI will figure out empathy soon enough, and some argue it already has, because the data of speaking to millions of people about millions of problems will ultimately result in some version of a lived experience. And through that we can create this inference layer of, hopefully, ethics and empathy that may be applied to other things. So maybe not the customer experience guy, but maybe when you go to the grocery store the grocers will be trained on how to speak to you more cordially, and that might be rooted in data, and, yeah, perhaps that’s really cool.
SAFIAN: Is there a way you research this differently? I mean, you’re looking at the cognitive reactions we’re having. Do you research that differently because it’s an AI? Or is it that we’re humans, AI is not changing the way our brains work, so it’s the same?
PAVLICK: First I want to touch on this concept of empathy. I think this is a really good example of what I mean about there being questions we want to answer right now, and the levels at which we don’t have the scientific understanding to answer them. We take for granted that humans have empathy and lived experience and models don’t. The truth is we don’t have really good definitions of these things. What do we really mean by a word like understand or empathize? When we say it’s important for humans to get empathy from one another, be more specific: what do they actually need? Some of these things we can imagine approximating with AI, some of them we can’t, and which is the one that has the positive effect on that person’s life? This is a really good opportunity to not pretend we know more than we do.
Think about something like CBT as a therapeutic device. There have been successful trials of AI-assisted bots that are basically just reminding you to think through things in a certain way, to kind of reprogram your brain. This doesn’t depend on a deep psychoanalytic reading like, “Oh yeah, no, your mom was totally the one in the wrong there.” That’s not an important part, and it can be counterproductive. In other cases, you really feel like you can tell in a human interaction when the person’s faking it versus not, and you sometimes filter out the friendships where you’re like, “Oh, this person’s saying things but they don’t mean them.” So we need to figure out how important these different things are, how much of this is part of how we’re raised and kind of innate, and how much is societally programmed and will drift over time as AI starts to fill a role and we start looking for other types of things. So much is on the table, so much is unknown, that I think it’s just impossible to anticipate what direction this is going to go.
On your last question, that AI will not change how we think: I think that’s just definitely wrong. AI is absolutely going to change how we think. If nothing else, just knowing that AI is out there as one of the things we might be interacting with will change our expectations. That’s one of the most fascinating questions, whether good or bad, looking forward: how are we as individuals and as a society going to change as a result of having this in the mix?
DARABI: What I’m also hearing Ellie describe is something that we’re seeing play out within 911 call centers. AI has already basically infiltrated call centers. When somebody dials 911, you might not know this, but initially you’re being screened: are you experiencing a true emergency or a non-emergency? And now with the advent of AI, including companies I know well, like Hyper AI, they’re able to figure that out within seconds, so that the true emergency calls are routed to real humans and the non-emergency calls are redirected. My cat’s stuck in a tree, and a local firefighter might be able to go help the cat out of the tree safely, so that the person who is held up at gunpoint can actually get the kind of emergency response they need. I think there are perfect parallels with SMIs, serious mental illnesses, and, for instance, your example of CBT, cognitive behavioral therapy, whereby people who just need, on repeat, the training to work through their day-to-day anxiety can get that access at scale, so that we’re redirecting people with SMIs to clinicians who can actually provide that high-touch, empathetic, bespoke concierge service.
Measuring success in AI-driven mental health
SAFIAN: There’s so much emotion around this topic. How do we know whether we’re making progress?
PAVLICK: By far, the hardest problem in AI is evaluation. There might be a small number of things that are easy to evaluate, but all of the things that are really getting people excited about how AI is going to disrupt these sectors depend on having good evaluations for things that have thus far not necessarily even been quantitative sciences. Mental health is a really good example. Education is another really good example. Or having AI help with things like managerial work, this kind of classic white-collar stuff: leading teams, coming up with ideas, doing science. Anyone can pick their field and think about how they’re evaluated in their own job, or how their children are evaluated in school, and no one is satisfied with these evaluations. The truth is we don’t have good quantitative metrics for most things.
We have a lot of proxies and correlates, and we know that they make mistakes. AI really depends on very specific success metrics, which is why we’re seeing progress in code. AI is very good at writing code because it’s pretty easy to evaluate whether the code was correct. In other cases, we’re just assuming that because it worked for code, and because people who write code think that’s the top of the hierarchy, “If it can do code, surely it can do everything else.” Joke’s on them: I actually think that was the easiest thing. This other stuff is by definition harder. That’s why we haven’t figured out how to codify or quantify it yet. So yeah, for mental health, we’re going to have to define success. It’s not going to be: we labeled a data set, this was the correct response, this was the incorrect response, and we have a leaderboard and see who gets to the top and do reinforcement learning against it.
It’s not going to be that. At ARIA, a big focus is on this participatory AI, so it’s definitely not computer scientists and AI researchers and companies who get to define this and then crank on the metric. It’s going to be a large, messy conversation. It needs to include people who think AI is absolutely the wrong way to approach mental health, people who are super gung-ho, and everyone in between. Everyone needs to be a little bit involved so that we come up with: what does success look like? And we really don’t know. Until we’ve had this really large conversation about what we want this to look like, we can’t even speculate on what it’s going to be, except to say that I will speculate on one thing: it’s not going to look like what current AI evaluation looks like, which is a leaderboard.
Navigating optimism and caution with AI
SAFIAN: When you think about AI’s impact on mental health, and its potential impact, excitement, optimism, and potential on one side, risk, fear, and destructiveness on the other, where on that spectrum do you feel we fall?
DARABI: I’m always a cautious optimist when it comes to technology. I don’t think you could do what we do for a living and not be, so I am somewhere just above the middle zone. And also, the coming wave has crashed onto shore, and if we look at everything through this dystopian lens, which is quite easy to do, I feel we’re missing the plot. And the plot is: it’s here, it’s happening. People are using foundation models, Claude, ChatGPT, to solve everything from “I’m nervous to go to work today” to “I’m a doctor who needs an AI scribe to help me keep up with my patient load.” Industry is capitalizing on it, okay, and society should also work with folks like Ellie, who have the nuanced academic and research perspective on how we can tread water as carefully and safely as possible.
PAVLICK: I would say that right now I’m optimistic, and it’s because I think we’re still in this narrow window where we get to have a choice to make this happen the right way. I can imagine myself quickly losing that optimism if I feel like that window closes, and the main narrative I hear about AI is how fast things are moving. So I think right now it depends on the technology, and it depends on people being sober-minded, thinking about this without being histrionic in a positive or negative direction. As a whole society, this is all of our problem. I think we have this window where we can get it right.
SAFIAN: I want to thank both of you, Ellie and Soraya, for being here and doing this. Thanks so much.
I went into my conversation with Ellie and Soraya thinking that they might be at odds about AI’s role in mental health. What I find telling is how aligned they are. We can’t ignore that people are using AI for emotional support, yet we also can’t ignore that most general AI tools were not built for this specific use. As Ellie notes, just because foundation models work well for coding, that doesn’t mean they’re ideal for everything else. Here’s hoping we take the time as a society to examine our assumptions about this new technology, to take it beyond the realm of engineers and entrepreneurs, but also to use engineers and entrepreneurs in that exploration. These are wide-open times, and it’s only healthy to rely on each other to make it through. I’m Bob Safian, thanks for listening.
Episode Takeaways
- Bob Safian spoke with Brown University’s Ellie Pavlick and TMV’s Soraya Darabi about the complex and evolving relationship between AI and mental health.
- Pavlick described how ARIA, a new research institute, is focused on creating scientific leadership and safer guidelines as AI increasingly intersects with mental health support.
- Darabi highlighted the enormous unmet need for accessible mental health care and emphasized that AI-driven services, despite controversy, play a vital and growing role.
- Both guests discussed the need for open, participatory research to better understand the effects of AI friendships and romance, stressing that social and psychological impacts remain largely unknown.
- The conversation closed with cautious optimism: while the window is open to shape AI’s role responsibly, ongoing dialogue and careful evaluation are crucial to future well-being.