It’s always tough for parents to navigate challenges with their kids, especially amid a widespread shortage of mental health workers. Sam Gardner, a seasoned tech leader and mom of two, has faced parenting struggles herself. She found therapy to be so helpful that she built an AI-powered app, Happypillar, to democratize therapeutic tools. Sam talks about how the platform analyzes child-caregiver interactions, makes recommendations guided by professionals, and seeks to improve outcomes for families everywhere.
About Sam
- Co-founder & CEO of Happypillar, bringing AI mental healthcare to families
- Built the world's largest proprietary parent-child language model
- Collected 40,000+ minutes of HIPAA-compliant dyadic conversation data
- 10+ years leading teams across tech and startups, incl. Sauce Labs and Rasa
- Northwestern alum; partnered with licensed therapists on evidence-based care
Table of Contents:
- Why early parenting struggles can become a mental health wake-up call
- Turning a personal therapy breakthrough into an AI startup
- How parent-child interaction therapy shapes the product
- What a five-minute guided play session looks like
- Why personalization and cultural sensitivity matter in parenting AI
- The feedback loop that helps parents become more present
- How the technology works behind the scenes
- Building trust with privacy, safety, and responsible AI
- Everyday AI habits and hopes for what comes next
- Episode Takeaways
Transcript:
How AI can help parents, with Happypillar’s Sam Gardner
SAM GARDNER: So it was December of 2021. I was really stressed out. So Winston is my firstborn. He was four then, and he was losing it all the time. Every food I offered was wrong. Every toy was frustrating. Everything made him mad. His whining would turn to tantrums and I wasn’t handling it very well at all either.
RANA EL KALIOUBY: That’s Sam Altman Gardner, the co-founder and CEO of Happypillar – an AI-based parenting app that’s focused on the mental health of young children.
GARDNER: I was erupting right back at him. We’re screaming at each other. It’s like we’re both toddlers. And every night I would think to myself, wow, I really messed this kid up. He used to be so good and sweet and I’m a bad mom. I’ve ruined him.
EL KALIOUBY: The experience that Sam is sharing is something that all parents can relate to. As a mom myself, helping your child work through difficult behaviors and mental health challenges can be quite daunting.
GARDNER: But luckily, my pediatrician recommended therapy even for a kid that young — four — and it was really effective.
EL KALIOUBY: But most parents aren’t so lucky. In the United States, specialized mental health services for children can mean mountains of medical bills. There’s also a critical shortage of mental health professionals and that can mean months of delayed treatment.
GARDNER: I also noticed that the way a therapist analyzed how I was speaking to my child was really similar to how language AI analyzes how humans speak.
EL KALIOUBY: Exactly. The AI can analyze human speech similar to how a human therapist would. So what if AI could be a mental health resource for parents?
Could this allow parents access to affordable mental healthcare right at their fingertips? Today, Sam Altman Gardner and I are talking about how AI can become a bridge in our health care system, and how this cutting edge technology can democratize access to mental health services.
I’m Rana el Kaliouby and this — is Pioneers of AI, a podcast taking you behind the scenes of the AI revolution.
[THEME MUSIC]
I’ve known Sam for years, and she was early in seeing the power of AI to transform a key part of life for many people: parenting.
I believed in her work so much I became an investor in Happy Pillar. The name is a reference to caterpillars, by the way – a favorite motif of childhood, and of course the precursor to butterflies!
And here’s a fun fact about Sam – her given name is actually Sam Altman, same as the founder of OpenAI, whose name you see in so many headlines today! Ok, let’s get going with her.
Hi Sam. Thanks so much for joining us today.
GARDNER: Thanks for having me.
EL KALIOUBY: So I have to start with this question ’cause I bet it’s on everybody’s mind. What does it feel like to have your name be Sam Altman.
GARDNER: It is the weirdest sensation almost all the time. Every time something happens with the other Sam Altman, I’ll say in my mind, I’m the original Sam Altman, even though I think he is definitely older than I am.
Why early parenting struggles can become a mental health wake-up call
EL KALIOUBY: Sam has two kids who are six and three. I also have two kids, at a very different stage of their lives. They’re now 21 and 15.
But honestly, it feels just like yesterday that I was in Sam’s shoes, managing toddlers and early elementary school. Her story about feeling like an inadequate parent brought back some memories for me.
I remember the growing pains that come with those years. In fact there’s one story that still haunts me today. Jana, my oldest, was only one at the time. I told Sam about what happened next.
We were in Cambridge during my PhD, and it was just she and I, and I was trying to get her to sleep in her crib.
And I read somewhere that you should really just plop them in the crib and let them cry it out every night. So I did that and I would put her in and she’d cry and cry and cry. She wouldn’t stop crying and I would give in and I’d take her out, put her in my bed. And then one day I was like, okay, I’m going to tough it out.
I’m just going to let her cry. And I sat there and she cried maybe for 20 minutes or something. And then she threw up. That was it. I was like, we’re not trying this again. And she continued to sleep in my bed until she turned five.
GARDNER: Makes sense to me. I would go that direction too if my kid threw up after crying.
EL KALIOUBY: This is what Sam is like. She’s kind, empathetic, and a fierce advocate for parents.
After seeing the therapist, her relationship with her kid got so much better! And she wanted other parents to experience that too. She saw an opportunity!
GARDNER: I was working at a conversational AI company at the time. And I turned to my friend, Maddie Mantha, an AI engineer and product and marketing leader at that job. And I said, can we rebuild this kind of therapist observation and coaching protocol using AI? Because it really worked for me.
And she said, yeah, we absolutely can do that, and that is how Happy Pillar started. Maddie is my co-founder and we founded Happy Pillar to make evidence-based mental health treatment accessible to everyone.
Turning a personal therapy breakthrough into an AI startup
EL KALIOUBY: That is amazing. One of the things that I’m super passionate about is this idea of leveraging AI to help with mental health. When you walk into a doctor’s office today, they don’t say, oh, Sam, what’s your blood pressure or temperature? They just measure it.
But in mental health, it’s still a questionnaire. They say, oh, scale from one to 10, how do you rate your kids’ anxiety level or depression level or whatever. And AI provides an opportunity to really change that. And I guess that’s what you saw, right?
GARDNER: Definitely. And I think we also saw that a clinician can often give patient feedback — either in a physical or mental setting, doing physical therapy, seeing a doctor — they can really observe you and give clinical feedback based on their training. But there are only so many clinicians in the world and only so much time for them to see patients.
If AI can spread the expertise of clinicians to a broader community, then that community of people can get the kind of clinical expertise and clinical feedback that sometimes only the wealthiest and most privileged can get. And that is really a special thing about AI.
How parent-child interaction therapy shapes the product
EL KALIOUBY: Yeah. I love this idea of democratizing access to mental health and mental health care and support. It’s one of the reasons — full disclosure — I’m an early investor in Happy Pillar. I love the mission you guys are on and I just think it’s super impactful. It’s an amazing use case of AI. So take us to the core science or the core therapy around which Happy Pillar is built. When you went to that human therapist, what was that therapy session like?
GARDNER: So I saw a therapist who had a handful of certifications in various modalities, but the ones that were most interesting for my personal case were PCIT, which stands for Parent Child Interaction Therapy. In that therapy, a therapist trains and coaches a parent on how to do a daily therapeutic session with their kid, where they get down on the floor and play together in that session every day.
The parent has some verbal goals, some things that they are trained to say to their kid, or some ways of speaking that are really helpful and effective in repairing relationships and building resilience. As a modality of therapy, PCIT has over 40 years of evidence behind it.
So if you’re seeing a human therapist, you’re trained on how to do that five-minute interaction. You go home and you do it every day for a week. And then once a week, the therapist watches you do it on Zoom and gives you really personal pointed feedback on how you’re doing it, how your child is developing, how you’re changing, things like that.
EL KALIOUBY: PCIT often brings in elements of play therapy. So a session can look as simple as, say, sitting down with your child as they draw a picture. A clinician will then coach the parent on how they can affirm their child through play.
Sam took these therapeutic principles and brought them to an app.
GARDNER: With Happy Pillar, we teach parents the same skills that you would learn in PCIT therapy or in forms of CBT or parent child play therapy. Parents learn that from short lessons and videos that feel kind of like TikTok plus Duolingo. And then once a day, just like in real human therapy, you sit down with your child and play and talk to them.
And Happy Pillar observes that parent child interaction and gives exactly the same kind of feedback a therapist would give. In fact, all of the feedback is written and designed by licensed clinical therapists. What’s extra great about using AI to deliver that is feedback that in a human session you’d only get once a week, you can get every single day and immediately.
And we’re hoping that shows a lot faster outcomes for parents and kids. We’re already seeing it with our users now. And we’re starting to design studies to get our own clinical evidence basis on that.
What a five-minute guided play session looks like
EL KALIOUBY: I love that. So set the scene for us — say a parent and a child, what kind of situation would they need to be in to find themselves looking for your app? And then walk us through a play session or a scenario where they’re using Happy Pillar.
GARDNER: So there are a lot of goals that you can work on in Happy Pillar, and when you start the app, you fill out what feels a little bit like an intake survey if you were going into therapy, and you outline some of the things you might want to work on. So some of them are things that might be causing parents some stress, like temper tantrums, nightmares, low mood, anxiety. Some might be situational — a move, a new baby — things that just cause stress to a child and you want to prepare for them, maybe going to kindergarten.
And some are just beneficial things that you want to strengthen, like fun new ways of playing with your kid, teaching them mindfulness and resilience, building connection, learning developmental parenting and evidence-based techniques. So you can choose all sorts of goals, and we have custom programs based on those goals, and custom lessons. So you get yourself set up, and then on your own as a parent, you do our short lessons, they’re between three and five minutes a day, watch some videos, answer some questions, really learn some of the things that a therapist would teach you.
And then once a day, you sit down on the floor with your child with some toys. And you’re taught to play with them and to really let them lead the conversation. And there are some verbal ways that we help you lead the conversation. Some of them are suggesting that the parent narrates what their kid is doing.
And we recommend that parents give specific praise or celebrate the behavior they’re seeing that they really like. Like, I love how quietly you’re sitting and I love how gently you’re playing with your cars. Thank you for sharing. And we record that five-minute play session where the parents are encouraged to say some specific things, and then immediately we’re able to give feedback on where parents are with those skills, how the play session went. We also collect data every week from parents based on evidence-based psychometric assessments so that we can show parents how things are moving, how things are tracking.
EL KALIOUBY: What’s an example of that?
GARDNER: Yeah, so there are a handful of psychometric assessments out there that are evidence-based to measure progress in mental health care.
EL KALIOUBY: How many tantrums happened in the last week or something? Okay.
GARDNER: Exactly. So measurable things, and we customize what you’re reporting back based on what you’re wanting to work on. So you can see progress on what you’re wanting to work on. And then we use another form of AI — machine learning — to design your therapeutic program based on your speed and progress and what you want to work on. So it is not just a video lesson like you could get from an Instagram therapist, and it’s not just a book that is one size fits all. It’s really customized and individualized.
That’s somewhere I think AI is really going to help expand healthcare and science — that we can really individualize things at an efficient and high-scaling capacity.
Why personalization and cultural sensitivity matter in parenting AI
EL KALIOUBY: There are so many factors we need to consider when making individualized AI-powered health care tools. Of course these tools need to be research-backed and evidence-based.
But we also need to consider culture. Happy Pillar coaches parents on how to interact with their kids. It evaluates how we praise them … how we build up their confidence.
But that kind of behavior can look really different depending on where you’re coming from.
For example, in my family straight A’s are expected. And there are certainly no participation trophies.
GARDNER: So something that Happy Pillar teaches and coaches parents on doing is giving praise for the kinds of behaviors that you want your kid to be doing. And we know from decades of research in PCIT and other modalities that praise is a constructive tool, and that when used in a specific and focused way can really affect the outcomes for your kid.
But we’ve also learned that with different generations, different cultures, different accents, different idioms, praise can look really different for different people. And the way we talk about praise and the way we coach parents on doing praise needs to be done in different ways for different folks. And for us it’s important to do two things: to make sure that the coaching and training in Happy Pillar is culturally informed and sensitive and aware of how we all think about praise, and to understand the different ways in which a certain sentence could be interpreted as praise in a different culture — and figuring out the best way to do that in a way that gets the best clinical outcomes for kids.
The feedback loop that helps parents become more present
EL KALIOUBY: Yeah, absolutely. So what kind of insights would a parent get after the session? And this is not happening in real time, right? It’s not coaching the parent in real time. It’s waiting until after the five minutes are done.
GARDNER: It’s about 10 seconds after the five minutes are done and the parent can go review that on their own. So while you’re playing with your kid, your phone is tucked away. The screen is off. It’s listening, but you’re not interacting with the phone. You’re really just interacting with your kid and focusing on them.
The app will give you the clinical feedback later. The kind of feedback you get is a lot of things. So definitely some encouragement. We pull out a top moment — a moment where we heard connection between you and the child.
EL KALIOUBY: Ooh, I love that.
GARDNER: And then we have a little gallery of top moments, so you can really see all the great connection you’re having. And then some of the skills that I mentioned — we count how many times you were able to do those. So how many times did you narrate what they were doing? How many times did you give them a labeled praise or a celebration? One of the things that you’re taught to do in therapy is to echo back what your child says.
So if they say, I’m building a tower, you say, you are building a tower. It doesn’t have to be an exact echo, but you’re trying to reflect back what they’re saying. It really builds their confidence and it encourages them to expand more. And there’s a clinical saturation point at which it really has an effect on their future mood and anxiety and behavior. And we make sure you’re getting really close to that number so that you can have beneficial outcomes.
EL KALIOUBY: Happy Pillar didn’t exist when my kids were younger, but I’m reminded of a conversation I had with my son, Adam, a couple of months ago, because I think a lot of parents, especially today, are distracted, right? So I was in a conversation with him and he was trying to remind me of something and I just couldn’t remember it.
And I was like, I must have dementia, right? And he was like, no, mom, you just never pay attention enough to any one thing. You’re always half present. And it was, to be honest, both a sad moment and a powerful moment. And it just made me realize that in a lot of our conversations I might be on my phone. I might be thinking about the next meeting that’s coming up. And I kind of wish that I had something that just nudges me as a parent to be more present.
GARDNER: Although, I do want to point out that if he is comfortable enough saying that to you in that way, that shows that you’ve really done a good job, that he respects you, understands you, connects with you. So pat yourself on the back for that.
EL KALIOUBY: Okay. I appreciate the support.
GARDNER: But I think you’re absolutely right. So the therapy itself taught me presence and focus and mindfulness. And also it subconsciously taught me different ways of communicating, not just with my child, but with my partner, my other children, my co-founder, my colleagues. So some of it is subconscious learning, and then some of it is taking the time to be present and focus and connect with them.
EL KALIOUBY: But you’re focused on the early childhood target audience, correct?
GARDNER: Right now, definitely, especially because there are very few evidence-based, self-guided options for that age group out there. This therapy that I did exists, but there are only so many clinicians. It is not cheap and it’s kind of hard to access. But what we’ve built is a system that can analyze any two-person conversation, and we really can apply this to all sorts of mental health modalities for other ages, specifically the ages that are not well served in terms of technological advancements.
What’s really extra important about focusing on the early childhood age group first is that we’re really seeding mental health intervention and prevention and comfort in young kids so that as they grow older they believe that mental health care is just something we all do, like brushing our teeth.
And ideally we’ll be preventing a lot of future crises or mental health challenges. And also this is a population of children who were born in and around a global pandemic. So they need some support, their parents need some support. So that’s a big reason we focus on that age group right now.
How the technology works behind the scenes
EL KALIOUBY: There’s a real shortage of mental health care professionals which means that families are not getting the support they need. And this is where AI can help!
But putting your trust in AI when it comes to your child’s health can be … tricky. So how do platforms like Happy Pillar earn and keep the confidence of families?
That’s in a minute.
So you’ve kind of walked us through what the user experience with Happy Pillar looks like. Let’s go behind the scenes and talk about the AI that’s powering all of this. What kind of AI and machine learning technologies are you using for Happy Pillar?
GARDNER: So it starts with us recording a parent child interaction. And then we do something called diarize the speakers. That means identify who’s the parent and who’s the child. And after that, we convert the speech to text so that it can be analyzed, while retaining some of the characteristics of the recorded speech, like uptalk or tone.
EL KALIOUBY: So it’s not just the words. You’re not just analyzing the words. You’re also analyzing the vocal intonations and other aspects of the speech characteristics.
GARDNER: That’s absolutely right. So one of the things parents are asked to avoid, just during these happy times when the child is in control, is questions, so that the child is leading the whole conversation. Questions are great tools for 23 hours and 55 minutes of the day. But in these five minutes, the child’s in control.
So we ask parents to avoid questions because it’s clinically effective. But you could say “you’re building a car?” with a rising tone, and that is perceived by a child as a question, and often interrupts their thought pattern and what they might’ve said next. So if we hear a question based on tone, we’ll mark that as a question and coach parents on how to say things as statements or as echoes or reflections.
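To make the tone idea concrete, here is a minimal sketch of how rising terminal pitch can flag a statement as a question. This is my own illustration, not Happy Pillar's actual algorithm: it assumes a pitch tracker upstream has already produced a per-frame fundamental-frequency (f0) contour, and it simply checks whether the end of the utterance sits noticeably above the rest.

```python
from statistics import median

def is_question_by_tone(f0_hz, tail_frac=0.2, rise_ratio=1.15):
    """Heuristic: flag an utterance as a question if its pitch rises at the end.

    f0_hz: per-frame fundamental-frequency estimates in Hz (None for unvoiced
    frames). A rising terminal contour, with the tail sitting well above the
    body of the utterance, is a common acoustic cue of a spoken question.
    The thresholds here are illustrative, not tuned values.
    """
    f0 = [f for f in f0_hz if f is not None]   # keep voiced frames only
    if len(f0) < 10:
        return False                           # too little speech to judge
    split = int(len(f0) * (1 - tail_frac))
    body, tail = f0[:split], f0[split:]
    return median(tail) > rise_ratio * median(body)

# Synthetic contours: a flat statement vs. one with a terminal pitch rise
statement = [200.0] * 100                                  # steady ~200 Hz
question = [200.0] * 80 + [200.0 + 5.0 * i for i in range(20)]  # rises to ~295 Hz
```

A production system would also look at utterance-final lengthening and intensity, but the comparison above captures the core cue Sam describes: the same words, spoken with a rise, read as a question.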
EL KALIOUBY: So cool. Are there any phrases or things that happen that are particularly challenging for the AI to understand?
GARDNER: I think one of the bigger challenges at the beginning was actually the diarization, which I just mentioned. Often, if it’s the mom with her own child, a female adult voice and her child’s voice can sound really similar. And Maddie and the engineering team had to figure out how to identify who’s the child and who’s the adult, because the child can say whatever they want during happy time.
We don’t analyze what they’re saying. We throw away any recording of what the child is saying. So we really need to be able to identify speakers accurately. At first that was pretty hard, unless it was a dad. Dads have different voices than their kids, but moms don’t always. So Maddie’s team discovered that adult and child speech each have different formant frequencies.
So they are using AI to identify the formant frequency of each speaker and figure out which one would typically belong to a kid and which one would belong to an adult.
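As a hedged sketch of the formant idea (my own toy illustration, not Happy Pillar's code): children's shorter vocal tracts shift formant frequencies upward, so once diarization has separated the two voices, comparing each speaker's typical first-formant (F1) estimates can suggest which one is the child. The sketch assumes F1 values have already been estimated upstream, for example via LPC analysis.

```python
from statistics import median

def label_child_speaker(f1_a, f1_b):
    """Guess which of two diarized speakers is the child from F1 estimates (Hz).

    A shorter vocal tract raises formant frequencies, so the speaker whose
    typical first formant is higher is more likely the child. Returns 'a'
    or 'b'. A real system would combine several formants with pitch and a
    trained classifier; this is only the core comparison.
    """
    return "a" if median(f1_a) > median(f1_b) else "b"

# Illustrative values: adult F1 clusters near ~500 Hz, child F1 near ~800 Hz
adult_f1 = [480.0, 510.0, 495.0, 530.0, 500.0]
child_f1 = [790.0, 820.0, 805.0, 780.0, 810.0]
```

Using medians rather than means keeps the comparison robust to the occasional bad formant estimate, which is common in noisy home recordings.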
EL KALIOUBY: Well, that actually brings us to a really important question. Anytime we’re talking about AI, the fuel of AI is, of course, the data, the training data that you’re using to train these models or these algorithms. Do you need any human labelers or annotators?
And just for our listeners, that’s when the data needs to be labeled by actual human experts who can say, okay, this was a great interaction, or this was maybe a really lousy interaction, or this was a question, this was not. Do you need a team of human annotators at all?
GARDNER: Our data is annotated by licensed clinical therapists. So not by engineers or coders — it is annotated by clinicians. That is something we do to ensure that our model and our data set is being treated with the most clinical efficacy and that it’s the safest.
EL KALIOUBY: I want to pause on that, by the way, because that’s a really important decision that you’ve made as an entrepreneur and as a founding team. You didn’t put this data on Amazon Mechanical Turk, right — MTurk — which is this platform on Amazon where you can hire labelers from all around the world very cost effectively, but they’re not experts, they have no idea what they’re annotating.
And you went almost the opposite extreme, where you’re hiring these licensed clinicians to annotate the data.
GARDNER: We also want to make sure that we are delivering a therapeutic intervention that is actually effective. So maybe we could have done things faster or more cheaply from a business perspective by going with something like MTurk, but there’s no guarantee that that would have beneficial outcomes for kids and parents.
EL KALIOUBY: I also think data can be a true competitive moat when it comes to AI companies. And I believe you’re amassing the first ever dyadic data set between child and parent. Tell us more about this data. What do you plan to do with it? Have you mined it?
GARDNER: So, we started Happy Pillar a little over two years ago with just an idea, and there was no data set out there at the time that would have satisfied what we were needing. So, we knew we’d have to build it from scratch. So, all of the data that we collect from parent child conversations is stored like patient medical records.
We use HIPAA compliant and SOC2 compliant servers. We de-identify and scrub all data so that it’s just conversational and not identifiable. And then over the past two years, we’ve collected over 40,000 minutes of dyadic parent child conversation data and all of the nuances therein.
And you’re right. We have the largest parent child data set and language model in the world. All of it is proprietary, and when compared to a licensed clinical therapist performing a manualized therapy, we exceed the human accuracy that they would need to get clinically licensed.
EL KALIOUBY: Translation … If Happy Pillar could go through the process of getting clinically licensed – it would pass with flying colors.
GARDNER: So, we have a lot of potential use cases for this data set. It grows every day. It can be used in pediatric research. It can be used in developing new treatment tools. So the fact that our startup is collecting this kind of data hopefully empowers us to stick around for a long time and provide evidence-based mental health care to more and more people.
Building trust with privacy, safety, and responsible AI
EL KALIOUBY: Yeah. One of the themes that we are exploring in this podcast is the theme of responsible AI. So I wanted to ask you, what does that mean to you? And how are you practicing it — at the company, but also in terms of how you’re building Happy Pillar.
GARDNER: That is a great question. So I agree with you. I think responsible AI as a concept is really important to uphold, especially when you’re building something new and innovating. We try to make sure that our team is extremely diverse, from different backgrounds, different cultures, with as many women as possible, because AI and technology are often over-indexed toward men. And the same goes for annotating and sourcing our data set: making sure that it’s coming from all sorts of populations.
EL KALIOUBY: I want to pause you on that because that’s another important theme within Happy Pillar — building responsibly — and it’s mitigating bias, right? If you accidentally built bias into the data you’re using, for example, if your families are not diverse, or the problem set is not diverse, I think that could potentially hurt the app. So can you talk about that some more?
GARDNER: We try to combat bias with data. So we are trying to make sure that our data set is made up of individual data points in equal proportions. The training data and the labelers need to come from diverse sources as well. We have 40,000 minutes of conversation data with different voices, different accents, different cultures, different ages, different genders, and the same with our clinicians. As we expand to other languages, we’re making sure that the lessons and the training are culturally sensitive and competent and are analyzing other languages using the nuances of those languages.
So yeah, keeping data and diversity at the forefront of our mind and then keeping humans in the loop at all times.
EL KALIOUBY: It sounds like you’re recording these conversations. So how do you ensure that this data is kept safe?
GARDNER: So, a lot of things. As mentioned, we are de-identifying all data before it’s stored anywhere. This is not like an Alexa. It’s not listening at all times. You press start when you are starting your session. It only listens when you tell it you want it to, and it’s not listening at your most vulnerable times either.
We’re also experimenting with the idea of on-device inference, meaning that the data would never leave your phone.
There’s a trade-off there between the accuracy of the product if you have a small data set that’s just on your phone, versus if it’s de-identified in a cloud. So we’re figuring out the safety versus efficacy balance there.
EL KALIOUBY: I think that is a question that a lot of AI companies and AI startups grapple with. On the one hand, you can have these models run on the edge or on your device. And so your data doesn’t leave the device, which is great from a privacy perspective, but these models tend to be smaller because they’re running on a phone and so not as accurate.
On the other hand, if you send the data to the cloud, then you get access to amazing compute and cloud resources. And these models tend to be gigantic and a tiny bit slower too. But the accuracy is there, and that’s kind of the trade-off that you are talking about. I think we had to grapple with that at Affectiva as well.
GARDNER: It’s a constant learning experience. So we’re constantly thinking about it and figuring out what the best version of the trade-off is.
EL KALIOUBY: One of the other questions we’re exploring on this podcast is whether AI will augment or replace humans. What is your framework for Happy Pillar? Are you trying to replace therapists?
GARDNER: So definitely not trying to replace therapists. I think we have always seen Happy Pillar as something that augments treatment and the possibility for early intervention or prevention in mental healthcare. And also we’ve often compared Happy Pillar to something like a public city bus — if face-to-face therapy is a Cadillac. And right now therapy is the Cadillac, but there’s no Kia, and there’s no public bus, and there’s no light rail — there’s really nothing else other than the gold standard best option out there. And there are far more people in the world who need to get from point A to point B than there are Cadillacs right now. So we are trying to figure out how do we provide the transportation system or therapeutic outcomes to the folks who don’t have access to the highest level of one-on-one high-acuity care.
EL KALIOUBY: We’re going to take a short break.
[AD BREAK]
EL KALIOUBY: When we come back, I hear probably the best ChatGPT hack for parents.
Stay with us.
[AD BREAK]
Everyday AI habits and hopes for what comes next
EL KALIOUBY: How do you use AI every day?
GARDNER: So I love AI, obviously. I love having an AI company. I worked at an AI company before this. My name’s Sam Altman. I can’t not love AI.
I definitely use ChatGPT, Claude, all of these large language models every day for suggestions of how to organize my thoughts, for recipe ideas.
My favorite one I did recently was with two families, mine and another family. Between us we had five kids at two different schools. We had swimming lessons. We had two cars. We had four parents. And we needed to figure out who gets which pickup, and there were different pickup times. There was a lot of complexity. And I told ChatGPT, here are all the details. Swim lessons five days a week. There needs to be one parent from each family and one car at each school. How do we do this? It outlined the whole plan, and my brainpower was saved to work with my team on building Happy Pillar and to bond with my children. That was maybe 30 minutes to an hour of my brainpower that I didn’t have to use on logistics.
And then obviously we all use AI in ways we’re not thinking about. So Netflix recommendations, spell check. I feel strongly that we couldn’t live without AI and I’m excited about the places it’s going to go — all the Sam Altmans of the world at the helm.
EL KALIOUBY: Yeah. The AI organizer with all the parents — that’s gotta be the best example of a ChatGPT use case that I’ve heard. I love it. What excites you the most about AI and what scares you the most?
GARDNER: I think what excites me the most is the speed to invention that we’re getting to see. I’ve learned about so many cool AI innovations since starting this company, including diagnosing breast cancer faster.
I think what I’m scared about is probably what everyone is scared about, which is there’s a vulnerable population in this world — people who don’t have the literacy to know when things are real or fake. AI is an incredible tool in the hands of the ethical and responsible, and it could be a scary tool in the hands of bad actors.
That definitely scares me a little bit, but I also know that because it’s so powerful, there are enough of us out there doing good stuff that we’re going to learn how to combat the bad stuff and we’re going to figure out how to keep everyone safe and healthy and thriving using the powers of AI.
EL KALIOUBY: Last question. If you could have AI do anything for you, what would you have it do?
GARDNER: 100 percent organize and streamline my house, go through all of the stuff I have accumulated, figure out what I need, where it needs to go, and then print me out a nice searchable list of what I own and when I might need it and where it is stored.
EL KALIOUBY: Oh, I envisioned this robot — what was the name of the robot? Oh, I’m blanking.
GARDNER: In The Jetsons?
EL KALIOUBY: The Jetsons, exactly. Yeah. Rosie.
GARDNER: Rolling around, taking pictures. Writing a list saying, hey, you don’t realize this, but you have toilet paper in two different closets. Let’s put it together in one. All of that would be chef’s kiss for me in my parenting life.
EL KALIOUBY: Love it. Thank you, Sam, for joining us today. This was wonderful.
GARDNER: Thank you so much for having me.
Episode Takeaways
- Sam Altman Gardner opens with a brutally honest parenting story, and it sets up the episode’s big question: can AI make effective child mental health support more accessible?
- After therapy helped repair her relationship with her son, Sam teamed up with co-founder Maddie Mantha to build Happy Pillar around proven Parent-Child Interaction Therapy.
- The app teaches parents short, evidence-based skills, then analyzes a five-minute play session to deliver therapist-style feedback, progress tracking, and personalized coaching.
- Sam and Rana dig into the hard parts of building this responsibly, from cultural differences in praise to clinician-labeled data, privacy safeguards, and bias mitigation.
- By the end, Sam makes the case that AI should augment therapists, not replace them, while also sharing a very relatable use case: outsourcing family logistics to ChatGPT.