How to build robots that can live alongside us, with Cynthia Breazeal
Over the past several decades, the field of robotics has achieved remarkable milestones. Robots have been sent to Mars and to the deepest parts of the ocean. Cynthia Breazeal, a professor at the MIT Media Lab, believes there’s another frontier that robots should enter: our daily lives. As a pioneer in social robotics, Breazeal has devoted her career to developing socially and emotionally intelligent machines — robots that can serve as learning partners in classrooms and even assist in navigating mental health challenges. She joins Pioneers of AI to discuss how social robots can coexist with humans, explore the applications of the robots she’s developing, and explain why AI literacy is more important than ever.
About Cynthia
- Pioneered social robotics; Kismet recognized as the world's first social robot
- MIT Media Lab professor; founded and directs the Personal Robots Group
- MIT Dean for Digital Learning; leads MIT RAISE on responsible AI education
- AAAI Fellow; recipient of the George R. Stibitz Computer & Communications Pioneer Award
- Founded Jibo; its robot made TIME's 2017 Best Inventions cover
Table of Contents:
- Why robotics is having another breakthrough moment
- How Kismet pioneered the first social robot
- Why the dance of communication matters more than humanlike form
- Where social robots can make the biggest real world impact
- How robot companions can support learning without replacing teachers
- What Jibo revealed about attachment, trust, and healthy boundaries
- Why embodiment changes how we relate to AI
- What it takes to build a socially intelligent robot
- Why AI literacy should start early and focus on agency
- How to stay human while keeping control of AI
- Episode Takeaways
Transcript:
How to build robots that can live alongside us, with Cynthia Breazeal
CYNTHIA BREAZEAL: It was literally the moment when NASA landed Sojourner on Mars.
And we’re all watching the field of robotics. Huge celebration. Right. Oh my God. Like, achievement. Achievement, you know? And then I remember thinking to myself, okay, now wait a minute. We send robots into oceans. We send robots into volcanoes. Now we even have sent one to Mars. Why are they not in our homes? Right? It’s like I grew up with Jetsons and this vision, Star Wars with this promise of social robots walking among us, interacting with us as part of our everyday lives. And that was just that moment. I’m like, nobody is actually really working on that problem when we think about a fully autonomous robot that anyone could interact with or collaborate with.
RANA EL KALIOUBY: So, Cynthia Breazeal took that challenge on. Cynthia is a professor at the MIT Media Lab and she’s MIT’s Dean for Digital Learning. She’s a pioneer in the world of social robots – the kind of robots that are socially and emotionally intelligent. They can interact with humans in a way that feels .. natural.
And today, I am so excited to share my conversation with her. Not only is she a pioneer in this field of social robots, she’s been a big inspiration to my own work as a computer scientist developing emotionally intelligent AI. Cynthia and I are going to talk about how to build robots that live alongside us, the wide-ranging applications of social robots, and why we need AI literacy now.
I’m Rana el Kaliouby, and this is Pioneers of AI, a podcast taking you behind-the-scenes of the AI revolution.
[THEME MUSIC]
EL KALIOUBY: Hi Cynthia. Welcome to Pioneers of AI.
BREAZEAL: Hey, Rana, so great to be here.
EL KALIOUBY: I am so excited for our conversation today because, as you know, I’m a huge fan, and we had the pleasure of working together when I was a postdoc at MIT and then at Affectiva. So it’s so awesome to have you here.
BREAZEAL: Yeah, the fan club is mutual. So proud of everything you’ve accomplished, Rana. Seriously, it’s been so wonderful to see your career just blossom.
Why robotics is having another breakthrough moment
EL KALIOUBY: Oh, thank you. Before we dive into your career as a robotics pioneer, I just wanna take a moment and acknowledge kind of where robotics is at today. I mean, I feel like robots are having a moment. There’s been like, I don’t know, over a dozen humanoid robotic companies started in the last couple of years. So what do you think of this moment we’re in?
BREAZEAL: Yeah. I feel like robotics is a field where we’re either always having a moment or about to have a moment. It’s one of those fields that turns out to be very, very hard. So—
EL KALIOUBY: Robots are hard.
BREAZEAL: Robotics is hard. As I say, Mother Nature doesn’t care about your algorithm or how hard you’ve worked.
So, yeah, it’s exciting to see humanoids start to get a lot more interest again. Obviously I pioneered this area of social robotics and human-robot interaction, envisioning this day when everyday people would be able to interact with fully autonomous robots in a super natural, engaging, productive, but also uplifting and rewarding way. So it’s been quite a journey. That work started in the 1990s, when I was a graduate student, with Kismet.
How Kismet pioneered the first social robot
EL KALIOUBY: Yeah. Tell us about Kismet, because Kismet actually inspired a lot of my research building emotional intelligence into machines. So tell us a bit about Kismet and its origin story. What inspired you?
BREAZEAL: Yeah. So just historically, Kismet is recognized as kind of the world’s first social robot. So I was kind of inspired by the personal computer when they went from these huge machines in back rooms to like, what did it mean to put a computer on every desk.
So I’m like, what is the universal interface of robots? We had certainly seen people completely anthropomorphize and interact in social, interpersonal ways with our autonomous robots. At the time, and we’re talking about the 1990s, those were very much inspired by models of insects. These were literally bug-like, six-legged robots, very simple compared to what we can build today. But people would of course imbue a psychology to them, treat them as a living thing. So we already saw that. So it was like: the social interface is the universal interface. And so what does it mean now to create a socially and emotionally intelligent machine, a robot, right?
That was kind of how it really got started. And the vision was about being natural for people to interact with, in our human-centered ways. And as I started to dig into that question, I read a ton of psychology, took a lot more classes, and worked on understanding not just the social behavior of people, but also animals, right?
I was ravenously looking at any literature on models and mechanisms, trying to understand what it would mean to create a first robot that could engage people in natural face-to-face interaction. It was clear there’s no way we were gonna do an adult-level interaction. I mean, we are the most socially sophisticated creatures on the planet.
So I thought, can we start where it starts with us, which is literally the mother-infant dyad? When you look at all the developmental psychology work, those interactions are where we have our biological basis for developing, and it’s absolutely crucial that the caregivers, those adults, interact with the baby as if it’s already socially and emotionally intelligent, even though arguably it’s not quite there yet. It has to learn and develop that. We take years to develop our full social and emotional intelligence, right? We’re incredibly sophisticated. So that was where we started. Kismet was in many ways designed to be like a young, infant-like creature, where the whole focus was on what the developmental psychology literature calls the dance of communication. And this is actually really important, because I think despite all the advancements we’ve had in large language models, we are still not in the dance of communication.
With our technologies, it feels very much like playing chess: you take a move, the AI takes a move. But communication with people is a dance. It’s mutually regulated. We’re constantly observing each other, responding to subtle nonverbal cues, gestures, tones of voice.
And so like Kismet was asking that question, like what are the mechanisms? What are the ways that we can basically start to design a machine, an autonomous robot that could engage in the social and emotional, back and forth rhythm and dance of communication?
So we looked at models of emotion. We looked at social models. I looked at motivations. I mean, I basically took a lot of models from, again, animal and human behavior and adapted them to an autonomous robot. So, I mean, the robot didn’t model exactly what a child is, but it was inspired by that. And it was the first robot that actually successfully did that.
Why the dance of communication matters more than humanlike form
EL KALIOUBY: I wanna go back to this dance of communication, and this idea that a lot of our communication happens through nonverbal signals. Right? Like over 90% of it.
BREAZEAL: Majority of it.
EL KALIOUBY: The majority, right. Exactly. But I look at examples like Figure AI and Tesla’s Optimus robot and Physical Intelligence’s robot.
They’re all trying to build these robots that are gonna work and live alongside us. Do you think they are factoring in this kind of social and emotional intelligence? I have to confess, I can’t really imagine Figure AI’s robot or Optimus roaming around my kitchen. But do you think they’re working on this kind of key component of a humanoid?
BREAZEAL: Well, so I would say not to my knowledge. And so when I started the work with Kismet, the field of humanoid robotics was just getting started as well. And so, I mean, looking way back now, we’re talking about the 1990s, right? So Japan was building these robots because one of the main use cases, even back then they were very concerned about was their growing aging society.
So they were very cognizant and concerned about this. At the moment that Kismet and the social aspects of robots came out, in Japan in the 1990s, a lot of the work was still around what I think you’re alluding to, which is the physical capability of the robot: the ability for a robot to walk, open doors, manipulate human tools, do physical tasks, right? But then this work from Kismet came in, and of course they were like, yeah, these robots have to be able to interact with people who know nothing about robots. They need to be able to interact with everyday, ordinary people. That was enough of a community, I would say, that it propelled the field, now the field of social robotics and human-robot interaction, forward.
EL KALIOUBY: I kind of wanna make a distinction, because humanoid robots and social robots overlap a lot, but they’re not the exact same thing. ’Cause social robots don’t have to be humanoid.
BREAZEAL: Exactly. Social robots is more about an intelligence and a capability set that can be applied to physical robots, virtual agents, anything really. To me, the core tenet is really about how you build socially and emotionally intelligent technologies that are able to partner with people in a very natural, human way. And if you can do that, there are many, many applications for it. Humanoids are one kind of technology that could take advantage of that. So, back to your point: I don’t think people building humanoids today are thinking deeply about the partnership question with people. I think they’re still focusing on the physical capabilities. But at the end of the day, the ability to do that with people is gonna be crucial to any real world application.
EL KALIOUBY: In a minute, we get into those real world applications. And how social robots could be the key to democratizing access to early education. That’s after a short break.
[AD BREAK]
Where social robots can make the biggest real world impact
EL KALIOUBY: So what are some of the applications of social robots that you’ve been working on?
BREAZEAL: Yeah, so this was very interesting, right? So, if we look at the progression of the field itself, so in the early days, it was, and we’re still of course trying to advance this question of algorithms for creating more socially and emotionally intelligent machines, but a lot of it was at that level, right?
We were trying to explore algorithms and so forth. Then we started to think about, okay, what are the use cases, right?
So we started looking at areas where social and emotional support were actually really important for human outcomes. Well, it turns out there’s lots of applications like that.
And what we saw in areas like health, particularly chronic disease management, mental health, and education, is that we are not able to train professionals at the rate needed to meet the growing demand. So there was a question of how you build these technologies that can augment and support the human social systems around how we thrive, how we flourish, how we become lifelong learners.
So those became, I would say, a lot of the first anchor use cases of the field of social robotics, at least the ones that I certainly focused on. If you design these robots in a way that engages not just people’s cognitive or physical abilities, but their social and emotional abilities as well, we as human beings engage much more deeply. And not surprisingly, we end up doing better. We are more successful with the technology when we do that.
How robot companions can support learning without replacing teachers
EL KALIOUBY: Can we give the example of Tega? I think that was the robot’s name.
BREAZEAL: So absolutely. So we were looking at early childhood education. So first of all, young children don’t read and they don’t type. So like the normal forms of interacting with computers like that ain’t gonna happen. Right?
And so we thought that was from an AI challenge, a great use case because the robot would have to literally be able to engage in social and emotional playful encounters with children in a very natural way. ‘Cause children are just gonna be kids. Let’s face it. Kids are gonna be kids.
And it’s wonderful and it’s messy and it’s complicated and it’s delightful. And if you can do that, you must be doing something right; you have to have figured something out. But then, from a social standpoint, when we look at kindergarten readiness across different socioeconomic classes in the United States, we see a huge disparity. The more affluent kids are far more ready to enter kindergarten than kids coming from less affluent families. There’s a huge inequity there. So the question is: if robots are scalable, affordable, and human-centered, if you could design them in a way that supported the teacher and engaged the students, could you actually have a much more equitable outcome?
So the Tega robot was designed as one of our earliest platforms to explore that. So Tega was designed, interestingly, not to be a human-like tutor.
Like a lot of what you see today. Tega was designed to be a peer-like learning companion, because we did not want the teacher feeling that Tega was competing with their role or value in the classroom in any way, shape, or form. And we also wanted the robot to serve as this practice partner, right?
This playful practice partner that would engage children in these learning games and activities. Things like dialogic storytelling or educational games as a peer where it’s kind of like if your dog became really, really smart and you could play games like it was like that, right? So children delighted in the robot.
They were not confused. They didn’t think this robot was like their human friends. This was a big concern that we heard quite a lot: what is this gonna do to children’s social development? We studied that too. Even if children can form a kind of social relationship with the robot, they knew it was not like what they have with other people or their best friends.
There was no confusion in their minds about that, but they found value in the relationship they had with Tega, right? It was a different kind of relationship. So this was really fascinating. But there was also this companion-animal dimension, which was fascinating because children could make mistakes in front of Tega. It was okay. There was no sense of being embarrassed or losing face, which they might worry about more with a teacher, feeling they’re being evaluated.
Right? They didn’t feel any of that with the social robot. So that different kind of relationship supported a different kind of engagement, even with their human peers, and we could see that it really did support engagement and learning outcomes.
What Jibo revealed about attachment, trust, and healthy boundaries
EL KALIOUBY: Cynthia is talking about a critical design decision when making social robots. You’re not trying to replace human-to-human interaction – you’re trying to augment it.
Back in 2012, Cynthia founded a startup that made a small tabletop social robot built for the home, called Jibo. They kind of went viral a few years back. We actually had one of these at my home.
So when Jibo came out, we were one of the first lucky families to have a Jibo in our home. And my son at the time, he was probably like six or seven, he’s now 16. And he would, he loved Jibo. I mean, we had Alexa in our home, we had Google, whatever, but Jibo was different because it had a personality and a character.
And I remember when you had to sunset the company and Jibo was gonna go offline, my son was actually in tears. And I have to say, I thought about that a lot, ’cause I liked that they had this special connection, but also as a parent, I’m like, hmm, he’s getting too attached to this robot. Was that something you saw with other families using Jibo at home?
BREAZEAL: I mean, of course we were all heartbroken about the company. My research group has continued to use Jibo as a one of a kind research platform, and we’ve gone on to explore many different, really great applications with it.
So Jibo lives on in that context. But as you know, Rana, you do a startup and you put everything into it, and it was just heartbreaking for people, not only because of the company, but because of Jibo. So what you’re talking about, everybody felt that, really.
And I was shocked, honestly. There continued to be coverage of Jibo a year or more after the company ended, reflecting on exactly what you’re talking about: what Jibo actually meant to people.
We definitely achieved that aspect of the design, and we tried to do it in a super responsible and mindful way, right? We made some very conscious design decisions: if people wanted to ask Jibo questions about topics that a robot frankly has no business answering, like religion, Jibo would basically say, hey, I’m a robot. I think people are awesome. That’s a really important topic. You probably really need to talk to another person about that, because I’m just a robot.
Exactly. So the robot intentionally tried to set up healthy boundaries around what it could engage you on, and if there was human-domain stuff, it made clear it was important for you to talk to people about that.
So we were very, very mindful about things like that. So, fast forward: after the company shut down, we were using Jibo as a research platform and starting to look at the application of social robots for mental wellness and mental health. There was a dawning recognition of student mental health at universities as a big issue, particularly at the very top universities.
So we started to look at Jibo as a social robot positive psychology coach for freshmen in the dorms. That’s where we started: MIT freshmen in the dorms. And we saw some really provocative, encouraging results. Even as the semester went on and got more challenging, the students who had Jibo reported more positive mood and optimism and all of that. And they also commented on the companionship; they talked about it as light companionship. So fast forward, we wanted to do a follow-on study, and we were preparing to ship Jibos all across the country to do this larger study.
And the pandemic hit. So we ended up doing this study at the height of the pandemic, with social distancing, when people were literally not leaving the house. Data-wise, we looked at different conditions.
One was like the more classic kind of coach kind of paradigm. The other one was like the companion paradigm, where Jibo also had an emotional positive goal and would do all these activities with you. And these are like well established positive psychology exercises that you typically would do with a professional.
Now you’re doing it at home with a robot, but these are all very well-established kinds of things, like being able to think about three good things in your life and expressing gratitude. So anyway, the punchline is: the companion condition was stickier, with more lasting results even in the delayed post-test, and better results in general. And people would comment again and again that the social presence of Jibo in the home was so important for them at that time, when they were literally completely socially isolated. And this raises again this bigger question: what is the appropriate, healthy relationship between us and these increasingly conversational AI systems? This was long before ChatGPT, right?
— came out, right? And then people are like, oh my God, I can have all these conversations and blah. And we’re like, yeah, we’ve been studying this literally for decades. We could have told you that.
Why embodiment changes how we relate to AI
EL KALIOUBY: How important is it, because now people, of course, are turning to chatbots for everything, right? How important is it that this thing is embodied?
BREAZEAL: Well, I will tell you the kind of overarching finding that we have, and there’s always an “it depends,” right? The social and emotional engagement is much stronger and more lasting when you actually anchor it on a physical interaction. If you’re doing an activity that has nothing to do with the social and emotional qualities of the interaction, you’re not gonna see much of an effect, ’cause it’s irrelevant. But for encounters and outcomes where the social and emotional actually does matter, the physicality definitely boosts it, because we as human beings are able to engage more richly with it. Provided you design it well.
So, with chatbots, people are having these long conversations. I feel like that genie’s out of the bottle, and many more people are talking to AIs in this much more interpersonal way. So the ethics of it, of what the appropriate, healthy relationship is, is so important for us to design into these systems. Because without AI literacy, without being aware of what these technologies are and how you should be thinking about them, people can over-rely on them. They can potentially be manipulated through the social-emotional channels in ways they wouldn’t even be aware of. There’s plenty that we need to be very mindful of on the ethics and responsible design side.
But I will just say, and this is always the case with new technology, that this is balanced by the huge positive outcomes you can also see if you do it responsibly and are really mindful about what people need to thrive and flourish. If you start there, you’re always gonna end up with solutions that I think we can all feel good about.
What it takes to build a socially intelligent robot
EL KALIOUBY: I wanna take our listeners behind the scenes and just kind of unpack what it takes to build a social robot. Like, what are the different components? ’Cause it is complex.
BREAZEAL: It is so complex.
EL KALIOUBY: So let’s start with the perceptual abilities of the robot. One, of course, is vision, so it has computer vision. You’ve explored a lot of touch capabilities. So walk us through some of the different kinds of boxes that you need to figure out when you’re building a social robot.
BREAZEAL: Yeah, so it always depends on the application and the use case. We have certainly looked at a lot of different perceptual channels. Some are very intuitive to us: obviously, like you said, vision, the ability to perceive social cues, whether that’s literally language or the paralinguistic, like prosody and emotion in the voice. So there’s the auditory channel, and there’s the touch channel, as you’ve said; we have found people petting our robots, hugging our robots. Touch is an important channel of interaction and communication. And then we look at things like wearables, so the robot can pick up your galvanic skin response or heart rate. There are other kinds of channels the robot can perceive through different inputs and technologies, which it can integrate into its learning or decision-making systems.
So that’s just the perception side of it. Right. Then you have of course the actual like decision making elements of the robot and, there could be a lot of different algorithms honestly, that we put together in order to create that experience. Right. So, the thing about robotics, I would say is like we integrate a lot of different technologies and techniques in order to build these systems. It’s very different than saying generative AI is the one algorithm I’m gonna just use. It’s like it is a piece. It is a piece in our systems, right? So it does certain things, but it’s not necessarily the best at all things.
So there are other things we have to think about, right? We do a lot of work around personalization: the ability not only to personalize, but to recognize people as individuals, and the ability to remember past interactions.
This is the foundation of good, positive interactions. Beyond that, you get to the content and information conveyed through language and so on, but you have to build the foundation of that connection first, right? So the decision making could involve machine learning.
It doesn’t have to involve machine learning if it doesn’t make sense, right? And I mean, as you know, there’s the whole, even just like UX design.
EL KALIOUBY: Yeah, the personality.
BREAZEAL: Personality, the look, the feel.
EL KALIOUBY: Does it have eyes? Right? Does it not? The color? Yeah.
BREAZEAL: The color. What’s its physical language? How does it express its internal states to you? There’s a big assumption that the more human, the better. I’m just gonna tell you, Rana, I have not seen that in any of my work. As long as the design does what it’s supposed to do well for the embodiment it has, you can get all of these great outcomes.
I have not yet seen a case where more humanlike actually leads to better outcomes. In fact, if it’s too human-like, as you know, Rana, the bar is very, very high. We as human beings have an acute appreciation of what human behavior is, and if anything is off, we pick up on it; it’s uncanny. We are not there yet for rich, dynamic human social interaction.
EL KALIOUBY: But Cynthia isn’t just pioneering social robots – she’s also a pioneer in AI literacy. More on that after a short break.
[AD BREAK]
Why AI literacy should start early and focus on agency
EL KALIOUBY: For the past several years, Cynthia has been leading AI literacy programs. She’s the director of the MIT-wide Initiative on Responsible AI for Social Empowerment and Education, MIT RAISE for short. So I do wanna switch topics now and talk about AI literacy, especially in K through 12. In addition to all your robotics work, and I don’t know how you do it, you’re also the Dean for Digital Learning at MIT and the director of MIT RAISE, which we’re gonna talk about in a second.
I’m a trustee at my son’s school and I mean, this is like one of the top topics we are taking on right now, which is how do we ensure that both faculty and students are AI literate? And what does it look like in a world where AI is changing like almost every day.
BREAZEAL: Oh yeah, so we established the RAISE initiative in 2021. So it’s not that old, but we definitely did it at a time where we were just appreciating, as I said, it’s like AI was in social media. There were starting to be early technologies around deep fakes.
We’re like, people have got to understand this technology. And a lot of them had no idea how much it was in their products and services at the time. So that’s when we started it. We basically said, this needs to start in childhood, because kids are already interacting with these technologies and it’s shaping them, and we wanted to build a strong foundation in those perspectives and those skills so that, as they grow up and enter the workforce, they have a much more informed sense of, first of all, what these technologies actually are. There’s a lot of hype out there, Rana. Are they sentient or not? It’s crazy out there. So: a really grounded understanding of the opportunities, but also the limits, because there are real limitations of these technologies; and an understanding of the applications in society, both the positives and, as we’ve seen through the media, the many examples of unintended consequences through bias and so forth. So we developed a number of curricula oriented towards teachers that are very accessible. And again, the Media Lab is the birthplace of the Scratch programming language.
So we started to create our own block-based programming languages and brought state-of-the-art AI technologies into them, so kids could learn about AI by making things with AI, putting the child in the designer’s seat in a way that had a low floor of entry. So: very accessible, with a high creative ceiling.
And then wide walls, so being very inclusive. So we started developing these tools and platforms. App Inventor is our mobile platform that allows kids to create working apps with AI; we wanna empower them to create things that other people can use. It’s a great platform. So it’s designed to be a very holistic approach that develops not only their technical understanding and skills, but also their human skills: being able to work in teams, communicate with each other, design with humans at the center, and use responsible design frameworks.
And honestly, Rana, a lot of this is about empowering children, and I think this is so important. When I was growing up and thought about my future career, I didn’t know what I was gonna do. There was uncertainty about it, but I wasn’t scared.
Kids today are growing up in a very different emotional place. They are scared for their future. In a lot of cases it’s almost heartbreaking; they don’t have a lot of hope. And that is a very different context when you think about young people with these technologies. So I think the more we can empower young people, and we talk about this as computational action, which is a core paradigm of our materials, the more we can educate and empower children to make things that make a difference now, in their lives.
That sense of empowerment is so important because it motivates their learning, it gives them hope and makes them feel they can do something. They don’t have to wait for the adults to do it. Right.
EL KALIOUBY: And have agency.
BREAZEAL: You give them agency and you empower them with the tools and the frameworks to think responsibly and ethically about it. I mean, the kids blow us away every year with what they create. They’re doing things to help promote a more thriving planet, a more just society, a flourishing people. I mean, it’s truly inspiring what they want to solve with AI.
And it’s like the sooner we can get them in that mindset, it’s gonna propel them through the rest of their educational journey and through the workforce.
How to stay human while keeping control of AI
EL KALIOUBY: What I’m hearing you say, which I think is really important too, is it’s not enough to be a consumer of these tools. You have to approach it with this sense of empowerment and critical thinking and creative energy, and yeah, that is gonna be so key.
Alright, so as we start to wrap up here, I always ask this question to every guest we have on the show. It’s not gonna surprise you. With AI becoming more conversational and more empathetic and smarter and even more creative, what does it mean to be human in the age of—
BREAZEAL: Yeah. So this is where AI literacy is super important, because I like to say these AI systems sort of mimic human intelligence, and that distinction is really important. With human beings, if we talk to someone and they can speak about certain topics, or they can do certain kinds of tasks or math or whatever, we kind of assume there must be a lot of other things that they actually know and can do.
Right. That’s not necessarily the case with AI. So there’s a lot of overreliance or misinterpretation, right, that comes along with that, because they are not human at all in how they generate their outputs. Right? I mean, these models, they generate, but they don’t deeply understand what they’re generating, right?
We as human beings will look at the output and interpret. We ascribe the meaning. It doesn’t mean the AI understands what it’s generated. People are working on it. We’re trying to advance those methods. But I will say I am not gonna ascribe true reasoning or planning to these systems at all yet. Right?
I mean, it is a certain class of algorithms in the field of AI. So these AIs appear to be able to do a lot of things, and they can be super useful to us as tools that we are now using, taking those inputs and, as you know, using them for our projects and our outputs. Like, yes, great, awesome, empowering. But it’s super important that we don’t over-ascribe what these systems are actually doing to the point that we relinquish a sense of autonomy, a sense of agency, a sense of authority. Right. This is really important. We gotta be calling the shots as human beings. We gotta be calling the shots. Right?
EL KALIOUBY: Plus one on that one. Okay, I think this is an awesome way to end our conversation. Let’s have people call the shots, but hopefully kind, caring, compassionate, ethical people do that. Thank you for joining us on this conversation.
BREAZEAL: Thank you, Rana. This was really great.
EL KALIOUBY: One of my main predictions for 2025 was a rise in embodied AI. We’re seeing this play out right now. And specifically there’s a whole slew of companies building humanoid robots. What Cynthia made me realize is that her work needs to be imbued into these robots. These robots are going to be interacting with us – humans – on a daily basis – which means they need to have social and emotional intelligence.
So if you’re in the robotics space – and you haven’t already – you need to check out Cynthia’s work!
There’s also this idea of AI literacy. Young people today are growing up AI native. Cynthia wants them to lean into AI as an empowering tool: to build solutions for causes they’re passionate about, to express themselves creatively, and to use AI in a responsible and ethical way.
Episode Takeaways
- MIT Media Lab professor Cynthia Breazeal traces her career back to the moment NASA landed Sojourner on Mars and she wondered why socially adept robots still weren’t in our homes.
- Breazeal says robotics has long fixated on physical capability, while her work on Kismet asked a different question: how to build robots that can join the human dance of communication.
- That social intelligence opens practical doors in health, mental wellness, and education, where robots like Tega can support teachers and help children learn without replacing human connection.
- Breazeal says Jibo showed how powerful embodied AI can be, creating real companionship and even boosting student well-being, while also underscoring the need for healthy boundaries by design.
- She also makes the case for urgent AI literacy, arguing that young people need not just to use AI, but to question it, build with it, and keep human agency firmly in charge.