How Google DeepMind is building AI that can help humanity
Google DeepMind is the powerhouse artificial intelligence lab behind Google’s AI assistant, Gemini. It’s also behind AlphaFold, the groundbreaking AI tool that can predict a protein’s 3D structure. DeepMind’s COO, Lila Ibrahim, formerly COO and President of Coursera, has built her career on exploring how technology can benefit humanity. She now leads Google DeepMind with a mindset of responsibility. In this episode, we explore Ibrahim’s impressive career, how DeepMind manages risk, and the ways AI is revolutionizing education and science.
About Lila
- COO of Google DeepMind, leading ops, partnerships, ethics, legal & gov relations
- Named to TIME's inaugural 100 Most Influential People in AI (2023)
- 30+ years as an engineer and tech executive across global and emerging markets
- Helped design Intel's Pentium microprocessor; led tech expansion incl. DVD and USB
- Former Coursera COO and Kleiner Perkins partner; founded edtech nonprofit Team4Tech
Table of Contents:
- How identity and outsider experiences shaped a career in tech
- Why building experience mattered more than investing alone
- What it means to run operations and responsibility at an AI frontier lab
- How responsible AI can accelerate breakthroughs in science
- Where AI is already improving weather prediction and climate resilience
- Why startups still have room to win in the AI stack
- How AI could make learning more personal and more accessible
- Teaching kids to use AI as a tool and not a shortcut
- Building educational AI that is accurate, inclusive, and grounded in pedagogy
- How DeepMind approaches risk governance and everyday AI use
- Episode Takeaways
Transcript:
LILA IBRAHIM: When I was growing up, I saw my dad and he would come home from work. He was an electrical engineer, and he would set out these beautiful pieces of paper with colored pencils and make gorgeous designs that would then turn into these microchips that would then go to power things like heart pacemakers.
And I grew up with this almost like curiosity of how could this young boy from Lebanon who was orphaned at the age of five, end up designing technology that looked like art that would save millions of people’s lives. And that’s really how I ended up in engineering was this combination of art, math, and science that could benefit people.
RANA EL KALIOUBY: That’s Lila Ibrahim. She’s now decades into an impressive career and currently leads a powerhouse AI lab. As Chief Operating Officer at Google DeepMind, Lila remains focused on creating technology that benefits people, developing innovative projects at scale.
You’ll recognize some of DeepMind’s household products – like Google’s AI assistant Gemini. But a lot of its work isn’t consumer-facing. For example, they are the Nobel prize-winning team behind AlphaFold, an AI tool that’s revolutionizing scientific research.
And today Lila is peeling back the curtain on how the lab is working on the frontiers of AI in education and science. Plus you might get one of the best everyday AI hacks I’ve heard in a while.
I’m Rana el Kaliouby and this is Pioneers of AI – a podcast taking you behind the scenes of the AI revolution.
[THEME MUSIC]
EL KALIOUBY: Hi thank you so much for joining us on Pioneers of AI.
IBRAHIM: Thank you. Excited to be here today.
How identity and outsider experiences shaped a career in tech
EL KALIOUBY: Yeah. So before we dig into Google DeepMind, I wanna go to your story and your background. We’re both Arab American and we have that in common. We’re also kind of one of the few women still in a very male dominated field. So I would love to hear your story — like where did you grow up? How did you end up in tech?
IBRAHIM: It’s been quite a journey. So I actually am the first from both sides of my family to be born in the US. My father is Lebanese, my mother is Palestinian. They had both come to the US for their education, ended up staying, and eventually met. I grew up in Lafayette, Indiana, home of Purdue University.
So it was quite an extraordinary place to grow up, to be around a university. It was fantastic for my educational upbringing, but in elementary, middle, and high school, I was like the foreigner in my class, so I kind of stuck out. I ended up going to Purdue University to study electrical engineering. And my first internship was with a company no one had heard of called Intel. It was quite ironic that I had this anti-computer mentality and then went on to help design the Pentium microprocessor, which was the brains of the computer, which kind of just goes to show you how much life can change — and now of course in the field of AI.
EL KALIOUBY: Yeah. So before you joined Google DeepMind, you were at Kleiner Perkins and Kleiner was one of Affectiva’s early investors. And Mary Meeker was at Kleiner and she was on the board of Affectiva for a couple of years and she was one of the very first women in VC and she paved the path for other women. I just remember her in these board meetings — she and I were the only women around the table and she held her own in a very strong and powerful way. How did that experience being in VC and also kind of overlapping with Mary Meeker shape what you do today?
IBRAHIM: Yeah. Well I think, growing up as kind of an outsider really taught me to get more comfortable with myself from a very young age. And also, I think through the first 18 years at Intel, I really gained confidence in my ability to see things from a slightly different perspective.
And that included things like when I moved to Japan in the nineties to work on this technology no one had heard of — DVD and USB. But I was willing to take that risk because I saw a lot of opportunity where maybe traditionally, other people didn’t. And my move to Kleiner Perkins and to venture capital came immediately after Intel. I was recruited, and I realized I had been this intrapreneur my entire career of like new markets, new technologies. And what was it like to go work with entrepreneurs like you who have these big, crazy ideas and make things happen. I was very fortunate. Again, there weren’t that many women, but to have the opportunity to work with the queen of the internet, Mary Meeker, and hear the types of questions and how she thought about things, especially with all the experience she brought into that role. But it wasn’t just her, it was the entrepreneurs.
At the end of the day, these were and are people who have these crazy ideas and these ambitions and these visions of what might actually be possible in the world. And what I appreciated about that time in venture capital was seeing such a broad range of entrepreneurs. But what I really missed was building, and I’m an operator at heart. So I actually went into one of our portfolio companies, Coursera, where I had a chance to take the experience of learning about the questions my fellow partners were asking, how the entrepreneurs are thinking about building their companies, and apply some of that into an early stage startup. It was about 40 people when I started.
Why building experience mattered more than investing alone
EL KALIOUBY: That company Coursera is now a leader in online learning, with more than 170 million users, and a market cap of 1.38 billion dollars. I wanted to know how and why Lila made the jump from operating Coursera to Google DeepMind.
IBRAHIM: Well, I say, I’m laughing because I was supposed to take a year off. I’d been at Coursera for a while and I thought, I’m approaching my fifties and I really wanna be thoughtful about this next chapter of my career. I felt like I had this extraordinary luck of being in the right roles at the right time and making cool things happen and having a big impact in the world.
And so I was going to take a year off and really focus on my nonprofit. But John Doerr from my Kleiner Perkins days said, “Lila, I just want you to meet this one entrepreneur.” He sat on the board of Alphabet, and I eventually caved in and thought, I’ll just do this one meeting as a favor to John.
Little did I realize that I’d still be here seven and a half years later. But I really wasn’t sure, because I didn’t have a background in machine learning or artificial intelligence, and I was also based in Silicon Valley while the role was in London. So I ended up spending 50 hours — five, zero hours — interviewing for this role before I decided it was the right role and I was the right person for it.
What it means to run operations and responsibility at an AI frontier lab
EL KALIOUBY: Wow. That’s wild. People often think about long interview processes as a negative, but if you turn it on its head, it’s a way to ensure that this is the right next step and adventure for you. That’s pretty cool. So what do you do as a COO?
IBRAHIM: Yeah, so my role has evolved quite a lot over the past seven and a half years, but I oversee all of the central operations of how we organize to deliver our research and our products into the market. But I also oversee all of our responsibility and frontier safety work and our external engagement work now, whether that’s collaborating with policymakers around the future direction of AI, to our work around impact acceleration for social good.
So it’s quite a broad remit, but what I like about a COO role is really partnering with the organization to help achieve the mission. Our mission is to build AI responsibly to benefit humanity and I get to help shape how we do this. It’s really been an exciting journey so far.
EL KALIOUBY: Yeah, absolutely. So we are living through this crazy technological shift, right? And some people are charging full steam ahead with AI. Some others are a little bit more skeptical. What’s your framework for thinking about how we should be building AI and deploying it at scale in the world?
IBRAHIM: So much has changed since we started on this journey. DeepMind was founded in 2010 here in London, acquired by Google in 2014, and I think there’s something here where we are constantly thinking about how do we be responsible stewards of the technology and how we develop, how we govern, how we roll it out. So we need to be thoughtful and responsible at the same time, making sure that while we’re managing risk, we’re also investing in the opportunity.
Because at the end of the day, that’s why many of us are here. It’s not the technology for technology’s sake. It’s actually really wanting to make a positive impact on how people work, how they live, how they learn, on our understanding of the universe around us, helping to address some of society’s biggest challenges.
How responsible AI can accelerate breakthroughs in science
EL KALIOUBY: Yeah. So let’s talk about AlphaFold. We have talked about AlphaFold before on the show, but for people who are not familiar, can you recap what that is about and how are you making it available to other researchers to accelerate scientific discoveries?
IBRAHIM: Yeah. So AlphaFold is our advanced AI system that helps predict the 3D structure of a protein. And so if you think of things like Parkinson’s, Alzheimer’s, malaria — these are all protein-based diseases and this is why proteins are important. If you can understand how a protein folds, you can understand when it misfolds what might be wrong.
And that helps us understand diseases. It also helps us deal with things like how to deal with industrial waste, why are some crops more resilient to disease than others. There’s all sorts of interesting things that we can really do in this space. So we developed an AI model specifically to predict the 3D structure of a protein.
EL KALIOUBY: And that’s a very challenging problem to solve — like before AI could figure it out, we did not have a way to actually do that, right?
IBRAHIM: Yeah. Right, and at scale is key because it used to take a PhD student about four to five years with the right equipment and the right experience to just do one protein prediction. There are 250 million known proteins. So think about it as like a billion years of research, consolidated all within the past five years or so. And we did something that I feel very proud of, which was as we were thinking about how to actually release this, we sought outside experts to help complement and make sure that we weren’t getting stuck in our own insider bias — was it safe to release? How should we release it?
And the result of that led to a partnership with the European Bioinformatics Institute to publish everything in a database available freely to scientists worldwide — all 200 million plus proteins. So one of the things we said was, if we’re going to give this to the world, let’s make sure there’s equitable access. So we’ve done some really interesting things. One is a partnership with a neglected-disease institute to make sure they had access — improving their onboarding experience and active use of the database.
EL KALIOUBY: What’s an example of a neglected disease?
IBRAHIM: Leishmaniasis is an example of a protein-based disease that has actually impacted more people than COVID. It’s just over a longer period of time, and because of that, it hasn’t gotten the type of pharmaceutical funding that it might otherwise have gotten. So that would be one example.
We also took a look at some of the usage from the database and realized that the continent of Africa had low usage. So we worked with the community across many countries to ask, how do we do a train-the-trainer to help with onboarding researchers, so people who might not otherwise have had access to lab equipment can now advance work in the fields they’re dealing with. So I think what’s really exciting about AlphaFold as an example is a couple of things. One is that we went from model to impact in a very short period of time, and we’re still quite early in it. Last year, two of my colleagues received the Nobel Prize for our work in this space.
And it’s significant because it was the very first time that the application of AI was awarded the Nobel Prize. So I think it may be the first, but it won’t be the last. And the other thing that I think is quite significant about this is really asking, where does the human ingenuity come in? Because we may have had the model, but this is not about any one company, not any one country, not any one field. So the fact that it’s being used so broadly globally — over 2.9 million researchers are using this worldwide — I think it’s quite extraordinary to think of what this might do to open up our understanding of so many fields around us.
Where AI is already improving weather prediction and climate resilience
EL KALIOUBY: It’s so powerful and it changes the way we do this kind of research. Now you also do a lot of work around weather prediction and kind of mitigating the effects of climate change. Can you tell us some more about this work?
IBRAHIM: One of the areas I’m really excited about is some of the work around weather prediction because to me it’s like completely chaotic and unpredictable.
EL KALIOUBY: But you live in the UK, in London. So that’s part of it.
IBRAHIM: I’ve got my brolly over here and my wellies there. Yeah, it’s completely unpredictable. So one of the things that’s been really exciting is some of our work around WeatherNext, our state-of-the-art 15-day forecasting model. Again, we bring the AI expertise and we work with the scientists to apply it into their field.
So we’ve worked with meteorologists worldwide to come up with a 15-day forecasting model. We’ve also been working with the US National Hurricane Center on hurricane prediction — how can we predict 50 different potential paths that a hurricane might take. And you can imagine what this means for emergency preparedness.
And we’re still in the early stages, imagining what might be possible in a few years and how can we avoid some of the crises that we’ve seen in our lifetime.
Why startups still have room to win in the AI stack
EL KALIOUBY: So I wanna put both of our investor hats on. I liked what you said — Google DeepMind is developing the underlying AI that is going to unlock all these vertical applications of AI. With an investor hat on, where do you think the opportunity is for a startup versus what DeepMind’s doing?
Like what makes a startup competitive, sustainable, with a moat, given that you guys are building all these underlying key enabling technologies?
IBRAHIM: I think a lot of the models now are available to developers to actually build on. Like, what are the problems that need to be solved? Even when we did AlphaFold, we didn’t expect it to be used in agriculture in the way that it’s being used. And it actually took me back to the late nineties and early two thousands, when we were working on the computer and internet build-out, and there were all these questions of, oh, computers are going to displace farmers or teachers. When in fact what it did is change how people worked and open up opportunities that they hadn’t even imagined. I think we’re at the early days of AI, and we’re going to see the creativity and the vision that entrepreneurs have — solving problems we sometimes didn’t even realize were problems, because we’re just so used to how things work.
EL KALIOUBY: There really isn’t much controversy when it comes to AI-enabled scientific discovery. Wouldn’t it be awesome if we had more accurate weather prediction tools?!
But DeepMind is also working on AI applications that can revolutionize the way we learn. And when it comes to education there are a lot of opinions about where AI fits in. We get to that in a minute after a short break. Stay with us.
[AD BREAK]
How AI could make learning more personal and more accessible
EL KALIOUBY: So let’s talk about education, because this is something we’re both very passionate about. You mentioned you were COO at Coursera for a while, and then you’re spending more and more of your time at Google DeepMind now thinking about how to apply AI in a way that democratizes access to education. Why do you think AI could play a key role in education and how we learn?
IBRAHIM: Well, I think education and learning is in my roots, from our heritage. Even back in 2000, I went to set up a computer lab at the orphanage my father was raised in, in Lebanon, because I knew how important education was. There were a thousand students, and they didn’t have the same type of access to books and teachers. And subsequently continuing to build out the computer lab — you can imagine students now having access to the same skills that some of the most advanced schools in the world had.
And it actually spurred me to start a nonprofit called Team4Tech, where we work with nonprofits globally that are focused on providing ed tech solutions to help with learning outcomes. So education, and giving to the next generation so they have opportunities, is something that was part of how I grew up.
It is something where I wanna make my mark in the world and leave it for my kids’ generation and the ones that follow. I feel really fortunate that when I came into this role, I wasn’t sure where that intersection would be seven years ago. But where the models are at right now, they’re getting factual enough, they’re grounded enough. The ability to interact with Gemini was multimodal from the start. And what that means is you can now meet a learner where they’re at. Maybe they don’t wanna type in a long prompt. Maybe they talk, right? Maybe they wanna take a picture instead. So that’s kind of like, we’ve been talking about personalized learning for so long and I feel like we actually are at the cusp of being able to unlock some of this.
EL KALIOUBY: I wanna go back to your story with your parents and the role that education and learning has played in technology. That really resonated, because both my parents are technologists. They met at a COBOL programming class — my dad taught COBOL and my mom attended the class in Cairo in the seventies. And I feel like technology was my window to economic mobility. The reason I’m in Boston now is because my parents really invested in our education and we were at the forefront of technology growing up. It’s also kind of my lens on investing — I really wanna make sure that AI is applied in a way that gives social and economic opportunity to people. Do you have any additional thoughts on that?
IBRAHIM: Yeah, I’ve been struggling to find the words because it’s actually quite emotional for me. When I took this role, I really felt like it was — it may sound cheesy, but like a moral calling, right?
I had no idea the AI industry would shift so much in the past seven years, and so I sit here feeling incredibly fortunate and very humbled by the fact that I am in this role, in this moment in time, in AI’s history, and all of a sudden my very circuitous, weird background makes so much sense. If I can help to make sure that we’re continuously thinking about how are we building AI with community so it happens with them and not to them — we as builders owe it to society, owe it to our past and to our future to be investing in this way.
Teaching kids to use AI as a tool and not a shortcut
EL KALIOUBY: Let’s get a little bit more tactical. I serve on my son’s school board and—
IBRAHIM: Oh, how is that going?
EL KALIOUBY: Yeah. There are actually — so I started a couple of years ago right after the ChatGPT moment, right. And I think everybody realized like, oh my God, we need to have a roadmap on how we’re applying AI at school.
And it’s kind of interesting — a lot of educators’ approach to AI is that it’s cheating, right? And separately, MIT just published a study essentially raising flags that over-reliance on AI could decrease your cognitive and critical thinking abilities. What’s your framework for thinking about AI and education? Because I actually think we absolutely have to have our kids using these tools. My son is 16. I know you have twin daughters who are 15. I think it’s amazing that my son is AI forward and I want him to be using these technologies. How do you use it in the right way that enhances learning as opposed to taking away from your learning experiences?
IBRAHIM: And I think that’s exactly why we need to be encouraging the responsible use of the technology and developing the healthy habits from a younger age. The technology’s not going away, so how do you onboard in a way that gets clear about what is appropriate use?
When are you using it for idea generation versus idea replacement? I think back to when I was in school, people were questioning what a calculator was going to do to our math skills.
Or you think about photo editing — was that cheating on your photo? You’re taking pictures. So as a society we’re not having the conversations that we need to be having in order to shape the technology in a way that it can be meaningful.
EL KALIOUBY: Yeah. I wanna kind of dive deeper into that, because I think that’s a question a lot of parents and educators are grappling with. I’ll just draw from my personal experience. Adam, my 16-year-old, was doing some research project over the summer and he’s been going to AI to ask about summarizations of articles.
And I actually had a conversation with him. I’m like, this is great. It’s actually helpful to get you starting to think about how to approach this research, but at the end of the day, you’re gonna have to just read that paper. So we’re kind of having these conversations on what’s the right use of AI and where do you use it in a way that helps you get to the answer faster, but you still have to do some of the basic work.
IBRAHIM: Yes. And the reality is you end up with some students who are not fast readers or are dyslexic, who may have challenges reading, who might otherwise be left behind or start getting labeled as like, why aren’t you keeping up with your classwork?
And you’re all of a sudden falling further and further behind. And I think this is again where this can make such a big difference. Where can the technology actually meet the learner where they’re at and help them on the learning journey, so that maybe in order for you to be good at algebra, you need to have your fractions down.
But maybe if you’re not getting one part of fractions right, it’s completely ruined your math trajectory. That’s where I am — I’m grounded in my hope: how can we use AI to really fulfill human potential and do it in a way where it’s not judgmental?
So I’ve been thinking a lot about tutors, right? A lot of AI companies are talking about, imagine a personalized tutor for everyone.
So I think there are some really interesting things that can happen with AI and it doesn’t replace the teacher. I really believe the human-to-human connection is so important. But in a classroom, it frees up the teacher to actually do what the teacher has the magical ability to do — connecting with the students and helping them on their learning journeys.
It just happens that in a classroom of 38 kids, everyone’s gonna be on their own learning journey.
Building educational AI that is accurate, inclusive, and grounded in pedagogy
EL KALIOUBY: Yeah. You talked about LearnLM and these kind of tutoring modules. How do you ensure that these LLMs are not hallucinating, that they’re accurate? If kids are gonna be relying on them to get a lot of their information.
IBRAHIM: This has been the big question with the large models, right — why it’s so important that we think about how to responsibly develop them from the start and not just in our final testing before release. But I think this really starts from the very beginning when you are training your models and thinking about what is it that you’re trying to achieve and making sure that you’re thinking holistically about it. So LearnLM as an example — we talked with educators, learning experts, and pedagogical experts to really understand what is going to be important, so that they could have a voice in how LearnLM got developed.
I always say responsibility should never be a bolt-on. And I think this is where three decades in tech of that internet build out taught me — you have to be thinking about this from the very beginning because the way that AI goes to market is you release a model and it can be in the hands of millions of people.
And it’s across many different countries at the same time, which is why global collaboration I think is so critical. You don’t have the same complexities we used to have in earlier technologies. This is available worldwide.
EL KALIOUBY: Yeah. I wanna talk about, and I think you’ve touched on this already, but I do wanna double click on it. How do we ensure that AI can also benefit neurodiverse human beings? I did a lot of work when I was at Cambridge University and then at MIT helping bring computer vision and AI and machine learning and emotional intelligence to individuals on the autism spectrum. And I could see how these technologies could really be an augmentation to say, the challenges they have with nonverbal communication — that’s a great use of technology and it could really help enhance how they interact with other people. How do you think about applying AI to neurodiverse populations?
IBRAHIM: I think it’s so important that we think of AI as a general tool that people of all interaction styles and learning styles can use. So the multimodal nature is actually really critical, whether it’s different ways that people learn, or even physical limitations. So we did something called Lookout for people who are visually impaired — being able to use a phone’s camera and voice to get context and translation, seeing AI applied there. Recently, we also tested technology with American Sign Language to do real-time translations.
Then there’s even what I mentioned earlier on dyslexia — being able to convert long text into audio or visual has been transformational. I think about this a lot. My sister has cerebral palsy and when she was growing up in the seventies and eighties, my parents really had to fight for her to get mainstreamed education.
And that made all the difference in her career trajectory. But I think that’s old school technology — where might she, what might she have been able to do with AI? So I think there are so many different challenges. And we need, as we’re developing this technology, to think holistically about how we make this for everyone and not just an elite view.
EL KALIOUBY: We’re going to take a short break. When we come back, DeepMind’s approach to mitigating misinformation, and one of the best AI life hacks that I’ve heard to date. Stay with us.
[AD BREAK]
How DeepMind approaches risk governance and everyday AI use
EL KALIOUBY: So I wanna talk about responsibility next, and how do you build this responsibly? One piece of it is ensuring that this is inclusive and accessible to everybody, no matter where you’re from, no matter the way you learn or experience the world. How do you think about mitigating risks in AI — you personally, but also the DeepMind view of mitigating risk?
IBRAHIM: Yeah. And we think about it on a continuum of like near-term risks around bias and misinformation.
We think about it in terms of misuse — if this gets into the hands of people who don’t want to use it for good. And then long-term risks of like who’s in control, whose values is the technology aligned to. So on that continuum, we have an incredible amount of research happening in each of those areas.
The social science side of it, the technical side of it, because we feel like it’s really important to advance those fields while we also do the technical research. An example of this would be when we first demonstrated Project Astra, our research platform, which was like an assistant-type technology.
We demonstrated the technology at the same time we published a report on the ethics of advanced AI assistants, because we felt it was important to be able to talk about opportunity, risk, and nomenclature all together. So just realizing that — that’s a lot of work we don’t talk about publicly, but we’ve spent a lot of effort on it.
EL KALIOUBY: I think that’s very important, by the way, because I think there’s this perception that some of the bigger tech companies are building these AIs, deploying them and not really thinking about the social implications — like, what does it mean to the moral fabric of society when we are over-reliant on an AI friend or a companion? But it sounds like you are actually doing the work to explore and experiment with what these implications could potentially look like.
IBRAHIM: Exactly. And we do that because we know that we have a team of experts, but there are many experts around the world who look at it from a very different perspective.
So a lot of this work also happens in collaboration with think tanks, university researchers, et cetera. That was one piece — thinking about the risk continuum. The other one is thinking about how we are building the technology. So what does it mean to be responsible and safe from a research perspective? There’s work in this space on safety filters, or on interpretability — understanding what the AI model is doing. Then there’s the governance part of it. We have an interdisciplinary group within Google DeepMind where all of our models go, and our research areas too, even as they’re getting identified. And it’s a chance for us to have the conversation of, what could go right, and how do we make sure that happens?
What might go wrong? How do we mitigate it when we may not get it perfect? But having those conversations hand in hand with the researchers is really critical for us. And then thinking about how to deploy it. And I think one thing that’s important for people to realize is this isn’t perfect technology. We are still very early in the stages. We want to be responsible. We realize there may be issues that happen and then it’s a matter of how quickly can we adjust and respond and learn from that.
So I’m really into this Japanese concept of kaizen where you’re continuously learning and continuously improving. And I think that’s going to be increasingly critical.
EL KALIOUBY: Yeah. How do you incorporate AI into your everyday life? Do you have any hacks for us?
IBRAHIM: Oh, so many. As a leader and as a mom, I am very time poor — that feels like a constant in my life. I have a team meeting coming up, and earlier today, just about an hour before we met, I went into Gemini and gave it a prompt: looking at my docs and my email, what are the top five things that I’ve been focused on for the past two months? Give me the categories, because I wanna share it with my staff to make it more actionable and give them some clarity. And I got back this amazing — I think my chief of staff was a little bit worried, but I got these five wonderful themes that I can now clearly articulate. It was super helpful for me also to check, am I spending my time the way I think I’m spending my time? So that’s just an example from today. My favorite at-home tip right now is — again, time poor. Something in the house breaks and you’re like, where’s the user manual? How do I fix this? What is that error code? What is that symbol on the washing machine? So I actually use NotebookLM: I’ve inputted our user manuals and added links out. So we now have our own home assistant we can make inquiries to. All set.
EL KALIOUBY: Oh my God, this is like the best hack ever. I’m gonna totally do that.
IBRAHIM: And this is what’s great. You can use the same technology for both, so it works in your home life and your work life. And I think this is really the power of AI.
EL KALIOUBY: Amazing. I love that. That’s gonna be my son’s summer project. I’m gonna put him on that. Okay, final question. It’s something I ask of all our guests, and it’s a question I’ve been thinking a lot about. What does it mean to be human in the age of AI, when AI can be so smart and creative and empathetic and all these things?
IBRAHIM: Yeah. I think one of the coolest things I’ve noticed in my social circles recently is that people are talking about what it means to be human. If you’d told me five years ago that I’d be sitting around with folks talking about what it means to be human, I would’ve been surprised.
So I think in many ways AI has already made us more human just by making us very deliberate and intentional about who we are and what we do that’s unique. And one thing I’ve been trying to do within my team is shine a spotlight on how people are using AI and also where they’re not, because where they’re not tells you exactly where the human brings the magic into the work. And I think that is what makes us human — the spark of creativity is ours. Right? The human ingenuity. How is it showing up? In everything — the human connection, finding things to talk with people about, getting curious to learn about others. I think what’s really exciting is that maybe in the past we didn’t appreciate that as much, but I think AI is actually helping us appreciate it more. And is it helping us as a result to be more human?
EL KALIOUBY: I love that. Well, thank you so much for joining us, Lila. That was an amazing conversation. So inspiring.
IBRAHIM: Thank you.
EL KALIOUBY: So much of my personal story is central to my work as an AI scientist and now investor. It’s so powerful to hear from people like Lila, who are purpose- and values-driven. She’s working at one of the biggest tech companies out there, ensuring that AI is built responsibly and with inclusion in mind.
There are a lot of takeaways from my conversation with Lila, but a big one for me is around education. The reality is, we all learn differently, and learning currently isn’t equitable. Whether because of a lack of funding or a lack of support for neurodivergent learners, there are huge gaps globally when it comes to education. Like Lila, I see a bright future ahead: one where students have personalized AI tutors AND access to high-quality teachers. All of this is possible, IF we ensure that AI is not a crutch for learning, but rather a tool to enhance it.
Episode Takeaways
- Google DeepMind COO Lila Ibrahim traces her path from an Arab American childhood in Indiana and Intel days to a career built around technology that can genuinely improve lives.
- Lila says her years at Intel, Kleiner Perkins, and Coursera taught her to spot big opportunities, and ultimately prepared her to help scale DeepMind’s research and responsibility efforts.
- On AI deployment, she makes the case for balancing risk with ambition, pointing to AlphaFold and weather forecasting as examples of AI driving real breakthroughs in science and climate resilience.
- Education is where Lila’s sense of mission comes through most clearly: she sees multimodal AI and tutoring tools as a way to personalize learning without replacing teachers or student effort.
- She also lays out DeepMind’s view of responsible AI, from bias and misuse to long-term control, before ending with a wonderfully practical hack: using NotebookLM as a home manual assistant.