Integrating AI into human society means balancing so many uncertainties. The questions we ask as we develop the AI tools of today will shape the reality of tomorrow. As Chief Scientific Officer at Microsoft, Eric Horvitz takes this responsibility to heart. He joins Pioneers of AI as a longtime colleague of host Rana el Kaliouby in the AI space, to talk about the principles that guide him and his team as they steer breathtaking innovation. He explores AI’s circles of influence, and how humans can flourish alongside it.
About Eric
- Chief Scientific Officer at Microsoft; leads AI strategy, science, and societal impact
- Won Feigenbaum Prize and Allen Newell Award for AI breakthroughs under uncertainty
- Member, U.S. National Academy of Engineering & American Academy of Arts and Sciences
- Advises U.S. President via PCAST; serves on NIH AI working group
- MD and PhD from Stanford; Fellow of ACM, AAAI, and ACMI
Table of Contents:
- Why this moment marks a true inflection point for AI
- How Microsoft built AI ethics into product decisions
- What human flourishing means in an age of intelligent machines
- Why authorship and authenticity get harder with generative AI
- Where AI could deliver its biggest breakthroughs in biology
- How AI interfaces may evolve beyond chat into humanlike interaction
- The biggest risks from disinformation to biosecurity
- What skills young people should invest in as AI advances
- Episode Takeaways
Transcript:
Flourishing in the age of AI, with Eric Horvitz
RANA EL KALIOUBY: AI is revolutionizing the way we work, the way we learn, even the way we communicate with each other.
And in light of all of these changes, Eric Horvitz has been asking himself several very big questions.
ERIC HORVITZ: How will it feel to be human in a world where machines are really smart, like people? We have to ask the question, what’s the nature of human dignity, human agency, human authenticity?
EL KALIOUBY: Eric is the Chief Scientific Officer at Microsoft. In his role he’s not only thinking about where to apply AI today, but also how AI will impact humanity at large. If you know me at all, you know these are questions I think deeply about too.
HORVITZ: There’s a joy of living, of experiencing, of being with people, creating as human beings. We have these incredible chess machines now. But people still like playing people in chess, and we still celebrate people for being excellent chess players without computing help, necessarily.
We all are getting access to tools now that say, hey, can I help you write this email better? Well, I found myself in some ways longing for the days when you saw words that really came from someone’s brain, as they edited and thought through how to write something. So once in a while, when I read a long message that I’m kind of proud is well crafted, I put at the bottom of it: forward slash, handwritten.
EL KALIOUBY: Perhaps it’s surprising that one of our great AI scientists yearns for human-crafted emails. But to me it makes a lot of sense.
AI puts into question so much of what we claim to be “human.” And many of us in the field are not only thinking about how to co-habitate with this technology, but also how to flourish alongside it.
I’m a longtime admirer of Eric’s work and I’m so pleased to share my conversation with him about flourishing in the age of AI, Microsoft’s approach to AI ethics, and where AI will make the biggest impact in our lives.
I’m Rana el Kaliouby and this is Pioneers of AI, a podcast taking you behind the scenes of the AI revolution.
[THEME MUSIC]
Welcome to the show, Eric.
HORVITZ: It’s great to be here, Rana.
Why this moment marks a true inflection point for AI
EL KALIOUBY: So I was doing the math, and you and I first met almost two decades ago, back when I was a postdoc at MIT. And then we got the chance to work together through the Partnership on AI Consortium, which is this multi-stakeholder organization that examines the societal implications of AI, and which you were a founding chair of.
I think it was 2016, is that right?
Yeah, that’s amazing. But you’ve been in the space of machines and cognition since the 80s. What is so special about this moment in time when it comes to AI?
HORVITZ: Oh, I think it’s the inflection. It’s the power and capabilities we’re seeing now. I and many of my colleagues are surprised by the power of these deep neural networks. During my graduate work, in the years when I first met you, I would call it the time of the science, where we kind of understood everything about what we were doing: the nodes and arcs in a Bayesian network, the semantics very clear, how things worked.
EL KALIOUBY: Bayesian networks. I did my PhD in this so I could easily spend the whole hour talking about them – but essentially they’re a mathematical model that represents relationships using probabilities – it’s a type of machine learning approach that was very popular back in the day.
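As a minimal illustration of the idea, here is a two-node Bayesian network sketched in plain Python. The variables, probabilities, and the rain-and-wet-grass scenario are invented for the example; the point is just that the joint distribution factorizes along the arcs, and Bayes’ rule updates beliefs from evidence.

```python
# A minimal Bayesian network with two binary variables, Rain -> WetGrass.
# The single arc carries a conditional probability table (values invented).
P_rain = 0.2                                  # P(Rain = true)
P_wet_given = {True: 0.9, False: 0.1}         # P(WetGrass = true | Rain)

def joint(rain, wet):
    """Joint probability P(Rain = rain, WetGrass = wet), factorized along the arc."""
    p_r = P_rain if rain else 1 - P_rain
    p_w = P_wet_given[rain] if wet else 1 - P_wet_given[rain]
    return p_r * p_w

# Inference: observe wet grass, update the belief in rain via Bayes' rule.
p_wet = joint(True, True) + joint(False, True)    # marginal P(WetGrass = true)
p_rain_given_wet = joint(True, True) / p_wet      # posterior P(Rain | WetGrass)
print(round(p_rain_given_wet, 3))
```

Observing wet grass raises the belief in rain from the 0.2 prior to roughly 0.69, which is exactly the kind of transparent, inspectable reasoning step Horvitz describes from that era.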
HORVITZ: And in my mind at the time, there was a certain sense of we understand the foundations from the atoms to what these systems do when you throw them at real world problems. And we always had friends and were always very intrigued by connectionist models.
That’s what we used to call neural network models. The connectionist group. The connectionist approach. We had debates on stages at AAAI meetings, the big AI conference. The connectionists versus the logic heads versus the Bayesian network people.
EL KALIOUBY: I was definitely in the Bayesian camp.
HORVITZ: Yeah. Oh, I know you were. And all of a sudden we are mystified in ways, which is exciting to me, in the same way we’re mystified about our own minds.
EL KALIOUBY: Eric is describing where this debate has landed today. Basically, the “connectionists” were right!
Or rather, it’s their way of thinking that has led to the biggest, recent breakthroughs in AI, creating computer models that don’t take a totally linear, rational approach. But rather they mimic the complex connections our human brains make, even when those connections are not 100% deterministic.
HORVITZ: I used to always frown on saying neural networks are like biological neurons, because they’re not; they’re very different. But all of a sudden I find myself today, for the first time in my career, coming back to the reasons I was motivated to enter the field to begin with: understanding the mysteries of mind.
How Microsoft built AI ethics into product decisions
EL KALIOUBY: And we’re going to come back to some of the applications of this and also what it means for our humanity in a second. But first, you’re the chief scientific officer at Microsoft. And I know when we caught up recently, you also mentioned that you head up the safety board, which basically like you approve the AI products coming out of Microsoft. Tell us more about your roles within Microsoft.
HORVITZ: I chair the Aether committee, an acronym that stood for AI and Ethics in Engineering and Research. Aether was formed by myself and Brad Smith, our chief counsel, in 2016, when I reached out to Brad to say, we need to start thinking about the implications of building systems that will be entering into domains that have been solely the realm of human decision making in the past. And we set up a committee, and one of the first things we did was come up with a set of principles. These six principles are accountability, transparency, fairness, reliability and safety, privacy and security, and inclusiveness. And we can talk about each one of these things, because we actually ended up writing a book, freely downloadable and still available, called The Future Computed, from Microsoft, that talked about why we factored things in that way and chose those six.
EL KALIOUBY: Microsoft then implemented these principles, and began creating tools to combat deepfakes, guard against cyber threats, and place safeguards for children.
The company also runs internal audits, where ethicists, engineers, and scientists assemble to vet the risks of new AI tools.
HORVITZ: Sometimes they’d stop projects, other times they would modify them and give guidance. And they came up with three considerations for how you’d categorize an AI application as sensitive and requiring study and deliberation. The first consideration for why something might be sensitive in an AI system is that it puts people at risk for physical or psychological harm. The way I look at this is like there’s like rings around a human being. The first ring is like, actually, will this harm me physically and then psychologically?
Then the next ring out is, does this AI system have a consequential impact on someone’s life opportunities, loans, healthcare, education. And then the third ring out is, does this AI system pose a broader threat to people in society, in particular, a threat to human rights, including civil liberties.
So we look at all of those three dimensions and make recommendations. So we ended up reviewing hundreds of cases, and we have our live case library.
What human flourishing means in an age of intelligent machines
EL KALIOUBY: So we both share this conviction that AI should really be in service of humanity, and it’s clear that a lot of the work you’ve been doing falls in that realm. But you also take it a step further. You really kind of advocate for applying AI for human flourishing. How do you define human flourishing?
HORVITZ: It’s a really interesting question you asked because there’s so much more written about challenges to human health and well being than there is about what it means to flourish as humanity, as people, as individuals. Some of the best work by the way was done on flourishing by, believe it or not, by Aristotle who talked about eudaimonia.
He even had a word for this. I’ll put it in my own words: when you look back at your life from when you’re in your eighties or nineties and think through what it was that really brought you happiness and contentment, a sense of warmth and wellness, with being a human being on planet Earth.
What are these things that you’ve accomplished? It’s not necessarily ever going to be in someone’s bank account statements. In fact, people would say, boy, I looked at the wrong thing in my life. I really should have worked on my relationships more. Or they’ll think back, as I always do when I think about what makes me happy and content.
I know I’m a geek, but I think about the time I was sitting quietly in a conference room at Stanford as a grad student and I had this rush of intense ideas and it was there with me in a very small little conference room late at night and it was like a conversation with the whiteboard where I covered it, I said I had my thesis direction and I was so excited for the next four years about that whiteboard.
And that was the moment of the idea of creative breakthroughs and with a certain goal, to understand human intelligence. But also if you think back, it’s like the first time I kissed my wife.
These moments that change everything. Having a child and raising my son from birth to where he is now, a PhD student at Columbia, watching his growth and just being there as somebody who could do his best to try to mentor. That’s a major part of my life and a major part of my flourishing. And I’d love to see us, as humans, thinking more deeply about that when it comes to everything from elections, to democracy, to participating in civil society, organizations and movements, to charity, to helping others.
EL KALIOUBY: I recently came across Harvard’s Human Flourishing Program, a social science effort that seems right in Eric’s wheelhouse.
They defined five points of flourishing. So I wanted to get Eric’s take – since he hadn’t heard about them before.
I’d love to get your thoughts on how might AI influence each of these. So physical and mental health, happiness and life satisfaction. Meaning and purpose, character and virtue, and then social relationships. A, do you agree with those, and B, how might AI play a role.
HORVITZ: Well, those are all very big swaths through the goodness of a life well lived. So we might pick one topic there, for example, education, coming to understand the world. I think these AI systems, whether it be answering a question like, what’s the scoop with human flourishing and eudaimonia? What did Aristotle do? Give me an answer that I can digest at my level of understanding, given where I am.
I think these systems will be incredibly valuable for giving people, in more efficient ways, answers to their questions, and pathways to ongoing sequences of cycles of curiosity and growth when it comes to learning.
And I think that this will add to meaning, comprehension and understanding. I should back up a little bit and say that my team was one of the first teams at Microsoft, and thereby one of the first teams on the planet, to get access to GPT-4. And one of the deep dives we did was in what I would call, how might these systems be useful to bring people together? So we have a paper called Sparks of Artificial General Intelligence, which we wrote back then to capture the dimensions of what we were seeing. And you can almost read this paper and see the electricity and the surprise we had, as we went through these examples.
But one of them as an example was, we asked the system to help with the situation. You’re at a Thanksgiving meal and it’s your mom and her brother, your uncle, and they have very different feelings about getting vaccinated and they’re having a debate because it’s Thanksgiving and these things come up, and we give the example where the system did a beautiful job of saying, here’s the perspective I would take: make sure they both know you love them deeply and listen to what you have to say about your care, about their health and what you believe.
And since then, there’ve been some formal studies of the systems having the ability to facilitate conversations. In fact, Science Magazine just, I think, had two articles talking about how these systems can help people better grapple with what are called conspiracy theories to understand maybe the basis for some of the findings they see in the world.
And the other was on helping to facilitate different points of view and conversation. That was a long answer about flourishing and about education, but I thought you’d appreciate that.
EL KALIOUBY: I’m with Eric. AI can help us flourish as individuals. It will help us learn more and communicate better.
But there are still plenty of unknowns. For example, if you collaborate with AI to create a piece of music or art or even write a book, who is the real creator? How much of the work is yours or the machine’s? We’ll get to that after a short break. Stay with us.
[AD BREAK]
Why authorship and authenticity get harder with generative AI
As we use these tools as collaborators and thought partners, the line of authorship and ownership becomes really blurry, right? If I use ChatGPT to kind of think through an essay or a scientific experiment or create a new film or music or art piece, who owns this work?
And do we need new frameworks to—
HORVITZ: So, interestingly, I was pulled into a National Academies study that led to a paper, which came out in May in the Proceedings of the National Academy of Sciences, called “Protecting Scientific Integrity in an Era of Generative AI.”
I know you’re talking about more the creative arts, but let’s start with science for a second. If a scientist uses one of these tools and ideas come through in response to a query without attribution to the authors of the ideas, is that an okay thing to be doing?
And alternatively, these systems are creative. If they put two and two together and come up with a new concept, does the scientist say, well, it was my prompt, or attribute the idea to model 17.3 on this date. Also in the world of generative AI, their ability to synthesize data is becoming so high fidelity that data’s coming out that you can’t necessarily discriminate the synthetic data from empirical data that you get from an experiment. What kind of metadata do we need to have on data moving forward so we don’t confuse the two?
So in that paper there’s a whole set of agreed-upon key principles for moving forward, for doing science with integrity. One of the recommendations was, we need to stay on this because things are moving so quickly: create a strategic committee at the National Academies that will be overseeing and looking at implications of these tools for the sciences, making recommendations to different stakeholders, to model creators, to model users, to scientists in different fields. Another recommendation was, be very cautious about using models to guide key decisions in academia, including promotion and recommendation letters, for example, reviews of papers.
And the same for creative arts. So it could be really interesting when we move forward with copyright deliberations to understand what’s the right thing to be doing in a changing world where these systems can do creative things, but they also are drawing upon the creativity of humans in different ways. It’s raising questions that we need to answer as a society, I think.
EL KALIOUBY: Do you think this will put more premium on human generated stuff?
HORVITZ: Yes, I do. And going back maybe to six or seven years ago, I was asked to give a set of predictions. But one of them was, someday people will seek out art, whether it be music or literature, that is certified, created by humans without any computational tools. And they’ll pay more for these things. And shortly after that, I was sitting at a meeting at Microsoft where this beautiful song was played, and the lyrics and the music were created by an AI system. And it was really this heartfelt song. And I thought to myself, I know it was a beautiful song and I know it was creative and it’s celebrating AI, but something, I want to have somebody sing about their life and what they’ve experienced as a human and I’ll pay more for.
EL KALIOUBY: More for that. That’s fascinating. Like the, to know that it stems from this like authentic human experience. Yeah.
HORVITZ: Absolutely. And the question is, will society feel the same way over the decades coming forward?
Where AI could deliver its biggest breakthroughs in biology
EL KALIOUBY: Yeah. So interesting. So I want to move us on to AI innovations, but I want to cover as many of these as we possibly can. So I want us to kind of rapid fire them. So one of the areas I’m particularly passionate about is the intersection of AI and the biological sciences.
HORVITZ: Oh, gosh. Yes.
EL KALIOUBY: Yeah, I know we can spend a whole episode on that. But what are some of the key things you’re excited about in this space?
HORVITZ: These AI tools are providing us with new computational microscopes where we can see molecules and seeing them, not just their shape, but how they move. We can now design them to go after target proteins and receptors, including those that could disrupt disease. I believe we will see in our lifetimes breakthroughs in medicine that will be clearly called out as AI breakthroughs.
For example, I think we’re going to be able to disrupt immune diseases, a lot of them in our lifetime. I think we’re going to be able to transform more cancers into chronic diseases that are managed. I think we’re going to get to the core of the mystery of neurodegenerative diseases in our lifetime.
I think we’re going to get to some sparks of understanding about what’s going on with Alzheimer’s disease and frontotemporal dementia. I often say that the biggest impact of AI in our lifetimes will be on biosciences.
I think we can probably agree on that.
EL KALIOUBY: Yeah, we do agree on that. It’s an area, as you know, I’m very interested in kind of identifying transformative companies in that space and backing them. So another area that I’m super fascinated about is how AI can take us to more sustainable living and in particular reimagining food systems.
So again, what do you see in terms of innovations in that space?
HORVITZ: Certainly the biosciences will bear on agriculture: the guidance for how to farm more efficiently and effectively, for growing plants without poisonous insecticides, for making them resistant without imposing on the genetics in a way that makes the foods less safe. I think we’ll be seeing work in this space.
It’s happening now. I’m thinking about the agricultural possibilities. We have a project we’ve called FarmBeats over the years, using AI and communications technologies to change the way farming works. With climate change, we’re going to see some interesting and unfortunate changes — challenges to where we grow food and how we distribute it. So we’ll have to get creative about sustainable agriculture and food and its distribution.
How AI interfaces may evolve beyond chat into humanlike interaction
EL KALIOUBY: Yeah, absolutely. And then one of the things I also think a lot about is, I don’t think the next generation of a human machine interface is going to be a phone. My thesis is it’s going to mirror human to human communication, which is based on perception and conversation and empathy. I’m curious, what do you think the evolution of an interface is going to be like?
HORVITZ: So Rana, now we’re getting into the field you studied, and you have led on with your Affectiva work and so on. When it comes to systems that can actually communicate in more natural ways, my initial reaction is the following. Many people have first come to understand AI through the chat interfaces because it just exploded onto the scene just a couple of years ago. That’s what AI is. But beneath the hood is this incredible spark of increasingly general intelligence, which can power up so many different kinds of interactive experiences.
We need to think beyond simplistic interaction modes and interfaces and get really creative about the possibilities of creating deeply human, AI-collaborative, fluid experiences. And you might say that there’s a huge opportunity, given how well we work with other people, with humans, as part of our evolutionary history, of thinking how to do that better. A few years ago, we built a system, my computational assistant, that would understand all about my schedule and help people that came by to find out where I was and how to pencil me in for meetings and so on. And at one point we took some of the uncertainties at different levels.
Did I understand well? Can I hear well? Am I seeing you well enough? And we said, instead of having these scores inside the system that are like log functions of entropy levels, we basically pushed out the uncertainty to eyebrows, a tilted head, to asking again, to express confidence and uncertainty.
And all of a sudden, we took this set of mathematical principles and had them controlling expressions in a natural way. And you realize that we all the time communicate our uncertainty to each other and our understanding and our confirmation and our happiness. And if we can’t see well, we’ll squint.
And in the paper we wrote with videos, we showed how this can all work as a much better interface to human beings than bar graphs or numbers.
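The mapping Eric describes, from an internal confidence score to a humanlike signal instead of a bar graph, can be sketched in a few lines. The thresholds and gesture names below are invented for illustration, not taken from his system.

```python
# Hedged sketch: map a system's internal confidence in what it heard or saw
# to a humanlike expression rather than a raw number. Thresholds and gesture
# names are hypothetical, chosen only to illustrate the idea.
def uncertainty_gesture(confidence: float) -> str:
    if confidence >= 0.9:
        return "nod"              # confident: confirm and proceed
    if confidence >= 0.6:
        return "tilt head"        # mild doubt: invite elaboration
    if confidence >= 0.3:
        return "raise eyebrows"   # significant doubt: signal surprise
    return "ask to repeat"        # too uncertain: request clarification

print(uncertainty_gesture(0.95))
print(uncertainty_gesture(0.4))
```

The design choice is the point: the same mathematical quantity drives the behavior, but the interface speaks the nonverbal language humans already read fluently.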
EL KALIOUBY: Yeah, absolutely. And I think exactly to your point, all these different — I’ve been doing research in—
HORVITZ: I know you’ve been a leader.
EL KALIOUBY: Right? But now I feel like it’ll finally come into place because we’ve got the large language models. We’ve got these different modalities finally ready to all be combined in a multimodal context.
So I’m excited to see AI move from chatbot world to like, physical embodied AI.
HORVITZ: When I see that, you’ll be in the cloud over my head when these things are happening. Oh, this is what Rana was talking about over the years. I think though, along with that, a close friend just showed me these startups that are now, they said, give me your video and I’ll synthesize a version of you for a zoom call.
I said, we also want to think deeply about all the implications on the dark side of the wonders of having human like agency and agents in the world.
EL KALIOUBY: So what are some of these negative implications of AI that Eric thinks we need to address? We’ll get to that after a short break.
[AD BREAK]
The biggest risks from disinformation to biosecurity
What is your top concern when it comes to what we’re building with AI?
HORVITZ: I have two main concerns right now. I’m deeply concerned about the disinformation that comes not just through visual and auditory renderings that are as good as reality, but also through the persuasive powers of these systems, on the cognitive side, to create persuasive stories and campaigns. Those kinds of things might be useful when it comes to, say, a public health campaign, to persuade somebody to take better care of themselves and so on.
But of course they—
EL KALIOUBY: Or kind of debunk a conspiracy theory.
HORVITZ: Debunk. But, in general, the idea of manipulation, whether it be good or bad, doesn’t go well with people. And the idea that these systems can be harnessed by malevolent actors to impose a view or to persuade, or to disinform, I think is very troubling.
I’m concerned about the fact that our grandkids might be living in a post-epistemic world because of these technologies, where we really can’t figure out what’s true and not true. And this led our team at Microsoft several years ago to propose an innovation, which has gained lots of steam, called media provenance.
And so with media provenance, the idea is we ask the question: I have a camera, I have a display somewhere on the internet. How can I certify that every single pixel hitting the light-sensitive surface is rendered without any manipulation, end to end, glass to glass?
And one approach is with cryptographic methods that kind of put a wax seal on things. So you at least know if the wax seal is broken, you can’t necessarily trust it as being end to end reality, for example. Or, any piece of content, if it’s been created by somebody, like it looks like a trusted entity, let’s say you trust the BBC, to know that the thing you’re seeing was actually from the BBC without any manipulation along the way.
And that’s what Media Provenance is all about. So I’m hoping that that will be helpful. Technologies like that, along with policies. The second thing I worry about is AI and biosecurity. These methods can be used to design new kinds of toxins, gain of function, that can be harnessed by malevolent actors. And so I think we need to invest in that area deeply. Let’s just say these are short to midterm concerns that we need to get right.
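The “wax seal” idea can be sketched with a tamper-evident signature over the content bytes. Real provenance systems such as the C2PA standard use public-key signatures and rich manifests; the stdlib HMAC below is only a self-contained stand-in, and the key and content are invented for illustration.

```python
import hashlib
import hmac

# Hypothetical shared secret standing in for a publisher's signing key.
PUBLISHER_KEY = b"publisher-secret-key"

def seal(content: bytes) -> bytes:
    """Attach a tamper-evident 'wax seal' to the content."""
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).digest()

def verify(content: bytes, signature: bytes) -> bool:
    """True only if the content is bit-for-bit what was originally sealed."""
    return hmac.compare_digest(seal(content), signature)

original = b"frame pixels straight from the camera sensor"
sig = seal(original)
print(verify(original, sig))               # seal intact
print(verify(original + b" edited", sig))  # any manipulation breaks the seal
```

As Horvitz notes, a broken seal doesn’t tell you what was changed, only that the end-to-end, glass-to-glass chain can no longer be trusted.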
What skills young people should invest in as AI advances
EL KALIOUBY: Yeah, absolutely. So my daughter’s 21 and my son is 15 and a half. What skills do you think they should double down on, as we see AI continue to accelerate?
HORVITZ: What a great question, as to how the rise of these technologies is changing what people think they should be doing, as well as what might be the best thing to be doing, as they plan and invest in their careers, in their education, and in choosing an area to become expert at or to master. How are people feeling now about medicine when you see these systems performing at the level of medical experts? Is that going to change the decision making? There’s an article in the Journal of Radiology that came out a few years ago saying that even at the previous inflection point, when models could sort of look at x-rays and identify illness and highlight areas of pathology, fewer people were choosing radiology for their residencies because of fears about machine competency in that space.
Well, now we have broader competencies in medicine. So there’s some short term even issues to think through, how are these tools rising and affecting people’s choices now?
I do think that there will always be a place for the humanities and the arts that will be untouchable by these systems. They’ll be influenceable, but there’ll be a place for humans that’ll be very central: writing, creative writing, sharing human experiences.
I have often wondered if with the rise of automation of new forms, will there be a concomitant rise of an economy of human connection that becomes even more important in a world of automation, that is celebrated because it’s scarce. And there’s always been a magic in human apprenticeship and human mentorship that we don’t understand yet.
It’s not clear we’re going to get there with machines, as intuitive as they may become. I think it’s an area that we should have some uncertainty about. We should be willing to have open conversations with young people that it’s not clear where things are going.
Let me just say that I think some aspects of AI and how it diffuses into society will go fast. Other areas will go much slower than we expect. And humans will continue to play a strong and central role in many fields. So I don’t see that going away even if we have some anxieties now with how fast these technologies are becoming competent at things that we have relied upon as human centered in the past.
EL KALIOUBY: Thank you, Eric. That was awesome. Thank you for—
HORVITZ: It’s great talking to you. We should just talk more often, even beyond podcasts.
EL KALIOUBY: I agree. Deal.
Episode Takeaways
- Microsoft Chief Scientific Officer Eric Horvitz says this AI moment feels different because modern neural networks are both astonishingly powerful and, in some ways, productively mysterious.
- Horvitz explains how Microsoft built its AI ethics framework around six principles and a review process that can stop or reshape products when human risk is too high.
- He argues AI should ultimately serve human flourishing, helping people learn, deepen understanding, and even navigate difficult conversations with more empathy and perspective.
- As generative tools blur authorship in science and the arts, Horvitz predicts truly human-made work may become more prized precisely because it carries authentic lived experience.
- Looking ahead, he is most excited about AI’s impact on bioscience and sustainable agriculture, while warning that disinformation, biosecurity, and human skills deserve urgent attention.