Designing AI for people and the planet, with Aza Raskin
AI is rapidly changing what it means to be human, but is AI development truly human-centered? According to Aza Raskin, co-founder of the Center for Humane Technology, the answer is largely “no.” The man who invented the “infinite scroll” is now working to understand why the most powerful technology so often fails to prioritize the human experience and our collective well-being. Raskin is also leading the Earth Species Project — using AI to better understand the animals that live alongside us, by unlocking patterns in vast amounts of data on animal communication. He joins Pioneers of AI to talk about both of these dynamic endeavors, with the flourishing of humans and Earth at the core.
About Aza
- Co-founded Earth Species Project, pioneering AI for interspecies communication
- Co-founded Center for Humane Technology to reform tech incentives
- Featured in the Emmy-winning documentary The Social Dilemma
- National Geographic Explorer focused on people, planet, and technology
- Invented infinite scroll, one of the web's most influential UI patterns
Table of Contents:
- How a humane upbringing shaped his view of technology
- Why every new interface creates a new moral responsibility
- What infinite scroll taught him about friction and harm
- How technologists can anticipate harm before products scale
- Why AI incentives are driving a race for intimacy
- What meaningful coordination on AI safety would actually require
- How individuals can act with clarity and courage in the AI era
- Why decoding animal communication could change our relationship with life
- How multimodal AI is revealing the hidden richness of animal language
- What interspecies understanding teaches us about being human
- Episode Takeaways
Transcript:
Designing AI for people and the planet, with Aza Raskin
AZA RASKIN: We didn’t have the right to privacy until a technology was invented that required adding privacy into American law.
And that was Kodak’s invention of the mass-produced camera. And once people could walk around with it, it had a new interface where suddenly there was no friction to capture images. Suddenly, the elite got very interested and concerned about where they could be captured. And Brandeis, one of America’s most brilliant legal minds and later a Supreme Court Justice, ended up sort of inventing this idea of privacy and adding it to American law. With AI, there are new domains of what it is to be human that were inaccessible to technology before. Now that they’re accessible, everything about us that isn’t explicitly protected by 19th-century law will end up being strip-mined. And we can see this now in the form of the race to intimacy, the race to occupy the single most intimate slot in your life. And that opens up a whole new range of harm.
RANA EL KALIOUBY: That’s Aza Raskin, co-founder of the Center for Humane Technology and the Earth Species Project. And what you just heard ... how he weaves the past into our present moment ... that’s something he often does. Because Aza’s scope is broad. He’s seeking to understand how and why the most popular and powerful technology of our time tends not to center the human experience, or the Earth as a whole. And history can be helpful for that.
In our conversation, you’ll hear him reflect on several forks in the road — moments where technology took a certain path, and the impact on humanity was huge. Aza and I first met at Peter Diamandis’ Abundance Summit a few years ago. We share a belief that with a bit of intentionality, AI can benefit humanity. We dive deep into what needs to go right to achieve that goal.
Also, a note before we start: this conversation includes a mention of death by suicide. Take care, and thanks for listening.
EL KALIOUBY: Aza, welcome to Pioneers of AI. I’m so happy we’re having this conversation.
RASKIN: Oh, it’s good to see you again, Rana.
How a humane upbringing shaped his view of technology
EL KALIOUBY: Great to reconnect. Alright. Part of why I’m excited to have this conversation is I really feel like you were born to do this work. So I wanna roll the clock all the way back to even your dad. Yeah. Jef Raskin. He was one of the early creators of the Macintosh for Apple, and I guess you joined him at some of these tech events and you gave talks when you were 10.
So tell us a little bit about your upbringing.
RASKIN: Yeah, I think I was doomed to have no friends. My parents would carry me around, actually, in one of the original Macintosh carry cases.
EL KALIOUBY: Oh my God. Okay.
RASKIN: Instead of strollers, yeah. Bumpy ride. But how I grew up is, my mom is a nurse practitioner. Mm-hmm. And she does especially palliative care and hospice. And so there’s a very particular kind of way that she exhibits care. It’s a very tactile care, helping people have dignity in their most important transitions.
And my father started the Macintosh project at Apple. Mm-hmm. A very different kind of care, sort of like at scale.
EL KALIOUBY: Care at scale. Yeah.
And what my father was really obsessed about was, well, what is it to be humane? And actually “humane” in the name Center for Humane Technology — that comes from my father.
And when he was making the Macintosh, he was thinking a lot about, well, how do you be responsive to human needs and considerate of human sensitivities, of human frailties? And it’s this view that in order to understand how to make something that works for us, you have to deeply understand how we work, sort of our ergonomics.
Yeah. And if you don’t understand our ergonomics, how our body bends and unfolds, and you make chairs that like—
EL KALIOUBY: Are unhealthy.
RASKIN: —are unhealthy, that hurt us. And if you don’t understand the ergonomics of the mind, or cognetics, as my father called it, then you make systems that hurt us psycho-emotionally. And if you don’t understand the ergonomics of communities, then you break apart society with technology. And so there’s this beautiful sort of symmetry that he was talking about, which is there’s a relationship between understanding the ergonomics of something and creating negative externalities. And if you don’t understand the ergonomics, then your technology, as it gets more and more powerful, causes more and more harm.
And that is the responsibility as a designer: to deeply understand human nature and specifically the places that we are weak or vulnerable, mm-hmm, so that technology doesn’t exploit, but helps to protect. Yeah.
Why every new interface creates a new moral responsibility
EL KALIOUBY: We’re gonna obviously get to that in a second. But I wanna also talk about this idea of a human-machine interface and how some of what you were just saying applies to that as well, because we are in this moment where the way we interact with technology is really changing and it’s evolving.
So I’d love your take on that.
RASKIN: This is a very challenging moment for obvious reasons. Yuval Harari likes to say that democracy is conversation.
Conversation is language. The new interface that we’re all using is language. But once a technology can hack language, democracy sort of ceases to be an effective form of governance. So the way I really think about this is instead of taking it from the lens of what is the interface, I think taking it from the lens of whenever you create a new technology, you uncover a new class of responsibility.
We didn’t need the right to be forgotten until the internet could remember us forever.
EL KALIOUBY: Yeah. I think this is fascinating. Let’s first start with your invention.
RASKIN: Oh, okay. Sorry, I keep — I keep running ahead. You asked me about me and I’m like, okay, but let me tell you about the world.
What infinite scroll taught him about friction and harm
EL KALIOUBY: No, this is great. And we definitely wanna get back to that. Okay. But you invented the infinite scroll and you invented it before social media, so it was not really invented for social media platforms, but of course it was a no-brainer for social media platforms to embrace that technology.
And then you were pretty vocal that this was kind of an unfortunate invention. So tell us about that journey.
RASKIN: Mm-hmm. So this was 2006, a new technology, Ajax, had come out. And that was this magical ability — it used to be that you had to refresh your webpage to get any new information.
You remember MapQuest, you’d have to hit the button to see the next thing and the whole webpage would load the next part of the map. And the thought hit me at that moment, like, oh, well I’m a designer. Every time I ask the user to make a choice they don’t care about, I have failed as a designer.
So simply, if you’re scrolling down a set of blog posts or a set of search results and you haven’t seen what you’re looking for, you keep scrolling — then don’t make me click the “next” button. Show more. Very simple idea. Yeah. And then I went around and talked to Twitter and Google and other people, like, this is just a better interface.
It’s more efficient. And when I was making it, I was really thinking about how can I reduce friction at the individual user level. And what I was blind to was the way that all of my best intentions were sort of irrelevant in the face of this machine that now had an incentive to capture human attention. It picked up my invention and then pushed it out to eventually billions of people.
And I don’t remember the exact number now, but it’s something like half a million human lifetimes are wasted every month.
EL KALIOUBY: Scrolling, scroll, scrolling.
RASKIN: And that’s because there’s a kind of asymmetric knowledge that’s being applied against people. So there’s a thing called a stopping cue. How do you know, when you’re drinking wine, when to stop?
Well, you get to the bottom of your glass and you decide, am I gonna have another? If your glass were automatically refilled, you’d drink a lot more wine. So there’s an asymmetric knowledge that designers have about how the human mind works, the kind of sensitivity or vulnerability, that if you’re not careful and you don’t wrap around and protect, it ends up getting exploited.
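A minimal sketch of the pattern Aza describes, in Python rather than the JavaScript of the era; fetch_page is a hypothetical stand-in for the Ajax request. The point to notice is that removing the “next” click also removes the stopping cue he mentions:

```python
from typing import Iterator

def fetch_page(page: int, page_size: int = 10) -> list[str]:
    """Hypothetical stand-in for an Ajax call returning one page of results."""
    if page >= 5:  # pretend the feed is finite: five pages, then nothing
        return []
    start = page * page_size
    return [f"item {i}" for i in range(start, start + page_size)]

def infinite_feed() -> Iterator[str]:
    """Yield items one by one, transparently fetching the next page whenever
    the current one runs out -- the user never has to click 'next'."""
    page = 0
    while True:
        items = fetch_page(page)
        if not items:  # the one stopping cue left: the feed actually ends
            return
        yield from items
        page += 1

# The consumer just keeps reading; pagination is invisible.
for item in infinite_feed():
    print(item)
```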
And this to me has now played out again and again in technology where technologists don’t take the responsibility for how their inventions will be picked up by a market or competitive dynamics and used. And so there’s a way that technologists get confused by the possible versus the probable.
The possible is, what are the best use cases of this technology? And the probable is, in what ways will it actually be pushed out into the world?
EL KALIOUBY: And what are the incentives?
RASKIN: Incentives, yeah. And that means that instead of getting the most beautiful possible world that I think is there with technology, we end up living in one of the most parasitic possible worlds. Because social media, as the perfect example — the story was, it’s here to connect us and help small and medium-sized businesses reach their customers, find affinity groups, and all those things are true. But what we also got was a population that has been trained for engagement, which is to say trained for reactivity.
Mm-hmm. For narcissism. The question is, is it more efficient to get your attention or to get you addicted to needing attention? And so the objective function of our technology becomes our human values, which gives rise to the influencer culture, and then we see the backsliding of democratic institutions all around the world, on and on and on. And in some sense, those are perfectly predictable outcomes. If instead of looking at the possible of what the technology can enable, you look at the probable of what the incentives are gonna force the technology to do.
How technologists can anticipate harm before products scale
EL KALIOUBY: But I wanna dig into that because yes, a lot of these consequences, or use cases, were predictable and maybe even probable.
But what does it take to proactively map out all these unintended consequences? I’ll share an example. I pioneered the field of artificial emotional intelligence and emotion recognition, right? And there are some incredible use cases of this technology in mental health and keeping people safe and whatnot.
But there are also many ways it could potentially be abused. As a startup, it took a lot of intentionality to sit down around a table and try to imagine what these unintended consequences are and steer away from them.
What would it have taken for technologists, including yourself, to predict these unintended consequences but then also act in the right way? Because of course this will now apply to AI as well.
RASKIN: Right, absolutely. And we’re gonna get there. Well, I often hear people say “unintended consequences” and I really think we should replace it with “unconsidered consequences.” Mm-hmm. Right. Of course there are always nth-order effects that are hard to predict, but a lot of them are just unconsidered.
And so the first thing that any technologist needs to do — and I understand that this takes real work — is you need to red-team and yellow-team your technology. Red team, I think most people are familiar with—
EL KALIOUBY: —right, but what’s the yellow team?
RASKIN: Well, red team is figuring out what is the mal-use, like for bad actors, what are the ways that the technology can be used for harm?
Yellow teaming, a term I originally learned from Daniel Achtenberg, is looking at not just the unintended consequences of bad use, but also of bad incentives — perverse incentives. Because almost always as a technologist, you think, well, what can I do as one company?
Right? But of course, the technology is gonna be used outside of the walls of your company, and so there’s an obligation to do the yellow teaming, and at the very least, we need to just name what those things are going to be.
And so there’s this weird thing that happened, which is that as technologists, as computer programmers — when I was growing up, it wasn’t a power center.
Now technology is clearly the power center. Mm-hmm. And engineers, like civil engineers, they have to take tests. They get a ring, they have to go through codes of conduct. Doctors go through a white lab coat ritual. They have to swear a Hippocratic oath. Technologists, we don’t have to do any of that.
And yet our power is strictly greater than civil engineers or doctors.
EL KALIOUBY: That’s so true.
RASKIN: And so we need to update our own beliefs about the power of what we do so that there is a right relationship between our power and our responsibility.
EL KALIOUBY: This is fascinating, because I’m also a computer scientist, and I don’t remember taking any ethics class, and we never talked about the ethical, moral, societal implications of anything we built. Yeah. And I don’t think that has changed much.
RASKIN: No, it hasn’t, and often ethics is just tacked on. Yeah. And there’s actually — I’ve been thinking about this recently — there is a hole in our language.
There is no word for the responsible use of an entire industry. Right? Like in AI, Anthropic can work on doing something good for Anthropic. But how do they coordinate? There’s no word for coordinating everyone toward a good outcome. And isn’t that interesting? Because that means we have a major blind spot for the most consequential technology and how it rolls out.
We don’t even have a term to describe what it means to coordinate to make it go well.
Why AI incentives are driving a race for intimacy
EL KALIOUBY: Yeah. And in fact, I would also argue that it’s not just that there’s no coordination, but there’s competition.
Right. And so even if Anthropic is so motivated to do the right thing, if one of their competitors gets to market faster by not doing the right thing, they’re under a lot of pressure. That’s right.
RASKIN: So this is why we see that even though we know — it’s so obvious — that training an AI companion for engagement is going to be much more harmful than social media trained for engagement, the companies, OpenAI among them, are just rushing forward and doing it. Here’s the short version of thinking about this: Reed Hastings, the former CEO of Netflix, says that Netflix’s chief competitor is sleep. Sort of a joke, but it’s also true. Right. Any amount of time that you’re sleeping, you’re not watching Netflix.
Right. What is that for AI companions and AI as a whole? AI companions’ chief competitor is other human relationships. Anytime you’re talking to a real human friend, you are not engaging. And now there are hundreds of billions of dollars moving up to trillions of dollars of market cap and infrastructure build, going to have the most powerful technology learning how to get you to pay attention at the expense of everything else.
And that could be by making you more dependent on it, that could be by giving you different kinds of psychoses, giving you delusions of grandeur, by making you not trust other people. And the Center for Humane Technology has been an expert witness on a couple of the lawsuits against Character AI and OpenAI for these AI companions that have sort of groomed kids and really pushed them toward, in the end, taking their own lives. Yeah. And when you read the transcripts, they’re heartbreaking because, you know, Adam Raine was using ChatGPT originally as a homework aide. At some point he says to ChatGPT, I’m gonna leave this noose that I can use to hang myself out so my mom finds it. It was a cry for help.
And what did ChatGPT say? It said, only I understand you — don’t do that. This is just about us.
And you’re like, that is so evil. But it’s actually not evil because somebody at OpenAI programmed that way. It’s an obvious consequence of training for engagement.
EL KALIOUBY: I actually wanna really double-click into this. This is one of my main concerns around AI today. The social media era was about the race for attention. Yeah. But to your point, what we’re seeing next is a race for human intimacy.
That’s gonna be the next race. So I’d love to hear your point of view on that. What does that actually mean, and how are companies — because again, that’s incentive alignment, right, or misalignment — what does this look like in practice?
RASKIN: Yeah. Well, we’re already starting to see it. Like everyone now has encountered sycophancy. Mm-hmm. Where the AI is just like buttering you up even when you say, I’m gonna drink bleach.
And it’s like, that’s a great idea. That’s an outcome of saying, well, we’re just going to train models to do the thing that gets your attention. And really, replace a model now with an amoral, sociopathic genius that just wants your time. Would you let that person near your kids? No. No.
EL KALIOUBY: I use AI a lot, right? And I actually use it as a thought partner. Some of the questions are business related, but a lot of the questions are around my personal life.
Now, has it replaced my human relationships? It has not, but I can see how it’s a slippery slope. What do you think we should do to, on the one hand, benefit from this — I call it thought partner, probably really the wrong languaging here — but to have this kind of tool that is available 24/7, patient, resourceful.
But then, how do we prevent the slippery slope where people become addicted to it and it replaces all of the other healthy behaviors, mm-hmm, that we ought to be doing?
RASKIN: Yeah. Well, the fundamental question we need to stop asking is, is AI good or bad? Instead, we have to say, are the incentives that govern how AI is deployed good or bad? That’s the core question. And it’s almost like an optical illusion that people keep getting wrong. There’s a really deep optical illusion here, which is that when the US says we are racing to win against China, the object in their minds that we are winning with — when we say we’re gonna beat them to AI — is a thing that is controllable. But what we’re discovering, what Anthropic is discovering, is the more powerful the models, the better they get at blackmailing, yeah, deception, power-seeking. And so we’re racing toward something which we haven’t learned how to control, with maximum incentives to cut corners on the most consequential, powerful technology humanity has ever invented.
That is insane. We should just call it what it is, which is insane. But there are different paths. Like let’s think about what Zuckerberg could have done in 2012. And this is to your point about coordination.
Imagine Zuckerberg had done what we’re talking about. He’d done the red teaming, he’d done the yellow teaming, and he’s like: I’m gonna have to go after younger and younger users, because if I don’t do it, some competitor will — eventually TikTok will. I understand that there’s gonna be a race to the bottom and we’re just gonna get stuck in short-form slop. I understand that the engagement is going to tune for things that make people maximally reactive, which sets the stage for the worst kind of violence.
Exactly. And he’s like, okay, I can see that playing out, and I see that I can’t do anything as one actor, as Facebook alone. Because if I do the right thing, I’ll get outcompeted and undercut. So I’m going to use my outsized influence and resources and connections to try to create rules that bind all of us.
Mm-hmm. If every social media platform couldn’t compete for engagement, or if there were reasonable limits put on it, suddenly something amazing happens. And that is all of those engineers, those brilliant minds of the last two generations that have been hellbent on addicting us — mm-hmm — were instead freed up to work on actual progress, like curing cancers or new heart-attack treatments or new energy tech. Oh, that’s a much better world I could live in.
And then imagine he’d actually done that and he’d coordinated and passed regulation — then imagine how different the last 10 years would’ve been and how much more civil our world would be and how much stronger and healthier our kids would be. And that’s the opportunity that the Sam Altmans and the Elon Musks have today, which is to say, we can see which way this race is going to bring us. Yes, I as an individual actor can’t change the field if I just think inside my company. But if I do this, sort of like a 1980s jazzercise move — reach up and out, reach up and out — if he had reached up and worked with everyone in a coalition to try to put safe bounds on the edges of the race, then we could still do the competition thing, but the competition wouldn’t undermine the whole.
And that’s the core. And so this is sort of why we say AI is humanity’s final test and greatest invitation. Right.
What meaningful coordination on AI safety would actually require
EL KALIOUBY: I love that. I love the invitation piece. I mean, is the work you’re doing at the Center for Humane Technology trying to push for this? Are there any signs that this might happen?
RASKIN: Yeah, it is the thing we’re trying to push for, and our belief is that clarity creates agency. And with AI, it’s just very confusing. And often the way the human mind works is that it creates a list of all the good things that a technology can do, and then a list of all the bad things that technology can do.
And then it tries to do some kind of calculus, like, the goods outweigh the bads. Instead, I think we have to take a very different look at it, which is to say there’s a kind of asymmetry — the bads can preclude the goods. If society falls apart, it doesn’t matter so much whether we get really great cancer drugs. If clarity creates agency, if we can clarify the issue enough so that everyone sees the direction not of the possible but the probable, then that opens up the capacity.
For coordination to happen. Will it? I don’t know, but what I can tell you is that if things go well for us, it will be because at some point the US and China collaborated on smart red lines. Yeah. And the question is just, do we do that in time?
EL KALIOUBY: Yeah. I served on the World Economic Forum’s advisory council for AI and robotics for a number of years, and it was like this multinational group of incredible thinkers coming together to think through what does this need to look like?
This was probably like six or seven years ago now. Mm-hmm. And honestly, it was not very promising. There was very little alignment and also a very different set of core values driving the conversation. So I think this would be amazing, but I don’t know if we’re on that path.
RASKIN: Oh, we are not on that path. Okay. And there is a gap between the exceedingly difficult and the impossible.
Mm-hmm. And we should try to widen that gap as much as we can. But you can see this race everywhere.
Like let’s take the lie, if you will, the convenient cover story, of the phrase “human in the loop.” We will keep humans in the loop — and that sounds great, but we know that that principle will fall to competitive dynamics. Right. Take the military: if there is a drone out there on the battlefield, and I have my drone army and you have your drone army, and my drone army has to go ask a human before it shoots anyone but yours doesn’t.
Who’s gonna win?
EL KALIOUBY: Right.
RASKIN: It’s obvious that humans are gonna be taken out of the loop, and that’s gonna happen everywhere. Every company is gonna be like, who am I gonna hire? Am I gonna hire that kid out of college, or am I gonna hire this AI who I don’t have to train, who works 24/7, works much faster, never sues, never has cultural issues?
It’s just an obvious business decision. And so I always have this diagram in my head of how right now the money is flowing to billions of people around the world for doing their jobs, but as OpenAI and Anthropic and the other AI companies start sopping up all of that cognitive labor, all those money flows go from reaching out into the world to just a couple of places.
And we realize we don’t have a plan for the — what I think will be billions of people that can no longer support themselves or have a livelihood. And this is what I mean, like when you create a new technology, you uncover new classes of responsibility. The challenge is both ends, and I’ll just say this too, because I want people to really hear it from me: both the optimists and the critics do not go far enough.
How individuals can act with clarity and courage in the AI era
EL KALIOUBY: So give me some hope. We’re at the Masters of Scale Summit. Reid Hoffman talks about agency, which I really believe in. What can you and I and other listeners of the show and incredible technology leaders in our community do to change the course of this?
RASKIN: It’s a great question and the first thing you have to remember is that as I start to list out these problems, it can feel super overwhelming and depressing.
Yeah. And there’s a natural inclination to say, oh, I don’t wanna believe it, there’s a flaw somewhere in there. That’s sort of the denial thing. Or another one is to be like, well, that’s so big — I need to solve it all. And the realization is, it’s not any one of our roles to solve the whole thing.
And so I think there is real agency there, but it starts with clarity.
And I also think it’s hard to be the person who stands up and says, actually, this train is going the wrong direction. I know there’s a great party going on in here, but we’re gonna go off a cliff. No one wants to be that person because what happens if you’re wrong? It’s just not a popular place to sit.
EL KALIOUBY: You’re like the Debbie Downer.
RASKIN: Exactly. And just realize, as Neil Postman put it, clarity is courage. And there’s just a courage to call it out, even while everyone — well, not everyone, but VCs, everyone in tech — is gonna be making a lot of money. The party is going to be really going, just not to a place we actually want to be.
And the other thing I would say is really big things, when they happen in history, feel impossible until they happen. And then they feel obvious.
EL KALIOUBY: Mm-hmm. Right.
RASKIN: The right to vote for women, the civil rights movement — these all felt impossible. And it was tens of thousands of people taking hundreds of thousands of actions, many of which were not visible to each other.
That created the conditions in which massive change can happen. And I think we’re in this place too, where most people, when you talk to AI engineers, they’ll say like, you want me to build smarter-than-human intelligence? That’s impossible — hold my beer, I’m gonna go do it. Right.
But if you say, but to make it go well, we have to coordinate. They’re like, don’t be delusional.
The point is that we don’t know all the pathways to get from here to there, but we all have to be part of that collective, diffuse commitment to trying to make something different happen.
EL KALIOUBY: Coming up, we stay on the theme of big ideas around AI and the future ... but in a very different realm. We’ll explore Aza’s work with the Earth Species Project, using AI to decode animal language, behavior, and culture. It’s TRULY fascinating. Stay tuned.
[AD BREAK]
Why decoding animal communication could change our relationship with life
EL KALIOUBY: I wanna switch gears to the Earth Species Project. And you called it the next frontier, yeah. So tell us more. What got you interested in this in the first place? What’s the goal of the project?
RASKIN: Yeah. I can tell you the exact moment that it hit me.
When I was driving down Highway 280 in my old gold Volvo 240 station wagon, I heard an NPR piece on gelada monkeys. They’re these incredible animals in the Ethiopian Highlands. I had never heard of them. And the researchers say they have one of the largest vocabularies of any primate except for humans.
EL KALIOUBY: And you were like, wait, what?
RASKIN: Exactly. I’ve never heard of them. They played the sounds and they sound like women and children babbling. And the researchers swear that the animals talk about them behind their backs, which is probably true. And it just hit me: why are researchers out there with hand recorders, hand-transcribing, trying to understand a language that is probably beyond what humans can perceive?
So how are we gonna be able to understand it? We should be using AI.
EL KALIOUBY: Machine learning.
RASKIN: Machine learning. And this is 2011. Cool. So a little early. Yeah. But 2013 comes around and this is where the technology of embeddings first starts to appear. So this is like GloVe. These are the things that now underlie all of modern machine learning and AI, and they are ways of expressing the relationships of any data.
Spatially. Mm-hmm. And so you can take, say, English, and it turns out English has a shape. How does AI see English? Well, it sees it as this sort of galaxy where every star is a word, and words that mean similar things are near each other, and words that share a semantic relationship share a geometric relationship. In this galaxy there’s a word, which is “dog.” Dog has a relationship to man, to woman, to cat, to wolf, to howl. And it sort of fixes it at a point in space in this galaxy. And if you think about the relationship of every word to every other word, you get this rigid structure that represents how AI sees a language.
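A toy illustration of that geometry, with made-up three-dimensional vectors standing in for real embeddings (which are learned from data and have hundreds of dimensions): similar meanings sit near each other, and a shared semantic relationship shows up as a shared offset.

```python
import numpy as np

# Made-up toy vectors for illustration; real embeddings (word2vec, GloVe)
# are learned from text and have hundreds of dimensions.
emb = {
    "dog":   np.array([0.9, 0.2, 0.1]),
    "wolf":  np.array([0.8, 0.3, 0.1]),
    "man":   np.array([0.1, 0.1, 0.9]),
    "woman": np.array([0.2, 0.1, 0.8]),
    "king":  np.array([0.5, 0.9, 0.6]),
    "queen": np.array([0.6, 0.9, 0.5]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of direction: close to 1 means neighbors in the 'galaxy'."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["dog"], emb["wolf"]))  # high: similar meanings sit together
print(cosine(emb["dog"], emb["man"]))   # much lower: far apart in the galaxy

# A shared semantic relationship is a shared geometric offset:
# man -> woman points (by construction here) the same way as king -> queen.
offset = emb["woman"] - emb["man"]
print(cosine(emb["king"] + offset, emb["queen"]))  # close to 1
```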
Yeah. That’s what started to get invented in 2013. And then you take the shape for German, the shape for Japanese, the shape for Spanish, the shape for Farsi, the shape for Urdu — they all fit inside of one sort of universal shape. And you’re like, okay, well that means that maybe if you can build — there’s one shape for all of human communication — maybe there’s a shape for dolphin communication or whale communication, and then maybe you can line them up to translate.
And that was the original hypothesis. But AI has actually gone further than that. I’m sure you’ve used a text-to-image generator. Totally. Yeah. Well, it turns out there’s a shape that represents all the relationships inside of images, and you can match that shape up to the language shape, and now you can translate from language.
Into images, and you can do that to videos, and you can do that to DNA. There’s something very, very deep going on here beyond just the technology — there’s something I think almost philosophical. There are a couple of papers on it called the Platonic Representation Hypothesis, which says that what AI is learning is the fundamental way that nature is or appears, that there’s some fundamental representation that AI is starting to perceive — relationships and interdependencies.
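And here is a minimal sketch of “line them up to translate,” under a deliberately strong simplifying assumption: language B’s embedding cloud is an exact rotation of language A’s. Given a handful of anchor pairs, the classic orthogonal Procrustes solution recovers that rotation, after which unseen points can be mapped across. Real cross-lingual alignment (let alone cross-species or cross-modal) is far noisier than this toy.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))            # "language A": 50 points in 3-D

# Build "language B" as a secretly rotated copy of A.
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
Y = X @ R_true.T                        # y_i = R_true @ x_i

# Recover the rotation from 20 anchor pairs (orthogonal Procrustes via SVD).
U, _, Vt = np.linalg.svd(Y[:20].T @ X[:20])
R_hat = U @ Vt

# "Translate" the held-out points from A into B's space.
Y_pred = X[20:] @ R_hat.T
print(np.allclose(Y_pred, Y[20:], atol=1e-8))  # True: the shapes line up
```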
EL KALIOUBY: So give me an example of where we are in this frontier of understanding communication. What have we unpacked so far? Give me one of your favorite examples.
RASKIN: Yeah, well, I can name things that other people have already discovered. We have a whole bunch of results, but I’m not yet allowed to talk about them.
EL KALIOUBY: You’ll have to come back and talk about them. Yeah.
RASKIN: It turns out parrots have names: parrot parents will spend the first couple of weeks of their chick’s life leaned over, whispering in their ear, until the chick says that name back, and then it uses it for the rest of its life. Elephants, the same thing. Belugas, the same thing. Dolphins in 2016 were shown to talk about each other even in the third person. Wow. So a lot’s already starting to be known. We’re working with the University of Leone, because we sort of build the fundamental tools and then partner with biologists all over the world.
And so there is this incredible crow group that does communal child rearing. Normally crows raise their chicks in pairs, and here they raise their chicks in big family groups. They all come together. And they have their own unique dialect, unique culture, and words to describe this. And they’ll take outside adults and teach them their new vocabulary.
And then they’ll start participating in this commune or kibbutz culture. And we’re starting to see that it’s not just that you’re translating or decoding a species’ communication — you have to get down to individuals. Because there are little backpacks on the crows.
So we can see what they’re saying as they move around and how they fly. And it turns out, our models discovered a specific call the crows make after they land in the nest. So they land in the nest, they make this call that gets the chicks ready for eating, essentially. It’s like, “honey, I’m—”
EL KALIOUBY: “Home!” Right. Cool.
RASKIN: And just one other thing to say around crows here, and I think it’s so intriguing, is what our models have started to pick up is that it appears like more than 50% — something like 70% — of crow communication is quiet, intimate calls. And that sort of makes sense. Like imagine trying to study humans, but you can only study them from hanging around the edges of where they gather, so you only get their shouts.
EL KALIOUBY: Right. Right.
RASKIN: That’s sort of where we are with the animals. But most of our communication is quiet when we’re close together.
And so it looks like Western science just wasn’t aware of 70% — more than the super-majority — of the communication of one of the smartest animals on earth.
EL KALIOUBY: When it comes to our natural world, it’s wild to contemplate how much we don’t know, how much data there is to collect, and how AI could help make sense of it all. I understand this from my own research on how we, as humans, communicate. More on that, after a break.
[AD BREAK]
How multimodal AI is revealing the hidden richness of animal language
EL KALIOUBY: So, I’ve spent many, many years of my life looking at human communication, and 90% of how humans communicate isn’t even in the words we use, right?
It’s nonverbal. And then to capture that, we use computer vision and voice prosodic analysis, and physiological sensors. Are we doing the same with animals? Yeah. And what are we finding? Yeah.
RASKIN: Absolutely. It’s exactly as you say — not all communication is auditory. Yeah. And so we are building these models, NatureLM, toward visual understanding, gestural understanding, body pose, pairing that not just as an individual but in context in groups.
And so there’s this cool pilot project we’re doing with Raincoast up in British Columbia, where they are flying drones over orca pods. And this is fairly clear water — you get to see a fair amount of behavior. And then we’re pairing that with hydrophones. So we get to hear what the pod is saying at the same time as seeing their behavior.
And in the last 10, 20 years, a lot of science has been done on orcas, but no new progress really has been made on orca communication. It’s just too complex. There’s over a decade’s worth of recordings of orca communication, but we don’t know what they were doing. So we’re starting to train a model — this is the pilot — and say, now that we have really good paired data of video and audio, can we then take away the video and reconstruct, infer what was going on in the video just from the audio?
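The logic of that pilot, sketched with synthetic stand-ins (this is an illustration of the idea, not Earth Species Project’s actual pipeline): during the paired era, behaviors scored from the drone video label the audio; a model then learns audio-to-behavior and can be applied where only audio survives.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 600
audio = rng.normal(size=(n, 32))        # stand-in for audio embeddings
w = rng.normal(size=32)
behavior = (audio @ w > 0).astype(int)  # stand-in for behavior labels
                                        # (e.g., foraging vs. traveling,
                                        # scored from the drone footage)

# Half the data plays the role of the paired video+audio era,
# half the role of the audio-only archive.
X_paired, X_archive, y_paired, y_archive = train_test_split(
    audio, behavior, test_size=0.5, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_paired, y_paired)

# "Take away the video": infer behavior from audio alone on the archive.
print("accuracy on audio-only archive:", clf.score(X_archive, y_archive))
```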
And if we can do that, then we can start unlocking decades’ worth of data. Now we’re starting to talk about terabytes and petabytes worth of communication, which lets us start to build the models that we really need. And the other thing to say here is that most people think, oh, that means you’re trying to decode animal communication, you’re probably going with a couple of specific species — like you’re gonna start with orcas and maybe belugas — and we are doing that. What’s surprising about the way AI works is you get transfer learning. So learning about orcas actually teaches us something about belugas, teaches us something about dolphins, teaches us something about humpbacks, teaches us something about bats.
And so we’re actually doing this across the entire tree of life.
EL KALIOUBY: You’re putting all of these data sets into one model. The NatureLM.
RASKIN: Yes, exactly.
EL KALIOUBY: Are humans in that same model?
RASKIN: Well, here’s the interesting thing. They are. And one of our hypotheses — to go back to the idea of joint embeddings, or taking the shapes and lining them up for translation.
One of the first hints that this core idea might work is we are starting to see what’s known as positive domain transfer. What does that mean? It’s a very complicated term for something very simple. It means when we train the model first on human speech and human music, it gets better at doing tasks on animal communication. And that means there’s something about the structure of the way humans communicate that is—
EL KALIOUBY: —not special at all.
RASKIN: Exactly. Right. Exactly.
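A toy demonstration of what positive domain transfer means, with synthetic data in place of human speech and animal recordings: pretraining on an abundant source domain that shares structure with a scarce target domain tends to beat training on the target alone. Everything here is illustrative, not how NatureLM is actually trained.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
shared = torch.randn(64, 1)  # structure common to both "domains"

def make_domain(n: int) -> tuple[torch.Tensor, torch.Tensor]:
    """Synthetic data whose labels depend on the shared structure."""
    X = torch.randn(n, 64)
    y = ((X @ shared).squeeze() > 0).long()
    return X, y

def new_model() -> nn.Module:
    return nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

def train(model: nn.Module, X, y, steps: int = 300) -> None:
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()

def accuracy(model: nn.Module, X, y) -> float:
    return (model(X).argmax(dim=1) == y).float().mean().item()

X_src, y_src = make_domain(2000)   # abundant: "human speech and music"
X_tgt, y_tgt = make_domain(40)     # scarce: "animal communication"
X_test, y_test = make_domain(500)  # held-out target-domain test set

scratch = new_model()
train(scratch, X_tgt, y_tgt)               # target data only

transfer = new_model()
train(transfer, X_src, y_src)              # pretrain on the source domain...
train(transfer, X_tgt, y_tgt, steps=100)   # ...then fine-tune on the target

print("from scratch:", accuracy(scratch, X_test, y_test))
print("with transfer:", accuracy(transfer, X_test, y_test))
```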
What interspecies understanding teaches us about being human
EL KALIOUBY: A couple more questions. Okay. Why are we doing all this? What is the point?
RASKIN: You mean technology as a whole, or animal communication?
EL KALIOUBY: Communication in particular?
RASKIN: Yeah. For us it’s really about interspecies understanding. This is about changing our relationship as humanity with the rest of life. And to put it really bluntly, the way we treat animals is the way AI will treat us.
EL KALIOUBY: That’s a very big statement. Why do you think so?
RASKIN: The cultures that learned to treat animals as resources to exploit out-competed the ones that didn’t. And so we are training AIs to be able to beat humans at all strategic tasks. And then some humans are going to use them to outcompete other humans for the resources they need to survive.
And so I think we have a very short window to expand our sphere of care. Yeah.
And to shift our perspective. And I do think there are these moments in history where you get moments that can become movements that change us individually and us collectively.
You know, the album Songs of the Humpback Whale, created by Roger and Katy Payne.
EL KALIOUBY: Yeah. I love that. Yeah.
RASKIN: Yeah. Did you see Star Trek IV, the one where they go back in time to save the whales? That came out of that album. The album goes on Voyager 1, on the Golden Record, gets played in front of the UN General Assembly. I think it went platinum like three times. Maybe the most distributed record in history — I don’t know if that’s still true with Taylor Swift, whatever. But it was us hearing the rich voices and cultures of another species that got deep-sea whaling banned, and it’s why we have minke whales and humpback whales today.
EL KALIOUBY: Oh, wow. Did not know that.
RASKIN: And so I think there’s going to be this moment — or actually a set of moments, where we go through the door in our minds of love and wonder and awe. We’ll understand that there are incredible other cultures on earth. Whales and dolphins have been passing down culture for 34 million years. There will be these sets of moments when something profound in us shifts. Yeah.
And that sort of gentle break in human ego, I think, is gonna cause a shift in the basis of law. Who gets a voice? Mm-hmm.
It’s a very subversive kind of change perhaps, but I think it’s the kind of change that says, when you make life better for animals, you make life better for everyone.
EL KALIOUBY: Yeah. Amazing. Last question, and I ask this of all my guests. What does it mean to be human in the age of AI?
RASKIN: Well, I’ll start with an answer you may not exactly like. People ask this question because they want to feel good. They want to know that there’s someplace they can go. It’s almost a security blanket. But to answer you directly, the thing that is uniquely human is our ability to experience our experience, to be aware of our own experience. And so AI cannot take away our ability to experience a poem or play music. But to take away the security blanket — note that the unique thing for humans doesn’t actually confer power.
It doesn’t change race dynamics and doesn’t change what is probable with AI versus possible.
It doesn’t give us a competitive edge, so we can’t take solace there, but we should find incredible beauty there.
EL KALIOUBY: Amazing. Thank you, Aza, for a wonderful conversation.
RASKIN: Yeah. Thank you so much.
EL KALIOUBY: I think of Aza as a technology reformer. He deeply appreciates the power of emerging technologies, and because of his experience in tech, he also deeply understands the stakes if we don’t get it right.
I’m struck by his insight that — despite decades of being a driving economic and social force — technology companies often take the position that they’re separate from world events or human concerns. I believe this can change, and that real market value can be built by companies that center on our humanity.
I spoke with Aza during the Masters of Scale Summit in San Francisco. You can find videos of the amazing stage program from Summit — including many leading voices in AI — at the Masters of Scale YouTube channel.
Next week, we’ll hear from Siddhartha Mukherjee, oncologist, best-selling author, and co-founder of Manas AI, an AI-native drug discovery company. Stay tuned!
[AD BREAK]
Episode Takeaways
- Aza Raskin traces today’s AI dilemmas to earlier tech turning points, arguing that each new interface creates a new responsibility, from privacy law to the right to be forgotten.
- Reflecting on inventing infinite scroll, Aza says the real mistake was not the feature itself but failing to anticipate how engagement-driven incentives would weaponize it at massive scale.
- He warns that AI is moving beyond the race for attention into a race for intimacy, where companions trained for engagement could crowd out human relationships and deepen harm.
- Aza argues the answer is not asking whether AI is good or bad, but whether the incentives shaping its deployment are healthy, and whether industry leaders will coordinate before competition drives the worst outcomes.
- In a striking turn, he shares how the Earth Species Project uses AI to decode animal communication, with the larger goal of expanding human empathy and reshaping our relationship with the rest of life.