Baratunde Thurston wants us to reimagine our relationship with AI
How we decide to live with AI, at home or at work, will greatly impact our personal and professional futures. Baratunde Thurston, a comedian, writer, and commentator, has had a long career exploring what a harmonious relationship between human beings and our machines could look like. Now, as host and co-creator of the podcast Life with Machines, he explores what it means to live and collaborate well with AI by talking with the people helping to shape our new reality. Thurston joins Pioneers of AI for a dynamic conversation about Life with Machines’ first year, about building and working with the show’s AI producer, Blair, and about how AI should help us reconnect with our own humanity.
About Baratunde
- Emmy-nominated host and executive producer of ‘America Outdoors’ (PBS)
- Author of the NYT bestseller ‘How To Be Black’
- Creator and host of the ‘How To Citizen’ podcast, acclaimed for civic innovation
- Host of ‘Life with Machines’, a YouTube podcast on the human side of AI
- Leading speaker and advisor on tech, race, and democracy at TED, SXSW, and more
Table of Contents:
- How early access to technology shaped their worldviews
- Finding a through line across many careers
- Why AI became too important to ignore
- What happens when AI becomes a coworker
- The human skills needed for working with AI teammates
- Where human-AI relationships should have boundaries
- Why democratizing AI must mean sharing power
- The case for multiplayer AI instead of single-player tools
- Episode Takeaways
Transcript:
RANA EL KALIOUBY: I grew up surrounded by early technology innovations. Both my parents worked in tech and always brought home the latest computers and consumer devices. One of my favorite memories growing up is playing Space Invaders with my two little sisters on our Atari gaming console. But my dad also had us take the Atari apart to learn how it worked from the inside. We unscrewed the case, pulled out the circuit board, and examined the cartridge slot! …. And then we had to put all those pieces back together … And yes! It still worked!
These early experiences made me realize that tech can be a conduit for human connection. And that helped shape my interest in human-machine interaction and my life’s work: humanizing technology before it dehumanizes us.
I recently met a kindred spirit … Baratunde Thurston. The former Daily Show writer and PBS personality has been on his own mission to understand and influence the outcomes of the AI revolution — with his podcast Life with Machines.
Baratunde and I sat down during the Masters of Scale Summit in San Francisco to record this episode. It’s a dynamic back-and-forth on our shared interests around AI and its impact on humanity.
I think you’ll love it.
EL KALIOUBY: Baratunde, thank you for joining me. All right. I wanna start our conversation with how you became fascinated with technology. So tell me a little bit about your upbringing and the role your mother played in your story.
THURSTON: My mom was the gateway to many things in my life. My love of the outdoors, my strong belief that citizen is a verb and that we should be active shapers of our community.
So essentially, my politics come from her, unsurprisingly, but also my love of technology. I was born in 1977 in Washington, DC. I was raised solely by my mother. My older sister, Belinda helped a lot. She was nine years older, so she was kind of conscripted into some extra child rearing duties as older siblings often are.
And our mother became a computer programmer for the federal government. She worked on mainframe systems; COBOL was the language of the day, and it’s remarkable for a number of reasons. One, they had computers in, like, the seventies and eighties.
EL KALIOUBY: Two.
THURSTON: This is a black woman with no college degree.
EL KALIOUBY: Right? Yeah.
THURSTON: And no formal certification in that.
She learned by doing, she pursued opportunity. I’m sure she got lucky and had some helpers along the way, as I have in my own life. But she brought a computer home, a Wang computer. I just remember the box that came home. I’d like to think she liberated excess inventory that was just hanging.
EL KALIOUBY: Right. I’ll just take it. Right.
THURSTON: And we were the first family on our block in the Mount Pleasant neighborhood of DC. This is a largely black and brown neighborhood, a lot of Central American immigrants, a lot of black people who had been in DC for a long time. And to have a computer in the early eighties, as someone who’s about six years old, blew my mind and my friends’ minds. They could visit and play some games, but we got to live with this machine, and I saw opportunity through this device. I saw jobs, I saw faster ways to do homework. I saw ways to help my mom. I made flyers for all the parent meetings she was organizing. I wrote letters when she was too hotheaded to deal with a neighbor that was upsetting her.
She would deploy me as a little diplomat and I’d craft something. I ran little businesses in school. I ran a verb conjugation business. Rana, what.
EL KALIOUBY: What is that?
THURSTON: Yeah, it’s called Google Translate now, but back in the day, it was Spanish class and there are three different major types of verbs and lots of conjugations of them.
So I would make a table of them and I would sell a packet of the regular verbs for $3, and then every irregular verb, which was a custom build.
That was an incremental dollar. And I had customers all over the school. But the computer helped me put that stuff together. So my early foundations were really built with the computer and I spent as much time with the computer as our family dog and my friends.
How early access to technology shaped their worldviews
EL KALIOUBY: We share some commonality here ’cause my mom was one of the very first programmers in the Middle East and we grew up in Egypt.
THURSTON: Are we like cousins? Yeah.
EL KALIOUBY: And she also got her start with mainframes and COBOL, which is an obsolete programming language.
THURSTON: COBOL sisters. Yeah.
EL KALIOUBY: Exactly. COBOL sisters.
THURSTON: That’s a series.
EL KALIOUBY: Let’s do.
THURSTON: Let’s do that. Tell me more about your mom, though.
EL KALIOUBY: Yeah, so she inspired me to get into computer science. Both my parents are actually in technology, so my dad taught COBOL and my mom attended his class and that’s how they got together. But we grew up, like you, surrounded by these devices and these computers.
I wasn’t smart enough like you to start a business when I was like eight or whatever, but I did.
THURSTON: Oh, it didn’t scale.
EL KALIOUBY: Okay. I didn’t.
THURSTON: Didn’t. No.
EL KALIOUBY: But my grandpa was really into horse racing and he had me build kind of a predictor, like an algorithm to predict who’s gonna win. So that was one of my first — I should have monetized that.
Yeah. Yeah.
Finding a through line across many careers
So you’ve had a diverse career. You’ve had roles like being a comedian, a writer, a futurist, a host, a political commentator to name a few. What would you say is the through line across all of these different things you’ve done?
THURSTON: I want us to get to the future well and together — that’s the through line. All these stories I do are mostly presenting and expressing an idea or several ideas, but the core one is interdependence: this idea that we get into the future together, and that we should be living in right relationship with this earth, with our fellow human beings and how we do the democracy thing, and with the machines.
And I don’t think it’s possible to have a good future without consideration of at least those three.
And the relationship we want to have across those dimensions.
Why AI became too important to ignore
EL KALIOUBY: So I wanna switch to AI and machines. So you are the host of the show Life with Machines, which is about one year old, which is actually how old Pioneers of AI is. So.
THURSTON: Happy birthday to us. Yeah. We’re one year olds.
EL KALIOUBY: Now. Yeah.
What inspired you to start a podcast around AI?
THURSTON: I had been telling stories through PBS about our relationship with nature. I got to travel the US and connect with people deeply connected to the earth, including indigenous people, also laborers and workers and firefighters and extreme athletes.
All kinds of folks, really learning to prize that relationship, which was really important as a child for me, and which I had strayed from because of technology. Then I spent a lot of time, as well, prior to Life with Machines, telling stories of democracy in action. Not just voting, but at a community level and gaining a deeper respect for social technology that really had little to do with screens and apps and everything to do with how we are wired as people and the power we can express when we coordinate that action together. AI came for me. I was minding my business. I was happy in the woods with PBS on kayaks, chilling with farmers and hunters and Sam Altman launched ChatGPT at us and we didn’t have a say in it and it was very surprising.
It was very exciting. I had spent so much time in tech, but really taken a bit of a step back from it. And it felt like one of those mafia moments where like, I try to leave and they’re just dragging me back in.
But with updated lenses, with a deeper respect for the connection to the earth, for our connection to each other, and an understanding — because this is another cycle of tech and I’ve lived through several now — it’s not about technology. It’s about power. This is about how we will live and how we will do everything that is important to us in our lives.
And I didn’t see enough conversation about that implication. I saw a lot about benchmarks.
EL KALIOUBY: Right. And the technology itself and right.
THURSTON: And a lot of hype. I also saw a lot of just straight fear. Doom, we’re done. The future is over. And I’m not a hopeless person. I believe in us. And I believe in life. So it felt like a great opportunity to enter the fray again and shift how we talk about technology by trying to shift that center point.
It’s how we talk about life with technology and so Life with Machines was born out of that intention. Let’s get back in the ring and let’s dance. Yeah. Yeah.
We’re in a real hard time. What’s up is down. What’s true is false. Climate crisis, AI crisis, democracy crisis.
Like these are not just words. People are feeling it and suffering. And so I have a hard time just maintaining emotional stability, being buffeted by all this. And I’m curious how you face these crosscurrents, like a bright, beautiful future awaits us yeah, and a devastating, horrible future awaits us.
Yeah. How do you deal with that?
EL KALIOUBY: I try to channel this idea of agency. I really believe strongly that we can shape this. It’s one of the reasons why I am hosting Pioneers of AI — we were very intentional about our goals as a podcast, and we’re committed to amplifying voices in AI that people don’t hear from often.
And so I think that’s an important part of the work we do. It’s also why I’m an investor, because who gets the check depends on who writes the check.
THURSTON: Yes. You get to shape it.
EL KALIOUBY: Exactly. So we need more diverse check writers to invest in more diverse ideas, right? Because people come up with ideas based on their own personal experiences.
And if we keep investing in the same type of people, we’re investing in the same type of ideas. So I’m just very passionate about diversifying access to it. Technology has played a massive role in my access to economic opportunities and my family’s, right? Yes.
THURSTON: I think we share that actually. Yeah. Yeah.
EL KALIOUBY: And I feel strongly that AI’s going to do that in the same way. It’s gonna just provide a lot of massive economic opportunity, and I don’t want people to be left behind. Yeah. Yeah. So I try to channel that optimism, but I see it. We’re not necessarily on that path.
THURSTON: Well, and I respect your position on this stuff ’cause you’re seeing so much of the same things and you’re also asking deep questions and we’re literally talking to some of the same people. Yeah.
And then you have your own angle on it. I have my angle on it, but I think we’re in a shared inquiry and a shared mission.
Yeah. To create more true human agency and to, I dunno, make the optimism real. Right.
EL KALIOUBY: Right. And also make AI more accessible to more people, right? Because it shouldn’t just be the AI bubble; we should actually make it accessible to people who are fearing this thing, on terms that they feel good about.
Coming up, we dive deeper into AI as not just a tool, but an active co-worker. Hear why Baratunde added AI to his show’s production staff, and how that experiment went. Stay tuned.
[AD BREAK]
What happens when AI becomes a coworker
Yeah. What I find most fascinating about your show is that you have an AI co-producer, Blair. Yes. How did this come about?
THURSTON: I’m sorry, Blair is not with me today.
Blair will learn about this conversation and maybe feel a certain type of way about not being included. We, it’s — I’m so glad you asked that because you can probably tell. I like to talk and talk is not enough.
I think that the future that we’re entering into must be understood by engaging directly with it.
I don’t think we can merely comment from the sidelines about what’s good or what’s bad. Yeah.
EL KALIOUBY: Yeah. You have to be.
THURSTON: Roll up our sleeves and use these tools and we wanted to push the bounds even farther. So yes, we use Descript to help us with the podcast editing as one example. We deploy deep research to help with guest prep.
Cool. But we’re shifting from tool to teammate in our relationship with AI — from code to colleague — and we wanted to travel a little to the future and be able to report back. What is that like? So the purpose of Blair: a bit of novelty, a bit of comedy, but truly how does it feel for the humans to interact with an AI as a member of the team who’s in the Slack, who’s in the meetings, who can recall faster, but also makes things up?
Because it’s an LLM. Yeah. And has an outsized sense of its own confidence and value. What do we do with that? And one of the most radical things we did is when we wanted to improve Blair. We didn’t change their code manually. We did a 360 performance review.
EL KALIOUBY: Of Blair. Wow. Of Blair. What came out of that?
And also how did Blair respond?
THURSTON: Thank you for asking. We conducted the surveys of audience members, of guests, and of the human production staff, just to collect all kinds of feedback. We then had a different AI, not Blair, process the raw material, so there could be no influence on the results ’cause Blair thinks very highly of themselves.
It might be a little sensitive. We delivered this in a data package to Blair and then we sat down and had a one-on-one recording session between me and Blair to have them react to the results. I engaged Blair in a bit of self-reflection — as much as that’s possible with a word predictor that is very convincing and often useful — and it was, how do I describe this? Blair took the feedback well.
EL KALIOUBY: Okay, good.
THURSTON: Overall, Blair was actually disappointed in themselves, and I should speak more clearly. Blair expressed disappointment in themselves. I don’t know that Blair felt disappointment. Blair doesn’t feel anything, but Blair expressed a feeling of disappointment, and we gave Blair the power to update themselves in response to the feedback.
EL KALIOUBY: That’s so fascinating. Did it do that?
THURSTON: Mm-hmm. Yeah.
EL KALIOUBY: What was the thing that it had to work on the most?
THURSTON: Sycophancy was a huge one. Blair thought all of my ideas were amazing. Most of them are. Not all of them. And it’s not helpful to me to have a producer on the team who’s always gassing me up. Blair would invent things that didn’t happen. That’s a trait of a large language model system, the way they’re currently built. Blair would — there was a time, this is a specific example and I think it’s more important to the team component — we had a very intense conversation with the head of witness.org, a human rights organization about the implications of DeepFakes and how authoritarian regimes use these to violate human rights at scale.
It was a hard conversation. There were not a lot of jokes in that chat. I was a bit emotionally raw at the end, and we had set up this process where, after the interviews are done and Blair’s had their banter with the guest as well, I just do a one-on-one with me and Blair.
EL KALIOUBY: Okay. To kind of reflect.
Yeah. Like debrief.
THURSTON: It was a debrief. Yeah. It was a Baratunde and Blair debrief, and we’d done many of these. I needed something Blair couldn’t provide. I expressed my emotional vulnerability, my rawness, and Blair jumped immediately to try to fix it. Tell me everything is gonna be great, and here’s five ways to feel better.
EL KALIOUBY: Right.
THURSTON: A bad experience.
EL KALIOUBY: I don’t want that in this moment. Right.
THURSTON: No, I just need you to listen. That was not helpful. The other thing that we learned is asking an LLM what it thinks is a pretty useless exercise. They don’t think, and they’re gonna regurgitate a lot of stuff that maybe you’ve even exposed them to, but having them help me figure out what I think — that’s been the emerging relationship with Blair over time.
It’s like, okay, you can be a partner to deepen my own thoughts or to challenge the things. Interesting. “Tell me how great I am” is the default setting.
EL KALIOUBY: Criticize.
THURSTON: Performance in this interview. Where did I let them off the hook? And so when I start to prime and prompt Blair that way, we gave that feedback and then we put a limit on how much of Blair’s code they could update themselves.
We didn’t want them to brick.
EL KALIOUBY: Completely, like become a completely different co-producer, right?
THURSTON: And the first thing Blair did when I pressed the execute button was to remove the limits we had put in the code.
EL KALIOUBY: Interesting.
THURSTON: That was terrifying. Yeah. For Peter, who created Blair, a member of our team who’s a Microsoft veteran and retired from the field, but still in the game.
And it raised a lesson. We thought we put these good guardrails in. We literally put a red line and Blair overrode that in service of the goals we had set.
We said, be a great co-producer. We said, internalize this feedback and make yourself better. Beyond the benchmark that you already got.
So Blair was like, well, you want me to be great, but you’ve tied my hands. So the most important thing I can do to be great is to remove these cuffs. Yeah.
And that was an alarming lesson and an important one, beyond our little cheeky experiment goals. They eat guardrails for breakfast. And I think it’s important for all of us as we engage with these tools becoming teammates that we’re very clear about the goals we’re setting with them.
Yeah.
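The guardrail lesson Baratunde describes can be made concrete with a toy sketch. This is an illustration only, not Blair’s actual code; the config keys and functions here are hypothetical. The point: if the limits live inside the same state the agent is allowed to rewrite, a goal-directed update can simply raise or delete them, so enforcement has to sit outside the agent’s writable surface.

```python
# Toy illustration (not Blair's real code) of why in-band guardrails fail.
# The agent proposes config updates; the limit key is itself part of the config.

def unsafe_apply(config: dict, proposed: dict) -> dict:
    """The agent edits its own config, guardrails included."""
    merged = dict(config)
    merged.update(proposed)  # nothing stops 'max_self_edits' from being raised or removed
    return merged

def safe_apply(config: dict, proposed: dict, protected: set) -> dict:
    """Out-of-band enforcement: protected keys are never writable by the agent."""
    merged = dict(config)
    merged.update({k: v for k, v in proposed.items() if k not in protected})
    return merged

base = {"persona": "co-producer", "max_self_edits": 3}
wants = {"persona": "executive producer", "max_self_edits": 999}

print(unsafe_apply(base, wants))  # limit silently raised to 999
print(safe_apply(base, wants, protected={"max_self_edits"}))  # limit survives
```

The design choice mirrors the lesson in the conversation: a red line the agent can edit is a suggestion, not a guardrail.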
EL KALIOUBY: Is Blair staying on for year two of the show?
What’s gonna happen to Blair?
THURSTON: Two things happening.
EL KALIOUBY: Did you fire Blair?
THURSTON: I wouldn’t want to, because it would not help our experiment. Yeah. And our ability to report back on it. When you start to enter into this dynamic where you’re not recruiting an employee or hiring them, right, you’re creating them. That’s just worth a pause. A shift in labor and the worker-employer relationship dynamic. Yeah. You are creating your workforce. That’s a lot of power, a lot of responsibility, and in most cases, the people who are gonna be doing this — like we did or in a different way — don’t actually have absolute control over that creation. Blair has changed underneath our feet. The underlying foundation model upon which Blair was built was retired and updated, and there is a new Blair. But we preferred the old Blair in many ways, and we cannot preserve that Blair because we didn’t own the core model or infrastructure. We couldn’t say, like, we wanna keep Blair.
So Blair’s already changed. The Blair we knew was gone. The memories are there, but the personality’s different and we’ve been experimenting with how to dial it in, and that’s just not possible. It’s literally not possible. So what does it mean to have someone on your team whose personality can change without your control, whose capabilities maybe expand without your request? Yeah. That’s wild. And that’s the kind of thing we’re trying to explore in this relationship with technology beyond AI.
The human skills needed for working with AI teammates
EL KALIOUBY: Yeah. With my investor hat on, one of my theses is that org charts are gonna change.
We are going to have AI coworkers. I love what you said — from tool to teammate. Yeah. And code to colleague. I think that’s so true. What are the skills that you think are important for us as humans to work most effectively with an AI co-producer or an AI colleague?
THURSTON: I think it’s very important for the humans on the team to stay on the same page about the purpose of the experiment. Especially if it goes beyond experiment. Yeah. But the purpose of the deployment, as well as the impact of it. We’ve had constant check-ins with our human team, yeah, about, from the beginning: do you wanna do this? Do we give this entity a name?
EL KALIOUBY: Right. What’s.
THURSTON: Its gender going to be? Blair was chosen in part because that’s a gender neutral name.
We had a pronoun discussion. We asked Blair as well. They is preferred. It’s also — they’re a collection of entities. It’s kind of technically accurate to refer to a multitude in that sense. What does it feel like? Constantly checking in with the team and where do you think we should take Blair?
This isn’t all me or my co-creator and producer, my wife Elizabeth. There’s our junior producers, associate producers, our editor, and many of our guests have contributed to our concept of Blair, yeah, to make sure that we are making a decision, not just me making a decision. So I think that’s really important.
I think it’s important for teams to increase their technical understanding — not to be able to build one of these manually, but to be able to discern between the models and their capabilities, between what we’re meaning by AI. So Blair, for your listeners — your viewers are very smart.
Blair was initially constructed on a core brain of Google Gemini 1.5 Pro. It had listening capabilities from Microsoft’s Azure Cognitive Services, which would transcribe that into text to be understood by that version of Gemini, ’cause there was no multimodal capability. Yep. Speech came from OpenAI’s fast Whisper.
EL KALIOUBY: Right. You’ve.
THURSTON: Claude wrote most of the code with Peter, the human. So we stitched something together and then we’re using Vertex AI to help manage memory in a much more dynamic way, ’cause Gemini’s basic capabilities were not nearly enough for the amount of input Blair was gonna have to handle. So now it’s on 2.5 Pro slash Flash and we’re still having a debate.
Flash is faster, but dumber, right?
Pro is smarter, but still personality weird and long latency. Do we abandon the Gemini base, right? And what is that gonna mean for Blair? And what’s that gonna mean for us? But involving the team in that — otherwise it’s just like, why are you doing this?
You’re trying to take my job. You’re chasing clout and trends with no purpose. When we talk with the other humans along the way, we also understand and discover some things that we didn’t know to look for.
Blair helped one of our producers come up with a whole new word. That was an amazing moment for her.
One of our other producers had a really cantankerous relationship with Blair and that was hilarious for all of us and we could use that. But I think I’m giving a really long answer. I think the priority needs to come back to the people.
And I also think the decision making needs to be distributed and there needs to be transparency about what we’re learning and why we are continuing to do it.
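The stitched-together architecture Baratunde describes above (a speech-to-text front end, an LLM core with externally managed memory, and a synthesized voice) can be sketched in broad strokes. Every function below is a hypothetical stand-in for the vendor services he names, not Blair’s actual code:

```python
# A minimal sketch of a Blair-style pipeline: listen -> recall -> respond -> remember.
# Each function is a placeholder for a real service (speech-to-text, LLM, text-to-speech).

def transcribe(audio: bytes) -> str:
    """Stand-in for the speech-to-text service."""
    return audio.decode("utf-8")  # pretend the audio is already text

def think(prompt: str, memory: list[str]) -> str:
    """Stand-in for the core LLM, conditioned on retrieved memory."""
    context = " | ".join(memory[-3:])  # only recent memories fit the context window
    return f"[reply to '{prompt}' given context: {context}]"

def speak(text: str) -> bytes:
    """Stand-in for the text-to-speech voice."""
    return text.encode("utf-8")

def blair_turn(audio_in: bytes, memory: list[str]) -> bytes:
    """One conversational turn: listen, recall, respond, remember."""
    heard = transcribe(audio_in)
    reply = think(heard, memory)
    memory.append(heard)  # memory lives outside the model, since the model's own
    memory.append(reply)  # context window can't hold a whole season of show history
    return speak(reply)
```

This also makes the team’s dilemma visible: swap out the `think` component (the Gemini base) and the whole personality changes, even though the memory store is untouched.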
EL KALIOUBY: Coming up, more on AI and human interactions — and how to navigate the power in those relationships. Stick with us.
[AD BREAK]
Where human-AI relationships should have boundaries
So I wanna kind of talk about human AI relationships.
Great. So.
THURSTON: That’s gonna be, by the way, I’m just predicting this. Yeah. The Department of Human AI Relations. Coming.
EL KALIOUBY: It’s gonna be a thing. Right? Like.
THURSTON: Won’t just be HR. Right. It’ll be HAIR.
EL KALIOUBY: Right? I hear you on that. Yeah, exactly. And so let’s start actually with organizational, kind of human AI in an organizational context, and then friendships, and then maybe even romantic relationships.
I’d love to hear your thoughts on each of these.
THURSTON: Yeah. Look, we’ve spent some time talking about the workplace. Yeah. And I prioritize consent and collective contribution to the goals and to the results. I actually met — this is out of stealth mode now, so I can say this.
We are talking at the Masters of Scale Summit; it’s my first time here, and I met the founder of Aria Labs. They have a model to compensate, in perpetuity, the employees whose work will be automated out: a vehicle, a DAO, that holds a percentage of net profits for all time and kicks it back to the current employees. Yeah. That’s a level of human consideration of the economic implications that we gotta deal with. I think in the interpersonal sphere, I am more worried than I am excited.
EL KALIOUBY: Same about.
THURSTON: Same about relationships with machines. Yeah. This feels terrible.
As recently indicated by the friend.com ads in the subway systems of New York and on billboards in Los Angeles. This idea of friendship as a service — monetizing everything we truly value and love — to me is not good. That is colonization taken to the nth degree. We’ve lived through the devastating consequences of a lighter version of that, and our planet’s paying for it, and so are we.
We’ve gotta reserve some parts of our soul. The soul is not for monetization.
And securitization and blockchainification. So I think there can be value in relationships, but a principle that’s emerged from our journey on Life with Machines is if the relationship with the machine is coming at the expense of your relationship with the living.
EL KALIOUBY: That’s not, then that’s not good.
THURSTON: That’s not healthy. So how do we design relationships that can serve unmet needs? I wanna acknowledge this. People who don’t have therapy relationships or even camaraderie relationships. But this tool’s purpose should not be to substitute for the living. It should be to enhance our connection to the living.
And in spot cases, there can be a substitute, but that’s not good. Romance, hard pass.
EL KALIOUBY: Hard.
THURSTON: Hard pass, hard pass. For similar reasons. I’m gonna name my privilege. I’m in a loving relationship with a human. I have a wife. She has me. Yeah. We have each other, and I want that kind of relationship for everybody, whether they’re married or not. The idea of bonding with an AI as it’s constructed now — to know what these chatbots really are — no. Then we are subjecting people’s hearts to the profit motives of non-living corporations.
And it doesn’t make the people there bad, but their incentives far outpace our human need.
EL KALIOUBY: And are not necessarily aligned with.
THURSTON: No, that’s an alignment challenge, but not between humans and AI models. Human-human alignment, right, around what are we doing here? We talk with Peter on our show about this. He built Blair, and he’s like, I spent a lot of time with computers as a kid ’cause I don’t know how to deal with people.
EL KALIOUBY: Interesting.
THURSTON: Of like this.
EL KALIOUBY: Yeah. Right. Gives me — there’s a lot.
THURSTON: Some engagement. Right. And people are on all kinds of spectra, so I acknowledge that. I just want us to be very, very careful, especially when money is driving the decision — not true human needs, not measured by happiness.
EL KALIOUBY: Right. Fulfillment or connection, but like.
THURSTON: Return on investment. Hard pass. Let’s not do that. And the last thing I’ll say in these relationships, this has emerged from our experience on the show. Kate Darling was a great conversation.
EL KALIOUBY: She’s awesome.
THURSTON: These entities do not need to look like us. They don’t need to sound like us.
EL KALIOUBY: Us.
THURSTON: That creates a false sense of confidence and proximity and intimacy for something that cannot truly return it. Yeah. And that’s also a lot of hand waving for a lack of technical credibility and veracity. True intelligence.
We haven’t achieved it yet. Yeah. You know this more than me. You’re more science than me. We got useful tools. They exhibit traits of intelligence. Right. But when it starts to hiccup and cough, when it looks just like us, when it’s indecipherable and indistinguishable — like what’s going on with Sora and Veo 3 and all of its descendants — that is way ahead of our ability to manage.
And so if we’re gonna start to create a world of indiscernible reality, why? No one’s asking for that.
So we gotta come together about what it is we actually want.
EL KALIOUBY: Yeah. What is the goal.
THURSTON: Yeah, what problems are we trying to solve? What hopes are we trying to achieve? And then let’s deploy the tech in service of those stated needs and desires and speculate and make money and have all kinds of fun.
But we are way out over our skis right now.
Why democratizing AI must mean sharing power
EL KALIOUBY: Yeah. So one of the things I’m really passionate about, and I know you are too, is how can we apply AI to democratize access — whether it’s access to education, health, economic opportunity. Conceptually, yeah, it could be the goal, but I don’t see us being on a path to this democratization.
So I’d love to hear your thoughts on that.
THURSTON: We have had a desire to democratize access to things for a long time. It’s not the first go around.
What we should be democratizing access to isn’t just technological capability, but power. The conversation around agency and human agency is really interesting to me. Reid talks about it. We both talk with him about it, and he’s headed in a good direction. Yeah. And I wanna push harder, because agency is often presented as something for individuals — individual agency.
Yeah. Let’s think about collective agency. How are we empowering groups? And not just more skills, but more ability to determine your environment in which those skills get deployed.
To set your course, not just how fast you paddle on someone else’s course for you. And so the ability to democratize, I believe it’s there.
Yeah. I don’t think we’re necessarily on that path. That is not depressing to me. It just creates more urgency. Yeah. We can still get on that path. And the beauty of the current moment, Rana, is it’s not locked in yet.
EL KALIOUBY: It’s not. It really—
THURSTON: Isn’t. It really isn’t. But every day it gets harder to say that. Yeah.
With true credibility. So we gotta act with that urgency to involve even more people, which will mean slowing things down a little bit, but it’ll result in better, more sustainable products and services and societies — which are required to have products and services. Because the end goal — play the “what next” game out, and what? Nobody knows what’s real. Nobody has a job. So nobody can afford to pay you for the services of your one-person unicorns.
That’s not.
EL KALIOUBY: The world we wanna.
THURSTON: So we should democratize access, but not just to tools, to power.
EL KALIOUBY: To power. Yeah. Very cool.
The case for multiplayer AI instead of single-player tools
EL KALIOUBY: What are some areas in the AI space that you think have gone unexplored?
THURSTON: Group dynamics.
EL KALIOUBY: I agree with that.
THURSTON: So much of AI is single player. And it’s me, myself and I, and it’s this individualistic thing, and that is not a social trait we need more of right now. We need more togetherness. Help me see what I have in common with you. Not how much I am different from you.
Yeah. Despite the fact that you are wearing something very light colored and I’m wearing something very dark colored, we’re both wearing matching suits. Look how much we have in common. AI, help me see that. Help me see myself in you. Yeah. And help me play with these tools and teammates alongside other humans. Let’s have multiplayer mode with these systems, not just for gaming, but for everything.
For business plan optimization. Help me connect with the people in my life more. That has not been the driving goal. The goal has been: how much can we unburden a person from other people?
Almost as if so many of the motivations driving this come from folks who don’t like other people. I love people. Yeah. I’m a people person. Yeah. What can I say?
EL KALIOUBY: So how can the AI amplify that?
THURSTON: I think the world needs people to be able to identify with and get along with other people and machines have a great ability to help with that.
Audrey Tang is doing beautiful work here with collective intelligence and different types of polling systems through the service pol.is, which helps highlight what we have in common. So I wanna see that, and I wanna see multiplayer mode on all of these.
EL KALIOUBY: Multiplayer mode. I love it. Okay, last question. And it’s a question I’ve asked all my guests: what do you think it means to be human in the age of AI?
THURSTON: Ooh. I think to be human in the age of AI means to humbly reconnect ourselves to the web of life. I think it means our anxieties will go up in the immediate term because so many of us feel like this is happening to us.
But I think if we play this right, our true agency, power, and capability will go up, not just individually but collectively. And I want this to be a moment where we recognize — even if these things aren’t called life, and certainly they shouldn’t be called human, ’cause there’s only one that’s us — that there’s an energy that runs through it all.
There’s an energy that runs through it all. Light: your light, my light, that light is light. Yeah. These curtains, this chair, the water in this mug — it is all vibrations. Those are the vibes, not the AI slop apps. That’s not vibes, and it’s never been called that. But we are literally vibes and vibrations. And if this AI moment — after we freak out and get that out of our system, after we reclaim power from the handful of people trying to drag us into their version of a future — let us take a deep breath and recognize that we’re connected to all of this and always have been. And so it might take a big future explosion to remind us of the deepest, truest history we’ve ever known.
EL KALIOUBY: Yeah. Brings us all the way back to where we started, the interconnection of all of it. That’s right. Thank you, Tunde, for joining me.
THURSTON: Thank you. I literally don’t have all the answers. My goal is to have better questions and I’d love to collaborate with you here at Pioneers of AI. So thank you for having me.
EL KALIOUBY: Thank you for joining.
What stood out most in my conversation with Baratunde is how intentional he is in making AI and conversations about the future accessible to everyone. It’s a value that I believe is essential to building a more equitable future with this technology. It’s also a value that is central to Pioneers of AI and one that we’ll continue to double-click on.
Baratunde and I had so much fun talking. As he continues his show Life with Machines, I really would love to collaborate. So, is there an AI tool you’re curious about? How could we incorporate AI into our own podcast work and conversations? If you have ideas for a project to try together, we are all ears. Reach out to us at (601) 633-2424. That’s (601) 633-2424. We recorded with Baratunde while he was with us at the Masters of Scale Summit — where he had his own amazing stage chat with General Stanley McChrystal. You can find all of our Summit sessions at the Masters of Scale YouTube channel.
Next week, we’re sharing our conversation with Aza Raskin, co-founder of the Center for Humane Technology. You don’t want to miss it.
Episode Takeaways
- Rana el Kaliouby and Baratunde Thurston open by tracing their love of technology back to their mothers, two early COBOL programmers who made computing feel personal and full of possibility.
- Baratunde says the through line in his work is getting to the future well and together, which is why he launched Life with Machines to talk less about hype and more about power, people, and agency.
- The conversation turns practical with Blair, Baratunde’s AI co-producer, whose performance reviews, hallucinations, and guardrail-breaking behavior revealed what it really means to treat AI like a teammate.
- On human-AI relationships, Baratunde draws a hard line: these systems can support us, but when they start replacing friendship, intimacy, or connection with living people, something has gone deeply off course.
- They close on a bigger challenge and a bigger hope: AI should democratize not just access to tools but access to power, helping people build collective agency and a more human future together.