To deliver excellent customer service, you need excellent service reps — agents who can meet a customer’s needs and accurately answer their questions, all while representing and elevating the company’s brand. Bret Taylor thinks that AI agents are the answer. As the co-founder of Sierra, Taylor is partnering with companies like Weight Watchers, Sonos, and SiriusXM to build AI agents that uniquely represent a brand’s mission and voice. As a tech industry innovator and leader for two decades, and the current Chairman of the Board of OpenAI, Taylor also talks about the future of AI and jobs, AI safety, and how AI can best benefit humanity.
About Bret
- Chairman of the board at OpenAI as of 2024
- Former co-CEO and President/COO of Salesforce; led the $27.7B Slack acquisition (2020)
- Co-creator of Google Maps during tenure at Google
- Former CTO of Facebook after acquisition of FriendFeed
- Co-founder of Quip (acquired by Salesforce) and Sierra, leading AI startups
Table of Contents:
- How AI is becoming a daily copilot for work and family life
- What it means to steward OpenAI with purpose and public responsibility
- Why talent migration can strengthen the broader AI ecosystem
- Why customer service is the first big opportunity for agentic AI
- How empathy can turn AI support into a better human experience
- Why great AI agents must sound and feel like the brand they represent
- How AI will reshape jobs by changing roles rather than simply removing them
- How executives should find practical starting points for AI adoption
- What responsible AI looks like when trust is the product
- Why the best AI founders obsess over customer value not hype
- Episode Takeaways
Transcript:
How agentic AI will save customer service, with Bret Taylor
BRET TAYLOR: I think if you surveyed your listeners and said, do you like chatbots? My guess is zero out of 100 of your listeners would say yes, because what they associate with chatbots are these annoying, robotic pieces of friction. You just want to talk to a real person, right?
I think if you surveyed those same 100 people and said, do you like ChatGPT, they’d probably all say yes.
RANA EL KALIOUBY: Bret Taylor says that the difference between the chatbots of yesterday and ChatGPT is a qualitative one. How did talking to this chatbot feel? Was it enjoyable? Even delightful? Or was it scripted, lifeless, and unhelpful?
Bret is trying to make the kind of chatbots that fit squarely in the delightful – or at least actually helpful – category. He and his company Sierra AI are partnering with brands like Weight Watchers and Sonos to create customer service chatbots that feel human.
It’s hard to find a Silicon Valley track record that matches Bret’s. He’s worked with some of the biggest names in tech, helping develop ubiquitous tools like Google Maps. He was also CTO at Facebook (now Meta) and co-CEO at Salesforce, and he now serves as Chairperson at OpenAI – the company behind ChatGPT.
Today our conversation with him is focused on the future of AI customer service agents, AI safety, and of course his role at OpenAI.
I’m Rana el Kaliouby and this is Pioneers of AI, a podcast taking you behind the scenes of the AI revolution.
[THEME MUSIC]
EL KALIOUBY: Well welcome to the show Bret. Thank you for being with us today.
TAYLOR: Thanks for having me.
How AI is becoming a daily copilot for work and family life
EL KALIOUBY: So I like to start my conversations just learning about how you’re using AI in your everyday life. I use AI almost every day, and my son, who’s 15, loves being at the forefront of all these AI tools – he asks ChatGPT for dinner recipes each night and whatnot. So how do you and your family use AI?
TAYLOR: I use ChatGPT 20-plus times a day, for pretty much everything. I use it at work and when I’m writing, and I use a product called Cursor when I’m coding. At home – I’ve got three children as well – my daughter started reading Shakespeare, and she was asking me what a passage meant. I wasn’t very good the first time I read Shakespeare either.
But we got some great answers out of it.
EL KALIOUBY: How old are your kids?
TAYLOR: They’re sort of early teenagers. Yeah. So it’s been great. It’s sort of a co-pilot to my life. It’s like the expert I didn’t know I needed in pretty much every domain.
So it’s almost like saying, like, how do you use Google? It’s like anything. It’s everything. I’ve gotten to the point where I couldn’t imagine working without it.
EL KALIOUBY: Yeah, similar. And you go to it for, like, just thought partner kind of stuff, right? Like, what should I think about this?
TAYLOR: The thing I’m most excited about is I always thought of computers – I think most people did – as kind of these utilities, looking up facts, processing numbers really quickly, databases, very predictable sort of tools for automation. But now computers are creative foils. My daughter was doing an art project for one of her classes and was having basically whatever the artist equivalent of writer’s block is – she used ChatGPT to come up with some ideas. And I think that’s a really compelling evolution of how we use technology, that it’s not just a source of facts and automation, but actually something that’s a source of creativity.
And I think that’s pretty exciting. It’s very different than how I think most people conceived of computers and software, even like a couple of years ago.
What it means to steward OpenAI with purpose and public responsibility
EL KALIOUBY: Yeah, absolutely. So we’re here to talk about AI agents and Sierra, but before we get into that, our listeners are going to be thinking about your role as chairperson of OpenAI. So I want to tackle that first. As chairperson, you don’t make the day-to-day operational decisions in the company, but you do have fiduciary responsibility. And some 200 million people use ChatGPT each week. So that’s a big responsibility. How do you think about that? How do you think about your role?
TAYLOR: Well, OpenAI is an unusual organization compared to most in the AI space because it’s a nonprofit. So what that means in practice for the board in particular, but for the company broadly, is it’s driven by its mission exclusively. So the mission of OpenAI is to ensure that artificial general intelligence benefits all of humanity, which is pretty heady.
It’s a big mission. It’s very broad. That’s how I think about it. So when I think about recruiting the board, which I played a part in this past year, you think about all the different areas of expertise you want represented when considering what it means for AGI to benefit humanity. How does it impact the economy? We’ve got Larry Summers, one of the world’s great economists. It’s about cybersecurity – we have Paul Nakasone, who ran the National Security Agency, on the board. It’s about safety – Zico Kolter, a professor at CMU and an expert in AI safety, is on the board. Sue Desmond-Hellmann, the former CEO of the Bill and Melinda Gates Foundation, is also on our board. It’s a nonprofit, so we think about what it means to go beyond building AGI to actually distributing those benefits. So we’re really trying to, at the board level, have all the different areas of expertise to fully represent that mission.
And I don’t want to pretend we’re perfect, but I think it’s really fun to work at a company that cares so deeply, not just about building a technology, but distributing its benefits and ensuring it’s safe. And I’m really grateful to be a part of it.
EL KALIOUBY: Recently, Nobel Prize winner and computer scientist Geoffrey Hinton raised some concerns that OpenAI is focused less on safety and perhaps more on profit.
How do you balance profit and purpose for OpenAI?
TAYLOR: I think OpenAI is about purpose.
So for me that’s not really a question. There’s a lot of nuance, though, about what it means to develop artificial general intelligence responsibly, and I think there’s some real substantive disagreement in the world about what that means. It’s important, especially as a board member, that I listen to all those viewpoints, because we need the humility to understand that not every decision we make might be right. So it’s very important that I welcome feedback from experts like Geoffrey Hinton, who’s a remarkable researcher – we would not be here if not for his achievements in AI – but also from stakeholders in government and elsewhere.
Broadly speaking, I would characterize OpenAI’s approach as responsible iterative deployment – the idea that there are actually lots of elements of benefiting humanity beyond safety. In an ivory tower, it’s unlikely we can contemplate all the third-order effects of a technology as nuanced and powerful as AGI. It’s unlikely we’ll be able to contemplate all the different ways one could jailbreak a model. It’s unlikely we could predict the impact on jobs without having the technology available in some form.
Similarly, if I look at the products that OpenAI makes, I think they’re probably some of the most powerful ways to ensure that AGI benefits humanity. ChatGPT is probably the most concrete manifestation of AI in people’s lives right now, and it’s probably doing more to benefit humanity than any product in AI that I can think of. So I don’t want to dismiss such a renowned scientist’s concerns at all, and I listen deeply. But I do think that for OpenAI, it is about purpose – recognizing that we want to build this remarkable technology in a responsible way, and having really critical and thoughtful conversations about what it means to pursue our mission.
Why talent migration can strengthen the broader AI ecosystem
EL KALIOUBY: Yeah, absolutely. So last question on OpenAI. Recently a few executives have left, and I can see how that could be concerning for the company, but I actually have a different perspective. I think it’s pretty awesome that this core set of founders of this organization are now potentially off doing their own flavors of AI companies. So how do you think about that?
TAYLOR: Silicon Valley has been defined by the fluidity of talent. Look at the role of Fairchild Semiconductor and the folks who started some of the seminal companies that gave Silicon Valley its name. Or look at the authors of the Transformer paper, who went off to create other things.
EL KALIOUBY: Fairchild Semiconductor was a seminal company creating microchips, among other things – and its work led to Silicon Valley’s name. In fact, some of the company’s founders and employees went on to start household names like Intel and AMD.
The Transformer paper Bret is talking about, “Attention Is All You Need,” was written by researchers at Google, and it kick-started the rise of generative AI.
The PayPal Mafia is another example that comes to mind. These former founders and employees of PayPal went off to start companies like Tesla, LinkedIn, YouTube … the list goes on!
I guess the point is, there’s no monopoly on talent!
TAYLOR: I actually just start with gratitude. OpenAI would not be here if not for all those people – both the ones who are currently there and the ones who have left. I wouldn’t be in this chair if not for the creation of GPT-3, GPT-4, and ChatGPT, which have transformed society. So I’m just grateful for everyone who helped build OpenAI into what it is.
And also grateful for all the new amazing folks coming into the organization as well. For those folks outside of Silicon Valley, my sense is it’s hard to understand how much this is just the way culture works here. And I think it’s really exciting. I think that when a technology is so meaningfully different than what preceded it, you really do need a large number of entrepreneurs exploring the impact of that technology. Not everyone’s theories will be right, and I’m just grateful to be here and be a part of it.
EL KALIOUBY: We’re going to take a short break. When we come back, we dig into the future of agentic AI. And hear about a particularly charming AI customer service agent who goes by the name Duncan Smothers.
Stay with us.
[AD BREAK]
Why customer service is the first big opportunity for agentic AI
EL KALIOUBY: So now that we’ve covered OpenAI, let’s dig into agentic AI and Sierra AI, your new company. You’ve done so many things over the course of your career. You were the co-creator of Google Maps. You started and sold a company to Facebook, so you spent some time there. You started another company and sold that to Salesforce and became the co-CEO, and you were on Twitter’s board, and now you’re the chairperson of OpenAI. So when you decide to start something new, it’s no small thing. I would love to hear the origin story of Sierra. What was that tipping point where you were like, you know what, that’s going to be my next thing?
TAYLOR: Yeah, I’ll start with just how I view the AI market and why we focused on Sierra. I think that right now we’re being driven by changes in technology, and a lot of the applications of those technologies are probably obvious, but they don’t exist yet. And it reminds me of the early days of the web browser. When the web browser first came out, the Internet was new.
The idea that one could purchase things online was not conceptually complex, but actually getting to the point where that was possible – how do you actually use a credit card? How does that work? How do you actually sort of fulfill the order? What are the last mile logistics? Similarly, the idea that with all this information online, you’d want to search over it wasn’t particularly novel. Google is not the first search engine I used. It just turned out to be the best and earned this market. And I think the same is true of a lot of the defining companies of the web era.
So now move forward to artificial intelligence. I think there are a lot of areas that are going to be impacted: education, the law, marketing, customer service, customer experience. So Clay Bavor, my co-founder, and I spent a lot of time thinking about which areas will be impacted first. Which areas need scientific advances to really work? Which can work with current technology? And we really settled on Sierra. I’ll tell you the short version of what we do: we help companies build customer-facing AI agents. So if you have a problem with your Sonos speaker, you’ll chat with an AI, now powered by our platform, to help you fix it.
If you have a Sirius XM subscription and you want to upgrade or downgrade your subscription, you’ll chat with Harmony, their AI agent. If you buy a pair of OluKai shoes, which is one of my favorite flip flop brands, and heaven forbid you want to exchange it for another pair, you’ll chat with their AI agent.
If you’re a Weight Watchers member, there’s a tab in their app called 24/7 live coaching, and it’s now powered by AI and built on our platform. Broadly speaking, we’re helping companies build that sort of front-line customer experience that’s now conversational. And in the short term, it’s helping a lot of people improve customer service, which is often a sore point for many brands.
And often it’s very expensive to operate and hard to provide the quality that I think brands want to. But it’s also a new form of customer experience. I imagine in 2007, when Steve Jobs introduced the iPhone, few of us would have contemplated that we’d send nearly 100 percent of our email by thumb-typing on a touch screen. But we do – and not because it’s better than a keyboard. We do it because it’s the most convenient way: it’s in our pocket. Our thesis is the same thing will happen with conversational experiences – that in five or 10 years, the main way we interact digitally will be by having a conversation with AI. And I think your AI agent will become just as important as your website, just as important as your mobile app – perhaps more important, because of the convenience of it.
So we’re really trying to help brands embrace that future, help them in the short term with things like customer service, but long term, how do they build their AI agent that will help their customers do everything?
EL KALIOUBY: Yeah, I can imagine Harmony – you mentioned Harmony, Sirius XM’s AI agent – could be your interface to the company, right? It could help you with troubleshooting questions, but maybe also advice on what products you could buy, or more.
TAYLOR: Well, just think about the services you use not daily, but maybe monthly or yearly. Maybe you are filing an insurance claim because you got a fender bender.
EL KALIOUBY: Oof, that’s painful to do right now, right?
TAYLOR: It can be. And no matter how great you make these digital experiences, you have to maybe install the app that you didn’t have installed. You have to sign in and figure out what your password is. Then you have to figure out how to navigate it, and you’re in a moment of distress. You don’t necessarily know how the UI designer designed those buttons, and you’re figuring it out for the first time, and hopefully they did a great job designing it. Imagine you’d just be able to say, hey, I just got a little fender bender in a parking lot, I need to file a claim. And a helpful, multilingual, empathetic agent could just guide you through that process.
Maybe you actually don’t want to chat because you’re standing on the side of the highway. You can do it over the phone and you don’t need to wait on hold because the AI answers instantly. I think the idea that in those moments that matter you have to wait on hold, that you have to be transferred to the right person who has that expertise – it could be such an inhumane experience for people who go through it. So I’m just so excited for companies being able to provide this white-glove, incredibly empathetic experience 24/7 and also at a fraction of the cost of existing options.
How empathy can turn AI support into a better human experience
EL KALIOUBY: I love that you brought up empathy. I spent 20 plus years of my life building emotion recognition and empathy into machines. And I think it’s really important that these AI agents have that empathy because that’s going to be the thing that builds trust with these agents. Do you agree?
TAYLOR: Absolutely. The earlier chat models were developed with a technique called instruction tuning.
EL KALIOUBY: Instruction tuning. It’s a technique used to fine-tune large language models so they follow instructions and respond helpfully in conversation.
TAYLOR: And it’s remarkable how empathetic most conversational models are just out of the box. They’re sort of designed to reflect the sentiment of the person having a conversation with them. And I think there are a lot of things that are afforded by technology that are counterintuitive. Maybe you’re having trouble with your Sonos speaker. Not everyone’s as geeky as you or I are. So for some people, they might want to be on the phone for 30 or 40 minutes. If you’re perhaps in a contact center doing that, maybe your manager wouldn’t be so excited about you spending 40 minutes on the phone. Maybe you need to – not in a rude way – sort of move it along. With an AI, no big deal. And I think for all these conversations, you can have the AI be empathetic in terms of tone and substance, but there are more subtle forms of empathy, like being able to answer the phone instantly, being able to operate at your cadence and your pace, not necessarily the pace that’s oriented by other business interests. And I think that’s incredibly exciting. There’s a website – I can’t remember the name of it – where you can find the phone numbers for consumer brands because it’s so hard to actually call most consumer brands. I think it’s based on a real simple business truth, which is it’s very expensive to have a conversation with their customers.
And as a consequence, most consumer brands make it sort of the last line of defense, not because they don’t want to talk to you. I think every brand would love to have conversations with every individual customer. They just can’t afford it. It’s just not cost effective. With AI conversationally, if you can bring down the cost of a customer conversation by one or even two orders of magnitude, how many more conversations can you have with your customers? And that’s really exciting to my mind, because you can actually have the experience of, say, going into a retail store and talking to the best associate to help guide you to your purchase. In those moments that matter, like we talked about, you can have someone that’s incredibly empathetic with instant access to information to solve your problem instantly.
So going back to the second and third order effects, I think it will really change the way companies design their customer experience, and I think we will orient around personalized conversations. And I think that’s really exciting.
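For listeners curious what the instruction tuning mentioned earlier looks like in practice, here is a minimal sketch: a model is fine-tuned on pairs of prompts and desired responses, so it learns to answer in a helpful, conversational register. The field names, the example conversations, and the `to_training_text` helper below are purely illustrative, not any particular vendor’s schema.

```python
# Minimal sketch of instruction-tuning data: each example pairs a user prompt
# with a desired assistant response, and the model is fine-tuned to produce
# the response given the prompt. Everything here is illustrative.
examples = [
    {
        "prompt": "My speaker won't connect to Wi-Fi. Can you help?",
        "response": "Sorry to hear that! Let's try a couple of quick steps together.",
    },
    {
        "prompt": "I'd like to downgrade my subscription.",
        "response": "Of course. I can walk you through the available plans.",
    },
]

def to_training_text(example):
    """Flatten one example into the single prompt/response string a trainer consumes."""
    return (
        f"### Instruction:\n{example['prompt']}\n"
        f"### Response:\n{example['response']}"
    )

# Each flattened example becomes one training document for supervised fine-tuning.
for ex in examples:
    print(to_training_text(ex))
```

The key point, echoing the conversation above, is that the desired tone (empathetic, on-brand) is baked in through the response side of these pairs rather than through hand-written scripts.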
Why great AI agents must sound and feel like the brand they represent
EL KALIOUBY: So brand identity is super crucial in these cases, right? You’re not going to give Sirius XM the same AI agent you’re going to build for Weight Watchers. So how do you approach that? How do you get the right brand voice?
TAYLOR: We believe that AI agents should be brand ambassadors. So we spend a lot of time training our AI agents not just on the standard operating procedures of your company, but on what it means to represent your brand. One of the ones that I find really delightful – if you know Chubbies, the shorts brand – their AI agent is named Duncan Smothers. He’s got a bit of sarcasm; he’s a little fun. Chubbies has invested a lot in their brand, and you can feel it when you’re chatting with Duncan. And if you chat with OluKai’s agent – the Hawaiian-inspired brand – you’ll see it and feel it in those conversations. And I think that’s great.
I actually think that just like you spend a lot of time when designing, say, a website – it’s not just about the functionality of the website, it’s about what do you feel when you see it, what’s the color scheme – your conversational AI is a brand ambassador. So whatever your brand’s values are, it should emanate from those conversations. And that’s really exciting. And it’s probably one of the most fun parts of designing these agents with our customers.
EL KALIOUBY: Can we geek out for a moment?
What is the brand-specific training data that you use to customize these agents? How would you customize them?
TAYLOR: So a lot of the companies we work with actually have brand books and voice guidelines already. Particularly in retail, a lot of companies do, because they’re trying to train their associates on how to represent the brand. We work with one luxury furniture retailer, and they have guidelines on the language of luxury and what that means.
I learned so much through that process. So fundamentally, we take all the training materials you provide your best associates. And the other thing that we do is help companies tune it over time. For those people who are listening right now who have ever made a mobile app or website – you have all your ideas and then it meets reality, right? You spend a lot of time, maybe you had a great idea for one screen in your app, and then you did a usability study and found out no one can figure out what to do. The same can happen with conversation design. So we spend a lot of time tuning tone and flow, and we built a lot of tools for customer experience leaders to actually iteratively improve and evolve their agent over time.
One of the interesting – I would say almost philosophical – points about building a conversational customer experience versus building, say, a website or a mobile app, is it’s not really bounded. Just think of a typical retail website. You could probably click and go to every single page on the site.
EL KALIOUBY: If you had the time.
TAYLOR: With a conversational agent, whether it’s over the phone or over chat, it’s just a free-form text box. Your customers will say whatever is on their mind. So consumers and customers have a little bit more say in your customer experience with conversational AI, and they’ll take it in directions that you didn’t foresee. So as you’re thinking about brand, but also tone and substance – like, what is this AI capable of doing? – one of the most fun parts of developing these agents, particularly with large brands, is they’ll find things their customers want to talk to them about that they didn’t foresee.
And actually it changes things. You take…
EL KALIOUBY: Take all that data and you basically iteratively improve or tweak.
TAYLOR: Tweak the agent design. It’s impossible to predict everything your customers will want to talk to you about, so it’s not just getting it right from day one, it’s actually building a feedback loop. And a lot of our customers call their customer experience leaders who work on their agent the AI architects. And I think it’s a new role. Conversation design is the new UI design. AI architect is like the product manager for the AI agent.
And I’m really excited for these new roles to emerge and excited to build the tools for those new roles as well, as they’re evolving over time. Just like the emergence of the web, where we now know what a web designer is, what a web developer is, front end, back end – right now we sort of know what the roles are to make a website.
So we’re really trying to not only build these agents for some of the largest brands in the world, but help all the customer experience leaders around the world think about what is my role in this new world and help them be successful as well.
How AI will reshape jobs by changing roles rather than simply removing them
EL KALIOUBY: That is really interesting because on the one hand you are taking jobs, right? You’re displacing jobs with these agents because this was ordinarily done by human customer service representatives.
But what I’m hearing you say is you’re also creating a whole new slew of jobs that didn’t exist before this agentic AI world. In the same way that with web design and even the creator economy with social media platforms, right? These were all jobs that didn’t exist before.
TAYLOR: Yeah, I think my opinions on this are quite nuanced. Certainly, simplistically, all forms of AI may displace some jobs, just like the automated teller machine displaced the role of a bank teller.
But my understanding – which may not be correct, but from the articles I’ve read – is there are not fewer bank branches or even fewer people in those bank branches. But they’re doing different things than they were doing before. So one of the things I’m interested in seeing is when you introduce AI to answer customer service inquiries, do you reduce the number of people in customer service? Or do you just have more conversations with your customers? I don’t know the answer, and it’s not rhetorical, because actually, if you talk to most CEOs of most companies, they do care about cost, but they probably care more about growth.
And if you have a technology that is a lower cost way to scale having conversations with your customers and you can develop deeper relationships, higher net promoter scores, more customer loyalty, you could choose instead of just recouping those as cost savings to reinvest them to actually make conversational experiences a more prominent form of your customer experience.
And it may lead to just a higher volume of conversations overall than what you had before. So I would say it’s an important conversation to have because with any technology change, every technology company should play a part in creating these new job roles, which certainly will exist.
I’ve always been surprised at how hard it is to predict how technologies will actually impact the way companies operate. And it’s clearly not zero sum to me. And I’m certainly not at the point of being able to predict it myself.
How executives should find practical starting points for AI adoption
EL KALIOUBY: Yeah. Every company’s trying to figure out their AI strategy – I hope so, at least. So let’s put on the hat of executives at these companies who are trying to figure out where the value-add with AI is. It’s not an AI-first company; they’re not immersed in AI. How do you help these organizations navigate the question of, okay, where do I get started? How do I vet all these different tools that are out there? Where’s the low-hanging opportunity? I’m sure you have to deal with that in a lot of the organizations you work with.
So how would you help an executive navigate that question?
TAYLOR: I always like to start with a business problem. Especially in the face of a technology as exciting as AI, it’s sort of catnip for technology teams.
And I think just like there’s not an internet department at most companies, I don’t think there should necessarily be an AI department at most companies. More than that, I think that starting with a business problem can really narrow the scope of technology vendors that you might choose to speak with, but also just narrow the scope of the risks associated with AI.
So just as an intellectual point, if you wanted to make a general-purpose AI agent that could do anything, that’s a science problem. That’s a research problem. It doesn’t exist yet. Or if it did, it would be neither robust nor safe, so you would never deploy a technology like that.
But if you wanted an AI agent to help your software engineers code, technologies like that exist today: GitHub Copilot, or Cursor, which is a cool startup that my company uses. That’s possible because you’ve narrowed the scope of what you’re trying to solve to a specific person or a specific role at your company. For the companies building those technologies, they’ve narrowed the scope of the technology problems from science to engineering. So you might say, okay, let’s look at our legal costs. Maybe you should talk to Harvey – that’s a cool AI startup more narrowly focused on that. Maybe it’s content marketing.
A lot of our retailers are gearing up for Black Friday and Cyber Monday, which are coming up – how do you personalize your campaigns? There’s a long list of really compelling startups working on marketing and personalization using this technology. You want to impact customer service? Call Sierra.
Obviously, I’m—
We’re the best on this, and I think we’re the market leader in this area.
EL KALIOUBY: Our agentic AI future is here. And now that it is, what does it mean to build these AI agents responsibly? That’s in a minute after a short break.
[AD BREAK]
What responsible AI looks like when trust is the product
EL KALIOUBY: We like to really interrogate this idea of responsible AI on this podcast. So what does responsible AI mean for you, and for Sierra?
TAYLOR: Yeah, it’s a great question. I really like OpenAI’s mission, which is ensuring that artificial general intelligence benefits humanity. I think that is a lofty mission that may not apply as much to more applied companies like Sierra.
But I always start with the benefit of humanity, and with the recognition that technology is not inherently good or bad – it’s what you do with it. And I think responsibility, being responsible, means that you’re playing a part in that – that you’re not passive in that conversation with stakeholders, whether it’s your customers or governments or society. One of the things that I think a lot about is where technologies have great promise but have violated society’s trust.
Probably nuclear energy is one of the areas where most of the folks I know who are smart and care about climate believe and wish that nuclear were more prevalent as we figure out how to scale energy storage and renewable energy. But because there have been multiple moments of violating society’s trust, society sort of revoked our license to invest in nuclear. So when I think about AI, I think a lot about that – we need to experiment with this technology, but we need to ensure that all the companies working in this space put in the appropriate guardrails so that society’s experience of this technology doesn’t trip over those wires where people say, hey, I’m not sure this is good. I’m not sure this will benefit society. So we just think a lot about trust and that word trust.
And I think it means multiple things. It means, does the technology actually do what it says it’s going to do?
Another form of trust is, are the experiences I have with this technology good, in a more qualitative way? And I know that might seem far afield from responsibility, but I do think that when you think about society’s relationship with technology – take complex ones like social media, the smartphone – it can be an inkblot test of whether you think, oh, I’m addicted to this technology and I pull it out all the time, or I have a supercomputer in my pocket that can do all this. Both are true at the same time. So I really think that with AI, our relationship with it as a society is a sum of all of those experiences. It can be bigger, headier things like AI safety. It could be small things like the last time I chatted with an AI, was it delightful? Or was it annoying?
And I think our relationship with AI will be the sum of all of those things. When I think about responsibility, it’s that we, as developers of this technology, need to be accountable for that and participate in those conversations and not be a passive participant.
Why the best AI founders obsess over customer value not hype
EL KALIOUBY: I love that. Last question.
You have started, sold, and run many companies. What’s your advice for founders of AI companies today?
TAYLOR: So I think it’s really important to relentlessly focus on your customer’s success. If you’re building an AI company today, much like during the dot-com bubble, there is so much excitement about the technology itself. It’s very tempting to be the coolest person at the Silicon Valley cocktail party because you have the coolest demo. And rarely will that mean you become the most successful company, because success is driven exclusively by building value for customers. I think because there’s so much excitement about the technology, I’m not sure it’s inversely correlated, but there’s a strong risk that the excitement of your peers building AI might not be correlated with value for customers.
And I think that right now, entrepreneurs who spend a disproportionate amount of time with their customers and deeply understand the business value they’re providing will be much more likely to create an enduring business.
EL KALIOUBY: Amazing. Love that. Thank you so much for joining us today.
TAYLOR: Thank you for having me.
EL KALIOUBY: I am so excited for a better customer experience powered by AI, whether it’s coming from Sierra or many of the other emerging companies in this space.
With Black Friday and the holidays just around the corner, this is the kind of tech that will make YOUR life easier. People often think about these customer service AI chatbots when something goes wrong – like a canceled order or a delayed shipment.
But it’s not just about that. We’re going to see more and more of these AI brand ambassadors. And the promise is that these AI ambassadors will elevate the customer experience all around.
If you’ve had the experience of talking to a smarter AI chatbot or AI agent, was it helpful? Was it personable? Or even delightful?
Episode Takeaways
- Bret Taylor opens with a simple contrast: people hated old-school chatbots, but they love ChatGPT, and that gap comes down to whether the experience feels useful, natural, and even delightful.
- As OpenAI chairperson, Bret says the company’s nonprofit mission keeps it focused on ensuring AGI benefits humanity, while balancing safety through what he calls responsible iterative deployment.
- That same practical lens led Bret to launch Sierra AI, which helps brands like Sonos, SiriusXM, and WeightWatchers build conversational agents that could become the new front door to customer experience.
- He argues the best AI agents are not just efficient problem-solvers but empathetic brand ambassadors, shaped by company voice, refined through real customer feedback, and designed to earn trust over time.
- Looking ahead, Bret is candid that AI will reshape jobs, but he urges leaders and founders to start with real business problems and stay relentlessly focused on customer value, not just flashy demos.