As global leaders disperse from the World Economic Forum, LinkedIn co-founder and tech investor Reid Hoffman joins Rapid Response to break down the biggest challenges and opportunities facing business today, from political headwinds tied to immigration and geopolitics to AI’s real-time impact on industries like music and healthcare. Reid also explains why fears of a tech bubble aren’t shaping his investing, what it really means to be an AI-first organization, and why this moment calls for CEOs to speak up and show courage.
About Reid
- Founder of LinkedIn
- Host of Masters of Scale
- Partner at Greylock Partners
Table of Contents:
- Reid's AI-generated Christmas album
- Reid Hoffman on the AI conversations at the World Economic Forum
- Is there an AI bubble? Reid Hoffman weighs in
- How Reid Hoffman evaluates AI start-ups
- The case for AI providing a second medical opinion
- Energy, climate & the future of AI infrastructure
- How to regulate AI
- Immigration, talent & the prosperity of the tech sector
- Reid Hoffman on why business leaders should speak up
- How Reid Hoffman suggests we use AI in our daily lives
- What companies get wrong about implementing AI
- Episode Takeaways
Transcript:
AI’s new tune, Davos & the need for CEO courage
REID HOFFMAN: People like to have a cause célèbre and virtue signaling of, “I’m really good, and that AI thing is bad.” It’s the greatest human amplifier for quality of life that’s been built in human history. You’re saying, “Go somewhere else.” It’s like the Bernie Sanders stupid, “No data centers here. Build them all in Canada. Have Canada get all the economic benefit. Let’s make sure we Americans don’t.” It’s crazy thinking. It’s stupid thinking. You have responsibilities. They’re commensurate with your power, and so you need to speak up. I, myself, get regularly called out by the White House. We need to be speaking up, and we need to be figuring out how to solve our problems together.
BOB SAFIAN: That’s Reid Hoffman, of course, founding host of Masters of Scale, co-founder of LinkedIn, and tech investor. I’ve had Reid as a guest on the show many times. Rarely has he been this animated with the World Economic Forum in Davos as a backdrop. Reid and I talk about the latest in AI, as well as the leadership challenges and opportunities facing the business community. We talk about music and medical care, how the prospect of a tech bubble is and isn’t impacting his own investing right now. And we dip into the roadblocks posed by political headwinds, from ICE activities to Denmark. It’s been a busy start to 2026, so let’s get to it. I’m Bob Safian, and this is Rapid Response.
[THEME MUSIC]
I’m Bob Safian. I’m here with Reid Hoffman, co-founder of LinkedIn, partner at Greylock, founding host of Masters of Scale, co-host of the podcast Possible. Reid, great as always to chat with you.
HOFFMAN: It is always great to see you, Bob.
Reid’s AI-generated Christmas album
SAFIAN: I have to start by thanking you. You sent me a holiday gift of Christmas music, an album of all AI-generated songs. Here it is. Your own creations, what sparked that project?
HOFFMAN: We’re actually going to put it up on Spotify because we got such positive responses to it. It was a couple things. So, one is, when you look at AI and the superagency of increasing human expression, whether it’s text, picture, video, music is another really central thing. And me as a start-up guy, I was thinking, “Are there start-up opportunities here? Is there something that could be done?” And I haven’t yet figured any of that out. I’ve talked to a number of different interesting people. But what I did realize is I could just use the tools to start showing that expression, that expansion of the scope of possibility amongst our imagination. That’s now, with AI, massively extended.
And I was like, “Well, I should use music even though I am not a composer of music. I am not a player of music. And I could use it to create something that’s humanly expressive.” And I said, “Oh, that’s a good idea. What should I do with that?” With most Christmas music, there are maybe five Christmas songs I would want to listen to once or twice a season, and that’s it. And most of it is schmaltzy “Happy, happy Christmas, best time of the year.” And you’re like, “Oh my God, it’s mall music. It’s Muzak, et cetera,” especially as it’s massively overplayed. How could I be affectionate to the holiday while creating the kind of music that we would want as part of human gesture? Days from now, maybe even before this podcast airs, we’ll have it up on Spotify.
SAFIAN: We recently had Harvey Mason Jr., the head of the Recording Academy, on the show to talk about the Grammys, and he was surprisingly open to AI-generated music. Are you hoping for a Grammy in your future?
HOFFMAN: I’m not sure that I would ever, ever, even potentially be within the same planet as a Grammy. Harvey, I’ve had email exchanges with. I actually sent them the record so that they’d get a little exposure to it. But I do think that one of the things people have to get through is, look, not to retroactively try to hold onto the past. Part of art and everything else is: how do we create the future? If you have ideas and imagination, the tool set for creating stuff completely changes. So, for example, lots of people can code who didn’t know how to code before because of coding tools. Do engineers go, “Oh, down with these evil tools because we should have a monopoly on our own coding”? No, it changes the landscape. If you can’t afford a lawyer and you’re looking at a lease, you can use the AI tools.
All of these things are part of the human world that has opened up to us, and music and creativity is some of it. One of the things I actually told a multiple Grammy winner, I said, “Look, let me tell you two things. The first thing’s going to terrify you, and the second thing I hope will make you insanely optimistic and curious.” The first thing is, “I can create a knockoff of your music and song today with fairly simple prompting,” and go, “Here it goes and here it is.” And you go, “Shit, that’s kind of my voice and that’s my kind of song and I’m known for that.” And you go, “Right.”
What makes the AI tool interesting is there is an award-winning global architect in Japan. The way this architect creates his initial thing when someone says, “Hey, I might want to hire you for a building, please come give me some ideas.” Literally, he’ll go into ChatGPT, and actually, I think DALL·E and Midjourney, and probably Nano Banana now, and say, “Create me 20 things in my style with different kinds of prompts.” And then he goes, “Okay, three, seven, and 15 are really good. Let’s iterate on those some, let’s pack them up, let’s bring them. And that’s my style.” It’s actually massively helpful. And the same thing is true for music. So, when I was talking, it was like, “Look, you can now create stuff in your style, and you can create hours of interesting music in minutes.”
And then you can go, “Well, okay, this song’s not so good. Hey, actually this bit, seconds 32 through 57 are really good. We’re going to pull that out, and we use that as an anchor for the next genius thing that we’re doing.” And that’s part of the creativity that this increases. And so, part of Superagency, the book, as you know, is I’m trying to not just tell, but show. I’m going to do another minimum six records this year and maybe a lot more because it’s given me all kinds of different ideas about what kinds of things could create the light and sharing and community within the human experience. And that’s a great use of music.
SAFIAN: I started with the music because I wanted to hear your spirit and have some good news to start, because the early part of 2026 has been quite a ride. I mean, between vaccines and Venezuela and Minneapolis and Greenland. How are you feeling these days?
HOFFMAN: Well, it’s roughly, “Can we hold it together to get the amazing thing that’s coming with AI?” Because obviously, the political world’s going to hell in a handbasket. All these really terrible, terrible things happening. But AI’s capabilities for human empowerment, I mean, imagine the world has changed so that your smartphone is essentially a good doctor. It doesn’t have to be perfect, but it’s there always. Whenever you have a major medical issue, a first opinion, if you have access to a doctor, is still the right thing to do. But if you’re not doing a second opinion today, you’re making a huge mistake. That’s amazingly good.
Reid Hoffman on the AI conversations at the World Economic Forum
SAFIAN: The World Economic Forum is going on in Davos right now, and along with the geopolitical issues, AI has been a central topic, but it feels like the conversation has been around things like AI security and the impact on jobs and economies. Is the state of the conversation there what you expected?
HOFFMAN: Well, it’s what I expected, but it’s also one of the reasons why I do books like Superagency, podcasts like Masters of Scale and Possible and other things in order to try to say, “Look, you can’t avoid the bad futures by just trying to avoid the bad futures. You have to steer towards the good futures.” It doesn’t mean you ignore bad. It doesn’t mean you ignore cyber risk or you ignore issues of digital sovereignty or you ignore issues of geopolitical power imbalance. But you try to say, “Hey, how do we get to a really good future?” I constantly talk to people who are like, “Well, I’d really just like us to pause. Until we sort out this particular problem, we should all pause.”
And you think 8 billion people are going to pause? So, it’s like, “No, no, no. We’re going in that direction.” You’re going with the rapids, and it’s a question of how you row your boat, where you’re going, what you’re trying to do, and it is shapable. You just have to take agency and do it. And so, the Davos conversation, I think 80% of it is a non-productive conversation or even a counterproductive conversation. We’re squandering this huge opportunity to say, “Hey, AI is American intelligence, and just like the American world order, this is something we want to build collaboratively and to be helpful to every society that we’re not enemies with.”
SAFIAN: The enemy is not necessarily Denmark, is what you’re saying.
HOFFMAN: No.
SAFIAN: Denmark should be at the top of the friends list. I mean, you and I have talked about some of these issues now for so long that I sometimes forget: you have been at the center of so many key AI developments. You’re a co-founder and early board member at OpenAI, played a role in bringing Microsoft and OpenAI together. You helped launch Inflection AI and Manas AI. There are your investments and your books and your podcasts. And all of it is helping to usher this new era into being, and you’ve taken on that role. How conscious was that choice and why? And do you feel the weight of it?
HOFFMAN: Well, I definitely feel the weight of it. And part of the thing that I think is a mistake about how some people think about this is we are better as a society when you have a, call it a, very broad conception of self-interest. The way that I think of things is: humanity, society, and then me. And it doesn’t mean “me” is not present. It means me as part of industry, part of society, part of humanity. And some of it is obviously to your own benefit too. And then when it’s aligned, this is part of what makes the genius of capitalism and what makes certain societies so much more productive, because they go, “Hey, let’s align the areas where there is a me that also goes to a very broad us.” And so, when I approach AI and technology, I think of it on that level.
For example, I would trade off my own economic interest in a heartbeat against society and humanity. And that’s the journey we are on with AI. It’s the greatest human amplifier for quality of life that’s been built in human history. It’s like the same reason why we have cars and not horses and buggies. “Hey, I make my living grooming horses.” It’s like, “Well, horses’ role in society has changed. It’s no longer primary transport.” You have to go through the change. “Well, I would like the change to happen after my generation.” It’s like, “Well, that’s never the way it works.” Those folks who adopt these technologies and shape them to how we live good human lives are part of how you enable the next generation and the next generation.
I, obviously, deploy commercially when it makes sense because commercial models are part of what gets you to scale. But what matters first is humanity. What matters second is society. What matters third is functional industries, which matter for economic and other reasons, and as part of that it gets down to what matters for me.
Is there an AI bubble? Reid Hoffman weighs in
SAFIAN: Alongside your optimism, what you call reasoned optimism about AI, there’s at the same time this constant talk about an AI bubble, about valuations and hype around some use cases. Is an AI bubble, or some bubbly parts of AI, inevitable? I mean, in any tech cycle, there’s overinvestment and some big bets that don’t pay off. Or is bubble a word that’s too loaded, and you would describe it a different way?
HOFFMAN: A good use of the term bubble is when you think the economic frenzy has got to a point that when it breaks, it will take the economic system down with it. That’s horrific suffering across, frankly, the globe. So, a bubble is a substantive risk of that. It is not, “Hey, valuations are higher this month than they should be, and they’ll be corrected in four months or 12 months.” So, “bubble” means this catastrophic unwind, whereas what we might have, in various ways, is speculative investment. For example, you say, “Hey, if you’re managing a pension fund, should you be buying the stock market at the top of the stock market?” Because the answer is probably not. Even though, by the way, buying the stock market over 10-20 years is exactly what you should be doing.
So, there’s some timing issues. I’ve passed on a number of investments because of valuations, because I’ve gone, “Look, the risk-reward on that valuation is not right. I’m still making other investments.” Some people go, “Well, the AI revolution, you should buy at any price.” Once a lot of people start thinking “buy at any price,” you know that you’re at risk of a bubble. Whereas when people are going, “Oh no, this one’s a good bet, that one’s not a good bet, and some of these will be wrong.” And by the way, some of them, like, say we built data centers to the full capacity of what we needed two or three years too early. By the way, nothing suggests that. There’s huge demand for training, there’s huge demand for product development, and there’s huge demand for inference.
The expectation is that with data centers we’re going to be in multi-year territory of, “Can we have 5X the number of data centers? Can we have 10X the number?” And by the way, there’s these U.S. political things: “Oh, is this data center bad for communities?” Look, I would allow any community to build the data centers they want. It’s having industry. It’s useful to have local power plants. They say, “Hey, you’ve got to be careful about individual citizens’ electricity prices going up.” You need to add as much power as you’re going to use or more on a green-effective basis. Great, because then you get generally more power on the grid, and you have compute available. This is precisely what’s going on.
Davos is going, “Oh my God, it looks like the two countries that are taking the lead at having a lot of compute available are the U.S. and China, we all need to be included too.” And yet, you’ve got idiots like Senator Sanders in this country who are like, “Oh no, no, we’d rather have horses and carts. Please don’t have data centers here.”
SAFIAN: Reid’s passion about AI is so compelling, and he’s an equal-opportunity critic of what he sees as drags on U.S. competitiveness, whether it’s coming from the Trump White House or from Bernie Sanders. So, where does Reid see the most risk and the most opportunity in technology right now and how is right now a moment for courage from business leaders? We’ll talk about that and more after the break. Stay with us.
[AD BREAK]
Before the break, Reid Hoffman explained why he sees AI as the greatest tool to improve human quality of life. Now, he talks about what it’ll take to make the journey from here to there as investors, business leaders, and society. And he details his own personal tactics as an investor, plus a rapid-fire round on AI talent, what it takes to be an AI-first organization, and the case for CEO courage. Let’s jump back in.
How Reid Hoffman evaluates AI start-ups
When you look at AI start-ups today versus, say, two years ago, has what defines a defensible, attractive AI start-up changed for you, whether it’s valuation or what constitutes a moat?
HOFFMAN: So, moat’s more important. Valuation is a risk-reward thing. You can take a big risk when you get to something that’s a really good moat. When you are uncertain about moats, then you want to take less risk. The mistake people frequently make is, say, for example, they go, “Well, coding assistance is going to be really valuable. My first round will be raising at a $10 billion valuation, and I want to raise $2 billion.” And you’re like, “Well, there aren’t that many companies that are worth more than $10 billion.” So, you’re saying that, on a risk-adjusted basis, you’re going to be so high on that list that you’re going to get a multiple from where you are now, even though you haven’t built any product.
Part of being smart about investing, and this gets to your moats question, is like, “Well, what is everyone else doing in coding? And by the way, what is Microsoft doing in coding? What is OpenAI doing in coding? What is Anthropic doing in coding? What is Google doing in coding?” And you have to analyze that risk-return.
Now, sometimes part of how some growth investors’ epic careers are made is they say, “Well, everyone else thinks that investing in this thing at $10 million is nutty, but I know it actually has a pretty good shot at a multi-hundred-billion-dollar valuation without tons of additional capital.” If you have a reasonably high probability of that, then you can go. If it’s a 1% probability, you’ve got a problem. If you’ve got a 50% probability, woo, that’s probably a really damn good investment, and I know something other people don’t know.
Part of being a good investor, it’s not just, “Do you identify a technology trend? Do you identify good founders?”, but also, valuation is one of the things that makes good investors. Now, part of the reason why we have a lot of people talking about a bubble is they go, “Oh my God, there’s these new valuations that we’ve never seen before ever in the history of technology investments.” And the answer is, yes. And by the way, some of those will be absolutely right. People who have invested in OpenAI and Anthropic earlier, where people say, “Oh, that’s totally crazy,” have made historic returns already today. You can’t presume that you pick 100%.
That’s part of investors having a portfolio. I go, “Okay, I try to pick 20 things, and I try to make four-plus of them epic, and I try to make as many of them money-making as possible. But by the way, in my 20, I probably have five to 10 that are like, ‘Oops,’ but that’s because I’m being bold enough in my investments.”
SAFIAN: For a public markets investor, and I’ve tried to explain this sometimes to my kids: sometimes, you can lose money on a good company and make money on a crappy company because of when you get in and when you get out. But when you’re talking about venture investing, it has to be a good company, right?
HOFFMAN: Yes.
SAFIAN: I mean, you can lose good money on a good company, but you’re never going to make money on a bad company.
HOFFMAN: Actually, occasionally, it does happen because occasionally, you’re lucky. Random company is bought strategically for a ton of money that you’re like, “Ooh, good. You take it.”
SAFIAN: Got out of that one.
HOFFMAN: It does happen. But the key thing, and this is the thing, is I try to be a long-term builder, not a market timer. When I invest in a company, it’s because I think 10 years from now, it’ll be industry-transforming. It’ll be a LinkedIn, it’ll be an Airbnb, it’ll be an OpenAI. Those are the kinds of things that I’m essentially trying to do. And so, you have to be compounding value to society and the market in order to do that. And that’s what I put all of my thinking into.
SAFIAN: If you’re game, I’d love to do a rapid-fire round with you, give you some AI topics.
HOFFMAN: Absolutely. Yeah.
SAFIAN: From an investor perspective, are we at a consolidation phase when it comes to AI, things are coming together? Or is it still unbridled innovation?
HOFFMAN: Much closer to unbridled innovation. There’s a number of things that people haven’t seen yet that it isn’t just, “Oh, a small number of the big frontier models are all the winners.” I think there’s going to be a bunch of other stuff.
The case for AI providing a second medical opinion
SAFIAN: You said that you want your doctors to use AI to double-check their work. So, are you saying that I should trust AI to know better than my doctor?
HOFFMAN: Well, sometimes it will, but that’s the reason I was very precise. The second opinion with AI is totally cheap. It’s easy. And if the second opinion contradicts the first opinion, get a third. Go talk to another doctor. Sometimes, your doctor is wrong, by the way. Sometimes, ChatGPT is wrong too, but that’s the reason why we consult multiple opinions, because you go, “Well, now I’ve got two doctors and a frontier model or two.” At this point, you should have a pretty good sense of what you need to dig into, if not the answer.
SAFIAN: And the doctor themselves should be using the models to double-check themselves.
HOFFMAN: I actually think it’s literally almost malpractice not to be doing that today.
Energy, climate & the future of AI infrastructure
SAFIAN: Energy and AI. We talked about this a little bit, this political pressure on tech firms to absorb grid costs and communities pushing back. Do you see the energy economics of AI playing out in a particular way in the year ahead?
HOFFMAN: Well, this is one of the areas where people like to have a cause célèbre and virtue signaling of, “I’m really good, and that AI thing is bad because it’s bad on energy, it’s bad on climate, it’s bad on local electricity costs.” And by the way, it doesn’t mean that there won’t be something there. But for example, when people say, “Oh, AI is already raising electric costs,” and they give you their data analysis, it’s like, “Well, you’ve got a whole bunch of rising electric costs that have nothing to do with AI.” There’s no data centers there, and you’ve got some other areas where there’s data centers with the electric costs the same. You haven’t actually even yet made the case. Now, I’m not saying it’s impossible to happen, but part of the responsibility of building that is we should be building out clean power.
Microsoft and Google and so forth are taking huge risks on helping buy stuff from geothermal plants and other kinds of things to prototype green power, which should then be spread through use in your HVAC system or your washing machine, your microwave. And by the way, then you begin to apply AI itself. I could easily imagine you can train AI models today to save 20 to 30% of the power of your average middle-class household with no impact on quality of life. So, the demand is not, “Stop this AI because it’s bad for climate.” It’s, “Hey, make sure you’re doing all of these things in your data center development, in your power development, and your use of AI that make us net better on electricity and net better on our environment.”
How to regulate AI
SAFIAN: AI and regulations. Last time we talked, you used an example of cars not being invented at the same time as seatbelts. Are we any closer to fastening ourselves in when it comes to AI?
HOFFMAN: Not really. Maybe we’re closer on, “Hey, we should have more protection of children,” in part because we have bad actors like xAI and Grok that are saying, “Hey, we don’t mind if we create sexualized images of children.” So, maybe we have to lean in more on those things. But broadly, for example, if you say, “Well, what should regulation do?” It’s like, “Well, create a clear safe harbor for how all tech companies can create medical assistants to be of more assistance to millions and billions of people.” A simple safe harbor: “Remember, I’m not a doctor,” and, “Remember, you should talk to a doctor if you have access to one.”
Maybe a part of it is that when I can go see a doctor, I can say, “Hey, AI of my choice, please produce a summary for a doctor about the whole conversation we had about this, and please talk to my doctor to help my doctor, she or he, understand what it is I’ve been trying to navigate.” If you have those things, it’s a safe harbor for experimentation, even when it gets it wrong sometimes. Now, should we be figuring out what the benchmark is to say, “Well, what’s the number of errors that’s acceptable?” Yes, but we can only begin to understand that once we begin to deploy it and see what the data is.
Immigration, talent & the prosperity of the tech sector
SAFIAN: AI and talent. One of the biggest drivers of U.S. tech leadership has been attracting talent from outside the U.S. Immigration policies have tightened up. The outside pipeline has basically been cut off. Are we going to start to see any implications from that this year?
HOFFMAN: Well, I think we already are. A huge amount of the technology advantage the U.S. has had is because of Indian and Chinese talent that’s come over. Well, now, the Indian talent’s going to stay there. It’s going to go to Canada. It’s going to go to Europe. When someone comes and builds a huge company here, it creates lots of jobs for restaurants and accountants and all kinds of services, and then buying stuff from American manufacturing and staying in American hotels and all the rest of the stuff. You’re wiping out all of that, and you’re saying, “Go somewhere else.” It’s like the Bernie Sanders stupid, “No data centers here. Build them all in Canada. Have Canada get all the economic benefit. Let’s make sure we Americans don’t.” And it’s just literally up is down. It’s crazy thinking, it’s stupid thinking.
And so, you want that immigration. That’s how we built the prosperity of this country. For 250 years it has come from generation after generation of us going, “Here is the way that we will take immigration and make it a competitive advantage over every other country in the world.” Now it’s like, “Oh, well, let’s take our competitive advantage and let’s sabotage it.” Now, none of this says that we haven’t gotten to a place where we have problems with the borders, we have problems with asylum, we have a set of other things that of course need to get fixed. People are saying, “Hey, I’m feeling pain in my job, in my community, my environment. What’s going wrong? Help fix it.” And we should be doing that. But by the way, completely closing the border is not the right idea.
I mean, you could do that as a start, just as saying, “Hey, let’s re-normalize.” But then you have to understand, for example, the earlier, “Well, we’re going to send ICE after all of the agricultural workers.” And then it was like, “Oh, our farms are going to stop working.” “Oh, don’t do that. No, no. Send them into the center streets of Minnesota, so they can beat up people and shoot people. Do that instead.” You’re like, “Okay, that’s not good either.” It’s frankly catastrophic and terrorizing. So, if you want to see domestic terrorism, see how ICE is operating in some cities and some environments. And so, it’s like, “Okay, what are the things to do to actually really solve Americans’ problems? That’s what we need. And some rationality in immigration is absolutely essential.
How do we have prosperity for our society, for our children, for our grandchildren, and including a bunch of communities that right now feel a lot of pain? How do we solve all those problems?” That’s the thing we need to be doing.
Reid Hoffman on why business leaders should speak up
SAFIAN: The political climate has made business leaders more cautious about commenting on societal issues. What do you say to people about when it’s worth weighing in, or even necessary to weigh in, or whether now just isn’t the moment?
HOFFMAN: Look, the theory that if you just keep your mouth shut, the storm will blow over, and it won’t be a problem: you should be disabused of that theory now. That is not what’s happening. Lots of people say, “Oh no, no, this tariffs thing. This is just an early negotiation tactic.” And it’s, “Look, the volatility is a massive sabotage to business. Our young people aren’t being hired.” Well, yeah, businesses are in a highly volatile situation. They’ll say, “Well, we’re going to do no hiring until we understand what’s going on.” That’s the message that’s being sent from the White House out to the whole business community. And so, you need to speak up. And you go, “Well, but what if I speak up? Then they’re going to penalize me.”
And it’s like, “Well, by the way, precisely when you feel fear, you should think about, is this a time for courage?” Because of course, it shouldn’t be punitive for you to speak up about what your knowledge and expertise and experience of what’s going on is. I, myself, get regularly called out by the White House, and basically only for political persecution purposes. They’ll say I’m a close associate when, unlike Trump, who has all these pictures with Epstein at parties, I did a little bit of fundraising for MIT. Well, you guys have all the documents. Release all of them. Let’s let people decide the truth of this themselves. So, stop lying about me and reveal all the documents. So, speaking up is actually, I think, really important.
And part of the reason why I do so is not just because of me and my sense of moral right, like the First Amendment, freedom of speech, freedom of assembly, but it’s also to try to give other people a sense of: look, you should speak up about the things that you think are real. And if you feel fear, get some other people to speak up along with you and put the energy into it. Don’t just go, “Oh, I’m going to create a rationalization. I’m going to say, ‘Hey, I don’t need… I’m not being fearful. I’m not being a coward. It’s the right thing. It’s the right thing for my business.’” And look, human beings first: humanity, society, and you are members of both of those. Speak up, be present on those things.
And by the way, when you’re powerful, and one form of power is wealth: anyone who’s wealthy in this society should be extremely grateful for being part of the society. You have responsibilities. They’re commensurate with your power, and so you need to speak up. And by the way, not only does the current administration want to silence all of this speech and say, “No, no, you’re not allowed to. You must take pledges of loyalty.” But I also get arguments from the lefties who go, “Oh, well, you as a wealthy person, you have no moral right to speak.” I’m like, “Yes, I do. We all have a right to speak.” Some people might value my speaking for my knowledge of how companies are created, how prosperity is created, how you have a vibrant economy.
And that’s part of what creates jobs. I mean, I’m a guy who’s created a site that has many hundreds of millions of people participating in it in order to find work. Should people weigh my opinion on some things more than others? Absolutely. Should they weigh it less on certain things than others? Absolutely. But we need to be speaking up, and we need to be figuring out how to solve our problems together.
How Reid Hoffman suggests we use AI in our daily lives
SAFIAN: I have two more rapid fire questions. AI role prompting. You recently talked about this on Possible. Don’t ask AI for the answer, but for different answers in different styles. Are there roles, styles that you rely on when you’re doing role prompting?
HOFFMAN: Yes. Maybe call it a minimum of three levels. The first, simplest role prompting is, “Be my critic. I think X, how would you argue against it?” It doesn’t have to only be the contrarian, the so-called devil’s advocate. It can also be the, “Elaborate more: what arguments did I miss, or where is it complicated, et cetera.” The next level down is, “Which expert points of view might bear well on this question? Are there really good perspectives that I should consider on this?” Part of my book, Superagency, was this question of human technological development: “Well, as a historian of technology, how would you critique this?”
And so, you have this expert role description. Now, the final thing is to think about these as teams of roles. You should be thinking about roles in a team and how you’re bringing that team together. And this is actually what I think the future of work looks like because I think what happens is we don’t really have individual contributors. Maybe we have individual contributors like you and I talking to each other right now, Bob. But I think within a year, we’re going to have little AI agents that are going to have this window that says, “Oh, Bob, Reid forgot to mention this. Ask him about that.” Or, “This parallel to what Reid’s doing, or this parallel to this piece of news from Davos, that’d be a great thing to bring up, great.” And so, it’s managing teams of agents even in our roles as “individual contributors.” And that’s part of how we’re going to create a more human, more productive, more amplified, more super agency future.
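The three levels Reid describes can be sketched as plain prompt templates. This is a minimal illustration, not anything from the episode; all function and role names here are hypothetical, and the resulting strings can be pasted into any chat model.

```python
# Sketch of three levels of role prompting: critic, expert, team of roles.
# Helper names and example roles are illustrative assumptions.

def critic_prompt(claim: str) -> str:
    """Level 1: ask the model to argue against your own position."""
    return f"Be my critic. I think {claim} How would you argue against it?"

def expert_prompt(role: str, question: str) -> str:
    """Level 2: ask for one specific expert perspective."""
    return f"As a {role}, how would you critique this: {question}"

def team_prompts(roles: list[str], question: str) -> list[str]:
    """Level 3: assemble a team of roles, one prompt per perspective."""
    return [expert_prompt(role, question) for role in roles]

if __name__ == "__main__":
    claim = "AI will amplify rather than replace individual contributors."
    print(critic_prompt(claim))
    for prompt in team_prompts(
        ["historian of technology", "labor economist", "skeptical journalist"],
        claim,
    ):
        print(prompt)
```

The point of the third level is that you stop asking for one answer and instead collect several distinct perspectives you can weigh against each other, which is the “teams of roles” idea Reid describes.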
What companies get wrong about implementing AI
SAFIAN: As you’re talking about the workplace, it leads me to my last question here, which is about this term, AI-first and the AI-first organization. And I’m curious what you think the biggest bottlenecks are for organizations that are trying to become AI-first or talking about becoming AI-first?
HOFFMAN: Massive reinventions are very difficult. So, the easiest ones to become AI-first are individuals or very small organizations, and that’s where I think you see most of it happening. Now, because society also runs through a lot of large institutions, we need the renovation of these as well. Part of that is to start saying, “Hey, are we, on a, call it, weekly basis, playing with and understanding the trajectory of AI and thinking about the ways that we should change?” And it almost comes full circle. Part of the reason why I’m trying to use AI in as many variations as I can, including making records, is because that is what gives me a lens into not just how I can operate, but how my team can operate, and how the portfolio companies I work with can operate.
And that evolution, you have to presume, is dynamic. A lot of people say, “Well, I’m going to wait until it all settles out, and then I’ll evaluate it, and I’ll adopt it.” It’s like, “Yeah, you’re going to wait forever.” You need to be adopting now, every week. “What is something I tried that worked, that maybe we should build more around or anticipate its trajectory? And what’s something I tried that didn’t work, and what do we learn from it? Do we think it won’t work forever? If we don’t think so, we should try it again in six months, et cetera.” And that’s the thing you should be doing, not just as individuals, but as teams and as companies.
And that means, by the way, you’ll have some points of failure, you’ll have some breakages, you’ll have some mistakes. You have to understand what the parameters are of that, how to correct it, et cetera, but that is central for the journey to becoming AI-native.
SAFIAN: Well, great as always to chat. I look forward to the next time.
HOFFMAN: Awesome to talk with you, Bob. Happy New Year.
SAFIAN: I so enjoy talking with Reid, in part because he’s such an eclectic thinker. He’s one of the central figures in AI expansion as an investor and a founder, but also by modeling what’s possible more broadly, as his AI Christmas album shows. In a similar way, Reid is modeling courage for others in the business community by challenging political figures from the White House to Bernie Sanders. Whatever you think of Reid’s specific policy positions, it’s instructive to call out assumptions and to champion the open, respectful sharing of ideas.
I don’t agree with everything Reid believes, whether about AI or about Trump, but I fully endorse his spirit of courageous engagement, active experimentation, and steadfast optimism, pushing for a better tomorrow and believing that we have the agency to create it. That’s the American way. I’m Bob Safian. Thanks for listening.
Episode Takeaways
- Reid Hoffman highlights the transformative power of AI, describing it as the greatest human amplifier for quality of life and challenging resistance to its rapid integration.
- He shares his personal experiments with AI-generated music, noting how new creative tools open possibilities for artistic and business innovation.
- The conversation explores the risks and realities of a potential AI investment bubble, with Reid stressing the importance of moats, careful valuations, and long-term impact over hype.
- Reid advocates for rational immigration policies, emphasizing how international talent has driven U.S. tech leadership and warning against shortsighted political moves.
- He calls for business leaders to display courage by speaking out on societal issues, insisting that power brings responsibility, especially amid turbulent political and technological change.