Vinod Khosla is one of the leading investors and technologists of our time, consistently making Forbes’ Midas List (basically the Oscars for venture investors) as head of Khosla Ventures. He was an early innovator in the world of computer chips and an early investor in OpenAI, as well as backing successes like DoorDash, Stripe, and Headspace. We unpack Vinod’s investment thesis, his utopic vision for our future, and how he thinks founders in AI and beyond can build a winning strategy. Plus we dive into OpenAI’s plans to restructure, what it means for the company and the AI landscape, with Fortune AI editor Jeremy Kahn.
About Vinod
- Co-founded Sun Microsystems, pioneering open systems & RISC processors
- Founder of Khosla Ventures, backing disruptive tech & climate startups since 2004
- Former general partner at Kleiner Perkins, pivotal in Nexgen & Juniper Networks growth
- Board member, Breakthrough Energy Ventures, advancing climate solutions
- Founding board member, Indian School of Business
Table of Contents:
- Why OpenAI’s unusual structure matters in the AI race
- What OpenAI’s global push says about the competitive landscape
- How role models and ambition shaped Vinod Khosla’s path
- Why startup success depends on designing the right team for risk
- How first principles led to an early conviction in OpenAI
- Why AI workers may matter more than AI copilots
- What it will take to turn AI abundance into shared prosperity
- How long term investors ignore noise and back world changing ideas
- How AI could free people to pursue meaning over survival
- Episode Takeaways
Transcript:
Investing in the age of AI, part 1: Vinod Khosla
VINOD KHOSLA: My life is mostly about an addiction to learning new things. Somebody comes, teaches me about fusion, somebody else teaches me about cell therapy or AI. It’s a wonderful life if you can just keep learning.
KHOSLA: Why does the market in 2025 affect my thinking? It doesn’t. I don’t care about tariffs. I don’t care about what the stock market is doing. I look at the stock market probably once a month or five minutes, that’s all. I’ve learned everything’s possible, flaky as that sounds, if you approach it the right way.
RANA EL KALIOUBY: Vinod Khosla is one of the leading investors and technologists of our time. In fact, he’s been consistently on the Forbes’ Midas List – which is like the Oscars for venture investors. He was an early innovator in the world of computer chips and has since made his name investing in some of the most successful companies like DoorDash, Stripe, and Headspace. Notably, he was an early investor in OpenAI.
For the next two episodes we’re diving into questions around investing in the age of AI. And this week we’re kicking off with Vinod. We’ll get into his investment thesis, his utopic vision for the future, and how founders can build a winning strategy.
I’m Rana el Kaliouby and this is Pioneers of AI – a podcast taking you behind-the-scenes of the AI revolution.
[THEME MUSIC]
EL KALIOUBY: But before we get into my conversation with Vinod – I want to take us to a story that’s been making headlines this past week. If you haven’t heard, there’s been some changes at OpenAI, including an announcement for a new business restructure and a push for further global reach. Here to help us break it down is Jeremy Kahn – he’s the AI editor at Fortune. Hi Jeremy! It’s great to have you on the show again.
JEREMY KAHN: Oh, it’s great to be back, Rana.
Why OpenAI’s unusual structure matters in the AI race
EL KALIOUBY: So we invited you on the show today to talk about some news related to OpenAI and specifically their business structure. So I wanna start there. What exactly is their structure? Are they a for-profit? Are they a non-profit? And has anything actually changed over the past week?
KAHN: So I’ll address the second bit first.
Nothing has changed yet. But what they are, and it’s a little bit confusing, is they’re a nonprofit entity that owns a for-profit arm, the OpenAI company. All the employees, they all work for the for-profit arm. But the for-profit arm is controlled by a nonprofit, and the nonprofit has a board, and that board is ostensibly in control of the whole company.
Now they would like to restructure that, and have been talking about restructuring that for some time. They want to convert the for-profit company into a public benefit corporation, which is also a for-profit company, but a particular type of for-profit company where the board of directors has a kind of dual fiduciary duty.
They have a fiduciary duty to shareholders to make profit, but they also have a fiduciary duty to some sort of social cause that can be set by the charter of the organization or by the board, to serve the public good in some way. For a long time though, they were saying that what they wanted to do was have the for-profit, the public benefit corporation, not be controlled by the nonprofit. And what they announced last week is that they are abandoning efforts to have the for-profit entity escape the control of the nonprofit entity. They’ve said they still are gonna convert to a public benefit corporation, but that the nonprofit will have a controlling stake in the public benefit corporation. You might look at that and say that’s good for all of us. I think if you’re an investor in the for-profit arm of OpenAI, you might be less thrilled about that outcome, because there is some tension there between the things you might do to make sure that this technology benefits everyone and the things you might do to maximize profit for investors.
EL KALIOUBY: Yeah, one thing that is interesting about the structure is that OpenAI’s board members are mostly independent directors, and they do not have an equity stake in OpenAI, including Sam Altman, who only has a stake through his investment via Y Combinator. That is very uncommon and unusual. If you think of tech companies, the majority of the board is usually investors.
KAHN: Yeah. So the structure has been very unusual and one of the things that they’re trying to do with this conversion to a public benefit corporation is actually make the structure more like other companies. So you’re right, now Sam Altman has no equity stake and the other investors in the for-profit arm like Microsoft also don’t have traditional equity, but they do have these other things called profit participation units that give them a right to a certain amount of the for-profit company’s profits, if it makes any. You have to remember that OpenAI is losing a tremendous amount of money right now, but if it ever makes any money, those investors are entitled to a share of the profits up to a certain capped threshold. What’s going to happen in this conversion, if it actually takes place to a public benefit corporation, is that the investors would get traditional equity. If the company’s value went up a thousand times, the investors can make a thousand times their initial investment. But also if it declines, they would lose that amount. So it’s much more of a traditional equity structure.
EL KALIOUBY: So fascinating. So Elon Musk has been one of the most vocal voices against an OpenAI restructure, and he’s actually suing OpenAI. He’s been suing them for a while now. Any updates on that?
KAHN: Yeah, so that lawsuit is ongoing. Elon Musk’s lawyers came out and said this didn’t really change anything about their lawsuit. They still feel like this is an attempt to ultimately erode, over time, the control that the nonprofit would have, and that this is not in the public interest.
Even if this public benefit corporation setup goes ahead, they are continuing to pursue their lawsuit.
We’ll see.
EL KALIOUBY: Yeah. So OpenAI’s mission statement is to ensure that artificial general intelligence benefits all of humanity. I personally think that you can marry purpose and profit, but I wonder if there are any concerns about maintaining this mission in the current structure or if they convert to the public benefit structure as well.
KAHN: Yes. So look, there are a lot of concerns around how exactly you would marry these things in either construct. The big concern with a nonprofit has been does the nonprofit care enough about the kind of for-profit mission, or is it just focused on the kind of research mission? And then it’s always a question of like, who is on that board? So the nonprofit board initially had a lot of people on it who came from sort of the AI safety community. That was the case up until the board’s attempt to fire Sam Altman in November 2023.
And then of course he was rehired, and as part of the negotiations that led to him being rehired after the one weekend that he was ousted, they replaced a lot of people on the nonprofit board. And now you have people on the nonprofit board like Larry Summers and Bret Taylor, who come from much more of a traditional board background and probably have a balance of sort of the for-profit concerns and the public benefit concerns. Now, the new structure also has potential issues. They’ve said that the nonprofit will initially have this controlling stake, but they have not said that over time, if they continue to raise more venture funding, that that stake won’t be diluted. So yeah, I think there are still some concerns even with this announcement about what the future will actually be.
What OpenAI’s global push says about the competitive landscape
EL KALIOUBY: Yeah. Okay, so in other OpenAI news, there’s an ongoing global race for AI and OpenAI announced that they have a new global initiative to partner with countries to build data centers for localized versions of ChatGPT.
And Sam Altman actually spoke at a Senate hearing last week saying that mandating government approval for AI software would be disastrous for the United States. So where does OpenAI stand in this global race for AI?
KAHN: I mean, I guess it depends how you judge the race, but if you judge it by sort of the broad capabilities of frontier models, then OpenAI remains sort of at the front of this race. Unlike when they came out with ChatGPT in late November 2022, where they had this indisputable lead that looked somewhat substantial, while I would still say they’re sort of at the forefront of that race, they are certainly joined up there in a kind of lead grouping with a lot of other companies, including Google and Anthropic. And then of course we have these several Chinese companies that have done some very interesting things, including DeepSeek. So I’d say they’re still in that lead pack, but they’re not sort of the undisputed leader that they once were, nor do they seem to have a very substantial lead if they even have one.
EL KALIOUBY: Yeah. So it sounds like we will continue to watch this space closely.
KAHN: Absolutely. It’s gonna be fascinating.
EL KALIOUBY: All right, Jeremy, that’s a wrap for now. Thank you for joining us.
KAHN: Thank you so much for having me.
EL KALIOUBY: Okay, after a short break – we’ll dive into my conversation with Vinod Khosla on all things investing in the age of AI. Stay with us.
[AD BREAK]
EL KALIOUBY: Hi Vinod. Welcome to Pioneers of AI. It is a real honor to have you on the show.
KHOSLA: It’s great to be here.
How role models and ambition shaped Vinod Khosla’s path
EL KALIOUBY: All right, so let’s dive right in. I want to quickly go over your origin story. So you grew up in a middle class family with really no connections to business or technology, but then you went on to co-found Sun Microsystems, and it took a very novel approach to the chip industry and you did really well.
My first question, I’m curious, what was your biggest learning from that experience and what advice would you give yourself back then?
KHOSLA: First, I was always interested in science and then technology following that. But what really was pivotal for me was hearing about Andy Grove, a Hungarian immigrant to the US, starting Intel. So I think this idea of role models is very, very important, and I realized how powerful they can be. Just reading about him as a 16-year-old kid really influenced me. And I do think it’s pretty important, maybe the most important lesson. It’s also true that I like to say most people are limited, not by what they can do, but by what they think they can do. And so you can do a lot more if you just assume you can do a lot more, and that’s been the story of my life.
EL KALIOUBY: So how was the experience of starting and exiting companies, how did that influence your investment philosophy once you kind of switched to the dark side?
KHOSLA: Well first, I don’t think it’s a dark side. It can be the dark side if you take an investment approach. My approach is very different. In the 40 years I’ve been doing this, I’ve never called myself an investor. I would say I’m a venture assistant assisting entrepreneurs, building their companies. And what I learned is most of what you run into, you not only didn’t know when you started, you didn’t know you didn’t know. And so I applied that in helping entrepreneurs build their companies and think about it the right way. Most entrepreneurs are experts in one area. They know one thing, maybe they know AI research, but then they have no idea what a CFO does or what marketing really is. And so bringing this broad thinking to an entrepreneur and complementing them and helping them build the right teams becomes a critical part of what you do.
And that’s one very important lesson from having done it myself. The second lesson is I am much more comfortable with ambiguity. So most startups, most big startups happen in areas that don’t exist. Think of Twitter. When Twitter started, who knew what it was or what it could do?
It’s hard to define and I’m much more comfortable with ambiguity and I think investors require a lot more, and I’m much more comfortable with the discovery process. So one, this ambiguity thing. And the other is, how do you build a team to go after a venture and how to de-risk it? I would say two simple things. One, you have to engineer the gene pool of a startup to the key risks you’re likely to face. The second thing I fervently believe is the team you build is the company you build, not the plan you make, because the plan can change and the plan will change if you have a great team.
But the team you build is what will end up determining what your plan is. So a couple of very important lessons I learned from starting my own company. I find it hilarious, the people I see giving advice to an entrepreneur on how to build a company when they’ve never built a company.
Why startup success depends on designing the right team for risk
EL KALIOUBY: Yeah, absolutely. There’s a lot of shared experiences when you’ve walked down that path and a lot of learned lessons. I’m so curious about engineering the gene pool of a startup. Can you give us an example?
KHOSLA: Well, so when you’re doing the startup, you wanna clearly define risks that you’re going to face, clearly define the opportunities also in that startup. But the thing that will cause you to fail are the risks. The weakest link in that chain and one weak link can destroy your startup. So what you do is say, what kind of background de-risks that risk? And so my process looks something like this. Find five companies that have worked on that particular risk. At each company, find three or five people who worked on that risk. Now for each of your risks, you have 15 to 25 names, and you go after those people first. You’ll understand those risks better from having interviewed all these people, or chased them as the case may be. And second, you’ve assembled a team specifically geared to the risks you might face. And you do that for the known risks.
So I hate the platitude you often hear of hire great people. I’ve never had somebody come to me and say they’re only hiring bad people.
EL KALIOUBY: Right.
KHOSLA: Right. So what does it mean to hire great people? How do you determine who are great people? That’s a much more important question. That’s useful to an entrepreneur. But unless you’ve lived through hiring and making hiring mistakes, you’re not going to know who the right person is.
EL KALIOUBY: Yeah. Can we talk about what qualities do you look for in a founder that make you excited to invest?
KHOSLA: Well, if I’m looking at even a YC founder, if I’m looking at a YC company, the most important question I ask of the partner at YC is: how much has this person changed or evolved, even over three months? You can tell a lot about what their learning rate is. So learning rate is much more important than absolute knowledge.
Because over the next five years they’ll keep learning if they’re good learners and open-minded learners. If they’re closed and they think they already know the answers and they’re not looking for feedback, they generally will fail to evolve as the world changes. They will fail to adapt as their assumptions turn out to be right or wrong.
The best thing you can do is find where you are wrong in your assumptions as quickly as possible, and slow learners don’t do that, and rapid learners do that really, really well.
How first principles led to an early conviction in OpenAI
EL KALIOUBY: You were one of the early investors in OpenAI. I believe you committed a $50 million check in 2018 and actually made the investment in 2019. It seems so obvious in retrospect, but I’m sure it was not back then. So why did you invest?
KHOSLA: I will tell you first how uncertain it was. It’s a funny story: in the 20 years of Khosla Ventures, it’s the only time I’ve sent an apology letter to our LPs when making an investment, in 2018 or early 2019. I sent an apology letter saying, this makes no sense. It’s a nonprofit, it makes no sense. There’s no product plan, there’s no business plan, there’s no revenue plan. But we are gonna make the investment anyway. I didn’t ask for permission. I was just informing them. I knew it looked ridiculous and many people told me it looked ridiculous. I won’t name names, though it’d be fun if I did. So why did I do that? I have to go back to another story before I get to AI. At Sun in 1982, we adopted TCP/IP, which is the protocol of the internet on which internet communication runs, for your non-technical listeners. And I saw it grow over time. In 1996 I saw we had passed the flat part of the exponential growth curve in TCP/IP. We started a company called Juniper to build a TCP/IP router for the public networks. And every customer I talked to, from AT&T to Verizon to Cingular, every single one, with one exception which was a startup, said they will never use TCP/IP in the public networks.
Never.
From 1996, no major Telco was planning on TCP/IP being the core of the internet, but I saw the exponential. I didn’t follow the experts because they were experts in the previous version of the world.
I said the world will go TCP/IP. I knew the world was gonna change dramatically, which the internet was going to do.
And we’d seen the exponential. I was going by the data, not by what experts were telling me, or what big companies were telling me. And look at exactly what happened as a result. The $3 million investment produced a seven and a half billion dollar return in an era where nobody ever got a billion dollar return. Why? Because we built it and they came. It was belief in your fundamentals and I was looking at the data. Why is it related to your OpenAI question? Year 2000 is the first time I mentioned AI in a public interview with the New York Times in a very vague way. I said, AI will have us redefine what it means to be human.
So I was tracking the data on the progress of AI and the exponential curve. In 2012, a dozen years later, I wrote two blog posts in TechCrunch called Do We Need Doctors? and Do We Need Teachers?
The idea being these two basic human needs – physicians for everybody on the planet and teachers for every kid on the planet – could be served by an AI.
That was 2012. I kept seeing the progress in how AI was catching up to human levels of performance, even when it was far below human levels of performance. And by 2018, when I talked to Sam, I saw so much progress. It was clear we were going to have breakthroughs at some point. Didn’t know whether it was in three years or 10 years. Didn’t matter if the effect was large enough, and I’d seen the effect of the internet being large enough and companies like Google and Amazon created on that platform.
It was clear it was gonna be profitable if I was right, that there would be breakthroughs. They would enable capabilities that we see today. So you sort of build on first principles, not on what experts are saying or what other investors are investing in. You look at the fundamentals.
Why AI workers may matter more than AI copilots
EL KALIOUBY: Yeah. So I wanna come back to this idea that AI will replace jobs, including teachers and doctors. Bring this to life for us. It’s been more than a decade since you published that TechCrunch article. Where are we with that and what does that look like?
KHOSLA: Oh, so there’s two modes in which you can use AI. The most famous mode is the Microsoft copilot for software programmers. And Cursor has done that much better than Microsoft. But the idea that copilots help a human do their job – we generally switched to a different model two years ago. We don’t do copilots as an end goal. We do workers. That’s been our investing philosophy the last two, three years. We switched to building AI workers, not AI copilots. We are making a lot of progress, but in the next five years, most of these AI workers won’t be good enough on their own. So when you get an AI accountant, we sell it as an AI accountant intern. It’s as if a CPA firm had gone to your local community college and hired an accounting major to go do accounting – audit, or close the reconciliation of your books, whatever. In these firms there’s always a senior accountant supervising the junior fresh kids they hire out of college.
Well, they can also supervise our AI interns. And in five years, our AI interns will grow up to be senior interns. And as they get more confidence in these interns, they will let them do more, just like they let humans do more. Same thing with doctors. You don’t let an AI go diagnose and prescribe. In fact, it’s not even legal. But you can give each physician in this country five AI interns who do most of the work, but they get to supervise the interns. The AI interns practice under their license, and so they’re okaying the prescription or the diagnosis or whatever the AI is doing.
EL KALIOUBY: Yeah. I am seeing these new business models where AI companies are charging for AI headcounts, as you said, like an AI healthcare administrator or a therapist. And I’m invested in a company where they literally have names for robots. I picture org charts where the org chart is a combination of humans and AI.
KHOSLA: Yeah. So I first talked about a challenge on Twitter that I retweeted a couple of weeks ago. I said, when will we have the first billion dollar revenue company with only 10 human employees? It could be a thousand AI employees. I think that company will be started now, whether it’s already started or starting in the next year or two. I think that will happen. It’s very, very disruptive to think about.
EL KALIOUBY: Yeah. So let’s talk about robotics and embodied AI next because so far a lot of the AI we’re seeing is mostly 2D. But we’re starting to see a lot more physical intelligence, if you like. You predicted that there would be a billion bipedal robots by 2040 and they would play a larger role than the entire automotive industry. So what would a world with a billion robots look like and what are the applications?
KHOSLA: So two things precisely. What I said is somewhere in the early 2040s, we will get a billion bipedal robots. They will do more work than all of humanity does today – physical work.
Separately I said that the robotic business will be larger than the auto industry is today. But what will it look like? It’s a difficult question. They will be doing assembly line work at Tesla, at General Motors, assembling cars. They will be doing farm work.
Bending over in a hundred degree heat picking lettuce. Now people are terrified that these jobs get displaced and they should be. But I don’t consider the vast majority – more than 50% – of the jobs on this planet to be true jobs. If you are working on an auto assembly line or any assembly line for eight hours a day for 30 years, putting a tire on a car, or picking lettuce for eight hours a day in a hundred degree heat, that’s servitude.
You have to do it because it’s the only way you make a living. Nobody aspires to do that for 30 or 40 years. And so I think we need to eliminate the jobs that are not respectful of human beings. And we get the obvious question of how do these people make money? And I did this paper last November called AI Dystopia or Utopia.
And what I just described is a pretty dystopic vision for most people.
First, I think the next five years from 2025 to 2030 will essentially look like productivity gains, which economists love. In the 2030s the displacement will get so large it’ll affect society and politics in a pretty dramatic way. Remember, capitalism is by permission of democracy, and workers who are displaced can want to take capitalism away. So I do think capitalism will have to adjust pretty dramatically because there will be enough goods and services being produced that we will have to share the benefits. You won’t need traditional labor. You will need capital. But mostly it’ll be an economy driven by ideas, an economy of ideas and innovation. And so we will have to take care of people. Now, good news – I think if this scenario happens, and my 2016 article is when I first described it, I think in Fortune, the idea that AI will cause great abundance, great GDP growth, great productivity growth, and increasing income disparity was my tagline to that article. But I still believe whether we have income disparity or not will be societal, political, and social choices, not technology choices. Technology will enable great productivity.
Most goods will tend to be free. Not quite, but I think in the 2030s we’ll see a hugely deflationary economy, because the increase in the production of goods and services from all these AI technologies leads to great abundance in the 2040s or 2050s. The abundance and utopic aspects will be for everybody to share. I suspect GDP growth will go from 2% to 5%, which will generate enough additional wealth to share.
What it will take to turn AI abundance into shared prosperity
EL KALIOUBY: Yeah. So then what needs to happen politically and socially to ensure that this is equitable and inclusive?
KHOSLA: Medical expertise at least will be free. A surgery may not be free. Cardiac surgery may still be a little bit different, but medical expertise will cost you a dollar a month for every citizen of the planet or a country. Education will cost a dollar a month to serve anybody, not a hundred thousand dollars a year, which is the current cost of education, and each kid will get a personal tutor. So many of these services will be free. Entertainment will tend towards free. Transportation may get 10x cheaper, which we are building – public transit that’s 10x cheaper and better than a personal chauffeur-driven car in every way. No compromises. These are all possible in this world, so we’ll be rich in education, rich in transportation, healthcare. Some things will cost money. Building a house will still cost money.
Though I hope robots do it much cheaper and better, so we will get a deflationary economy there. Just because it’s not fully predictable doesn’t mean it won’t happen. We just don’t know exactly how it’ll happen. The end goal is pretty clear to me, and some countries will adopt it more aggressively and win economically, and the countries that don’t will be laggards.
So most of the dystopic elements of this AI-driven economy and world will be choices societies make. They’re not bound to happen, they’re choices. And in a democracy, I hope voters vote to have the benefits more evenly distributed.
Today, the world is exactly the opposite.
Cut taxes, don’t share, don’t take care of the needy. The MAGA world is very extreme in that. Fire everybody you can. I think there’ll be a better solution, but a lot of it will happen and we’ll have to adjust as it happens.
EL KALIOUBY: I guess at the end of the day, it falls back to leadership. And it’s the humans really. It’s not the technology, it’s what the humans decide to do.
KHOSLA: Humans act in their self-interest. We have to acknowledge that, but even capitalists, if they know they’re subject to democracy, the elected leaders will tend to have more permission to do these things. And yes, there’s things like regulatory capture and things like that, but if the consequences of not being fair are large, we will see very disruptive social behavior. So I hope we learn.
EL KALIOUBY: What are some areas or applications of AI that you feel are still undervalued or underinvested in?
KHOSLA: Well, every kind of worker. Think of any job an AI will be able to do except the jobs that humans wanna watch only humans do. Nobody wants to watch an X Games participant that’s a robot. They wanna see human performance competition. So Olympic games, sports, entertainment – celebrities within entertainment won’t go away.
Our need for status won’t go away. There’s a lot of fundamental human characteristics that won’t go away. Wanting to take care of our children – most parents get stressed about how little time they have to take care of their young kids. You won’t see that pressure. You can spend much more time building relationships and taking care of your kids or your elders.
There’s a lot of things that would be really rewarding to do. I was at the veterinary hospital yesterday, and the people there so loved their job taking care of animals. I would do that personally, even if a robot could do it better. I’d still enjoy doing it. So people will do jobs they want to do, not the ones they don’t wanna do, and the ones they want to do will be fun and pleasurable and don’t have to be the most efficacious way to do something.
How long term investors ignore noise and back world changing ideas
EL KALIOUBY: All right. I do wanna talk about the investing landscape we’re in. The technology’s moving so fast, but also it’s a very uncertain climate. The markets are volatile. There’s a lot of angst and chaos. We’ve kind of talked a little bit about that. How do you think about that as an investor?
KHOSLA: So tell me why that matters. Why is it even relevant? If I make an investment today, it’s going to do something substantial, build something of importance – and I like to say not all hard things are valuable, but most valuable things are hard. Occasionally you get easy paths. If I’m investing today in something, it’s not going to give me a return till 2030 or 2035. Five years is very short to get a return on a new venture investment. 10 years is not unheard of, it’s pretty common. So why does the market in 2025 affect my thinking? It doesn’t. I don’t care about tariffs. I don’t care about what the stock market is doing.
I look at the stock market probably once a month or five minutes, that’s all. I don’t read the Wall Street Journal. I don’t worry about that world. And when people talk about these bubbles and bursts, if you take a longer perspective of creating value, it doesn’t matter.
Now, it’s not strictly true. There’s some exceptions. But the only way I can create value in the 2030s for an investment I’m making in 2025 is by creating fundamental value and then hoping it’ll be valued in 2032. And that’s the only approach you can take. In the true venture business, if you’re trying to make a quick buck, then it matters what sentiment is now and next year and the company gets sold. And if somebody has an exit strategy in their business plan in the first five or ten slides, I almost never continue reading the thing. If they’re thinking exits, I don’t wanna deal with that entrepreneur.
If they’re thinking, here’s the world they want to create, then it’s interesting. I like to say, and I have this bias against experts – experts extrapolate the past.
Entrepreneurs create the world they imagine.
How AI could free people to pursue meaning over survival
EL KALIOUBY: All right, so back to what we talked about at the top of the show, which is AI is redefining what it means to be human. So let’s talk about that. What do you mean by that? And what do you think it means to be human in the age of AI?
KHOSLA: So let’s say you have a 5-year-old kid today and they’re starting school – first grade, kindergarten. What do you say to them? Hey, go to school, study hard, get into a good college, you’ll get a good job. And in the past I’ve always had to tell kids, if you pursue your passion, you may not be able to afford a house for your kids or pay your mortgage. It’s just reality. “Pursue your passion” has been bad advice unless your passion coincides with something that makes you enough money to support your family and support the lifestyle you really want. Now in 2030 or 2035, every parent will be saying, and I will be saying, pursue your passion. Because society’s taken care of most of your basic needs. And so you no longer have to first start with pay your mortgage. You still have to pay it, but there’s so much abundance. You can pursue your passion. I think AI frees humanity and human beings to do what they want, not what they need to do. Why do we have this notion of a starving artist? We won’t have that. And if you’re not the best artist, you can still enjoy art or music or any of these. If you’re not the top 0.1% in these fields today, you’re not gonna make a living. And in this new world of AI, life will be taken care of and you’ll be able to pursue your passion. So AI will free humanity to be human.
EL KALIOUBY: I love that.
KHOSLA: Take care of your kids, or have the relationship you don’t have time for, or the parents that you wanna live with and don’t need to move far away just because your job takes you there. I think it’s a wonderful world.
EL KALIOUBY: I love that. I love your vision for the future, that AI will free humanity to be human. That’s a perfect way to end the show, Vinod, thank you so much for joining us.
KHOSLA: Thank you very much.
EL KALIOUBY: There is so much I am taking away from my conversation with Vinod. But I have two specific takeaways.
One, if you’re an investor, don’t blindly follow what experts are saying – most experts are basically stuck in a previous version of the world. Instead, Vinod follows the data, and backs innovators and entrepreneurs who paint a version of the world that does not yet exist. That’s how you invest early in transformative companies like OpenAI.
Two, AI is and will be disruptive to our social fabric. And while this disruption may be challenging for a period during this transition, if we democratize access to this technology, we can move towards a world of true abundance.
And whether you’re an investor, founder, or none of the above – I think Vinod’s commitment to lifelong learning is inspiring for all of us. It’s one of my core values and it’s great to see someone like Vinod embody that.
Episode Takeaways
- Rana el Kaliouby opens with a timely look at OpenAI, as Fortune’s Jeremy Kahn explains the company’s unusual nonprofit-controlled structure and proposed public benefit conversion.
- Turning to Vinod Khosla, Rana traces how an Andy Grove role model and Khosla’s own founder journey shaped a venture philosophy centered on learning rate, ambiguity, and building the right team.
- Khosla says his OpenAI bet came from following the data, not the crowd, arguing that great investors spot exponential shifts early and back entrepreneurs building a world that does not yet exist.
- He lays out a bold vision of AI workers and eventually robots taking on more routine labor, while warning that the real risk is not the technology itself but how society chooses to distribute its gains.
- And in Khosla’s most expansive view, AI could free people from survival-driven work so they can pursue passion, relationships, and creativity—redefining what it means to be human.