The AI boom wouldn’t have been possible without decades of academic research. And now that huge sums of money are flowing into R&D, it’s companies – not just universities – that are on the cutting edge of AI innovation. What does that mean for the future of AI research and the relationship between industry and academia? Rana sat down with a panel of experts who have a foot in both worlds, in another standout session from the stage of Fortune Brainstorm AI in San Francisco. Anima Anandkumar (Caltech, formerly Nvidia), Daphne Koller (Stanford, founder/CEO of insitro), and Raquel Urtasun (University of Toronto, founder/CEO of Waabi) share how research expertise goes hand in hand with business innovation, especially around AI’s ability to more accurately model the physical world.
About Anima
- Invented Neural Operators; created AI-based weather & climate modeling field
- Built first high-res AI weather model; 10,000x+ faster, used by agencies
- Bren Professor at Caltech; Fellow of IEEE, ACM, and AAAI
- Led AI research at NVIDIA; former Principal Scientist at AWS
- Won TIME 100 Impact Award; ACM Gordon Bell Special Prize
Table of Contents:
- Why AI innovation now depends on academia and industry together
- Rethinking the boundaries between labs, startups and universities
- Why physical AI could unlock the next wave of scientific discovery
- How simulation and synthetic data can make autonomy safer
- Building digital twins to transform drug discovery and development
- Why open source still matters for AI progress
- The case for open data as a force multiplier for research
- Why efficient and sustainable AI may drive the next breakthroughs
- The values that should guide AI in the physical world
- Episode Takeaways
Transcript:
AI’s journey from the lab to the marketplace
If you tuned into last week’s episode, you know that I co-chair the Fortune Brainstorm AI conference in San Francisco. It happens every December, and it’s a great way to wind up the year – with big-picture discussions about the most pressing issues in AI from top leaders in the field.
I get to moderate a few of those, and I’m sharing them here on Pioneers of AI.
This week: my panel conversation about the shifting center of gravity in AI research. In the early days of AI, much of the work was done in academic labs. But now that AI has proven its market value – and SO quickly – industry is spending crazy amounts of money on R&D. So, what does this mean for the role of academic research, and how should universities work with the business world to shape the future of AI?
To explore this, I spoke with three AI leaders at the intersection of academia and industry.
Anima Anandkumar is a professor of computing at the California Institute of Technology. She specializes in using AI for modeling real world events, like weather patterns. Previously, she was principal scientist at Amazon Web Services and senior director of AI research at NVIDIA.
Daphne Koller is the CEO and founder of insitro, an AI drug discovery and development company. She is also a longtime professor of computer science at Stanford.
And Raquel Urtasun is the CEO of Waabi, an AI-driven autonomous trucking company.
Our conversation gets into the relative strengths of industry and academia, how to bring the two worlds closer together, and the AI innovations underpinning the next scientific breakthroughs. Especially AI that understands the physical world.
I’m Rana el Kaliouby and this is Pioneers of AI.
[THEME MUSIC]
Why AI innovation now depends on academia and industry together
RANA EL KALIOUBY: Anima, Daphne and Raquel, welcome to the Fortune stage. Thank you. So I wanna dig right in.
A decade ago, academia was the epicenter of AI breakthroughs, but the power has shifted dramatically. And I wanna share two statistics. 90% of frontier AI models are now released by industry and 70% of new PhDs choose a career in industry over academia.
And so, like me, you’ve all straddled industry and academia in very interesting ways. So I wanna ask you this: has industry officially become the innovation engine for AI? And is that a good thing or something we should worry about? And Daphne, I’m gonna start with you.
DAPHNE: Wow. Okay. I wouldn’t say it’s the innovation engine in the sense that there’s no room for additional academic research, but when we look at where the biggest inventions of the last few years have come from, going back to the Transformer model, they’ve really originated in industrial research. And I would go one step further, in the direction I’m hoping this panel will eventually head: as AI transcends the purely software environment and starts to penetrate the more physical world, it demands interdisciplinary work.
It’s very difficult to build that within an academic environment. The incentives are just not there, nor are the infrastructure and the many, many years of build that are necessary. And so I think that type of innovation is going to come from within industry.
EL KALIOUBY: Anima, what do you think?
ANIMA: As somebody who’s been in AI for more than two decades – and, over the last decade especially, in both industry and academia, first at AWS and then at Nvidia while keeping my Caltech role – when people ask me this, I’ve found it really seamless to be on both sides.
We did a lot of foundational work in AI and science at Caltech, because when I went there it was still the computer vision era – natural language hadn’t even taken off. Right. And here we are tackling some of the hardest challenges in the broader sciences, like scientific discovery.
And so knowing about those challenges, it was really academia that seeded the problems we now see taking off, including weather and climate modeling. We built the first high resolution AI based weather model, but that was in deep collaboration with my teams at Nvidia, Caltech, and Berkeley Lab.
And so AI for science is really about deep collaborations and interdisciplinary work. Right. And I would say maybe not in every university, but Caltech being small and interdisciplinary, I see no barriers. And that’s really helped us go in a way, discover all the hardest challenges, I’d like to say, from quantum realm to cosmological realm.
And we’ve seen AI make a deep impact, from discovering new materials and better control of quantum devices, to simulating black holes and understanding plasma and nuclear fusion. So this breadth of knowledge is in academia, but of course the engineering and the scale are in companies, and so I continue my partnership with Nvidia and our collaborations there.
And so really my goal is to bring the two together. Right. And also, as an advisor to companies like SK Hynix, guiding them on where the next big thing in memory is. And we’ll come to physical AI, but I think this is where it’s not one or the other. It’s not a competition. It’s really a deep collaboration and synergy.
Rethinking the boundaries between labs, startups and universities
EL KALIOUBY: Yeah. Raquel, what do you think? What does industry enable in AI that academia can’t and vice versa? I feel like you have found a really productive model of combining the two.
RAQUEL: Yeah. And I would say that I’m fortunate to be an academic, have spent time in big tech at Uber, and then being a founder over the last almost five years.
And one of the things I learned over there is that we need to reinvent the model of how academia works because, as Daphne was talking about, if you work on physical AI it’s simply impossible to understand what the problems are that haven’t been solved yet in industry, if you are simply in an academic lab with just a few students and some small resources.
So for me, really the collaboration of industry and academia is this next model of education. And one of the things that maybe is less known is that the University of Toronto has been really pioneering this new model, where for the last nine years I’ve been educating all my PhD students, before at Uber, now at Waabi, where they really get to learn what it is to do cutting edge research without losing their freedom in order to really drive innovation and learning through their degree.
And then you end up with the best of both worlds. You understand industry, you can be a professor if you desire to, and then you get to really work on the things that matter.
EL KALIOUBY: We were talking about this backstage because MIT is very strict about these lines. When we spun out of MIT, we basically had to cut a lot of our ties with MIT because of conflict of interest.
DAPHNE: Stanford too. Stanford too.
ANIMA: But it sounds like there are new models where you can kind of assign students these really complex problems to work on for their scientific research that are rooted in real world applications.
DAPHNE: I think it’s time for academia to strongly rethink this barrier between academic research and industrial research and embrace a more porous model, because the kind of successes we heard from Anima and from Raquel should be something that all universities embrace and lean into. It does allow you to get the best of both worlds. Whereas with these very bright lines, the limited resources academia has, the limited ability to create a cross-disciplinary team structure, and the infrastructure constraints really limit what one can do.
But there’s some incredibly smart people in academia that if only aimed in the right direction and given the right resources, I think could contribute hugely and it would also enrich their own research agenda in ways that they’re working on really the most important problems. And I think with the really bright lines, you’re losing that opportunity.
Why physical AI could unlock the next wave of scientific discovery
EL KALIOUBY: A lot of the conversation around AI today is focused on productivity gains. But this conversation is about scientific breakthroughs. And so, Anima, I’ll go to you next.
You are using AI in all sorts of ways, but I wanna dig into one particular application, which is accelerating climate solutions. And you’ve invented this approach, neural operators, to predict, model, and solve for climate events. Can you tell us more?
ANIMA: Yeah, absolutely. This goes back to almost a decade when I started at Caltech, right?
As I mentioned, everybody across campus was very interested in using AI, but they didn’t know how, because they don’t have a lot of data and the problems they’re tackling involve deep scientific knowledge. Right. And that requires modeling the physical world. Weather and climate is one example.
Understanding quantum systems, nuclear fusion, being able to design better medical devices, rockets. So all of this requires not just knowing textbook level math. You can write down the laws of physics, but that’s not so interesting, right? It’s really that ability to simulate and being able to design in the virtual realm and incorporate all of those physical constraints into our AI is a big part of it.
A lot of the recent work applying language models to science says: okay, AI or language models can come up with new ideas. But scientific discovery is, most of the time, not bottlenecked by a lack of ideas. As Daphne was saying, a lot of smart people, a lot of smart ideas, right?
But why doesn’t that see the light of day? Because doing experiments and taking observations in the physical world is so slow and so expensive, and building big instruments is very expensive. If we could reduce that, then to me the productivity gains from language models are minuscule compared to the reduction in R&D costs that could happen if we could put the physical world in a bottle.
And that’s what we are doing by physical AI, to me, is full understanding of that physical world, both in space and time, so in three dimensions with time. So that’s four dimensions. And being able to really get to the detailed physics so we can simulate systems like weather and climate.
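[Editor’s note: for readers curious what the neural operators in Anima’s bio look like in practice, they learn mappings between functions, often by mixing low-frequency Fourier modes with learned weights. Below is a minimal, illustrative sketch of a single Fourier-style spectral layer in plain NumPy. It is not her actual model; the function name and toy weights are invented for illustration.]

```python
import numpy as np

def spectral_layer(u, weights, n_modes):
    """One Fourier-style spectral layer (1-D, illustrative).

    u: real field sampled on a uniform grid, shape (n_points,)
    weights: complex multipliers for the lowest n_modes (learned in training)

    Transform to frequency space, scale the lowest n_modes by the learned
    weights, zero the rest, and transform back to physical space.
    """
    u_hat = np.fft.rfft(u)                          # field -> Fourier coefficients
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = u_hat[:n_modes] * weights   # learned mixing of low modes
    return np.fft.irfft(out_hat, n=len(u))          # back to the physical grid

# Toy check: identity weights on the first 4 modes pass a smooth field through.
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = np.sin(x) + 0.5 * np.cos(2 * x)     # field with energy only in modes 1 and 2
w = np.ones(4, dtype=complex)
v = spectral_layer(u, w, n_modes=4)
print(np.allclose(u, v))                # True: the low modes are preserved
```

In a real neural operator, many such layers, with learned complex weights and pointwise nonlinearities between them, are stacked and trained on simulation or observational data.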
How simulation and synthetic data can make autonomy safer
EL KALIOUBY: Raquel, I wanna go to you next because you are in the autonomous trucking space, which is notoriously challenging, but you have a different approach at Waabi to train, validate, and scale autonomy.
So tell us more. You lean on synthetic data a lot.
RAQUEL: I guess for context, there are two core ideas that make Waabi’s approach very differentiated from the rest of the industry. One is this idea that you can build an end-to-end system, so a single neural network that is capable of reasoning like humans do. So you can really learn with very little data to perform complex tasks like the task of driving, because you’re never gonna see every single situation on the road before deploying. And you can have catastrophic consequences if you’re not able to handle certain situations.
So that’s one core piece of technology. And the second was that you will never observe enough data – and in AI, data is more than half the equation. The idea, when I started Waabi four and a half years ago, was that we could build a simulator that is as realistic as the real world.
And if we can do this, then suddenly we can expose the system to everything that potentially can happen, including unavoidable accidents, without consequences. And then you can learn and be trained mostly on simulation, and then perform really well from day one on public roads.
And just for context, this is totally contrarian to what everybody was building, which is go on the road, crank miles and then maybe later on you will build a simulation system. It turned out to be a great idea. And what we can do is also validate and verify the simulator and prove that driving on simulation is the same as driving in the real world. So now we have all the ingredients so that you have the autonomy that can generalize, you have the simulator that can simulate everything, and then we are ready for deployment. So really exciting.
EL KALIOUBY: Exciting times – building with trust and safety. We’ll be right back with more from our panel conversation after a short break.
[AD BREAK]
Building digital twins to transform drug discovery and development
EL KALIOUBY: Daphne, your work sits at the intersection of AI, biology and medicine, and you’ve also built digital twins of tissues and complex diseases. Tell us more. And I believe 2025 was a really important year for you guys.
DAPHNE: So we are building a platform for really creating a model that allows us to answer the fundamental question in drug discovery and development, which is: if I make this intervention in the human, what is it going to do? What’s it going to do at the cellular level? What’s it going to do at the tissue level?
What’s it going to do at the human level in terms of the clinical outcomes so we can make that prediction in silico, maybe not with a hundred percent accuracy, but with way more accuracy than the current success rates of our industry, where over 90% of drugs that go into the clinic end up failing in clinical trials.
So that’s a low bar to beat, but it’s been incredibly hard to beat. As Raquel correctly said, data is the core of everything in modern-day AI, and the data one needs in order to really make those types of predictions is not going to be found on the internet.
You’re not going to find the cure to ALS by reading more papers, because if people had known that, they would’ve gotten there already. And this disease has no cure. And so what we’ve built is this incredible data factory that brings in massive numbers of cells perturbed with genetic perturbations, perturbed with different types of exposure.
So we can really start to get at causal relationships between genotypes and phenotypes. We also bring in human data where we can start to see among experiments of nature, like every one of us in this room, what is that relationship between genotype and phenotype? And really integrate that together with a generative AI enabled brain to sort of make those predictions holistically. And so that’s really what we’ve been building for the last years, and it fits squarely, similarly to Raquel, in the realm of physical AI, which is a different journey than consumer AI or SaaS AI, which is you have to first get from zero to one. You have to build the basic platform that allows you to get to these capabilities before you can start to sort of prove it out and scale it.
But once you do, you’ve created this incredible competitive moat because everybody else who wants to take that same journey, basically there’s no shortcuts. You have to build the custom hardware. You have to collect the custom data. You have to build the custom models that understand the physical world and causality, and you have to do it all, as Raquel said, without actually killing people in the process, which is something that when bits meet atoms is a real risk.
And so we’ve done that and 2025 was for us a real banner year because not only did our platforms hit escape velocity to the point that every time we turn the crank, more stuff comes out, but we’ve actually started to see these proof points in the context of real drugs that are coming out. Our first drug is heading into the clinic next year.
It’s in the disease called MASH, which is a terrible fatty liver disease. The one that comes soon after that is in ALS, which is Lou Gehrig’s disease. You may or may not be familiar with that disease. Basically, your lifespan from diagnosis is about three to five years and there is no treatment. 70 plus drugs have gone into clinical trials.
Four have been approved. They extend lifespan by maybe a couple of months. So it’s a horrible disease, worse than most cancers, and we feel, based on the data package that we have – albeit preclinical so far – that we have found something that is truly disease modifying. Our partners at BMS agree with that assessment, and so we are really excited to potentially take that forward and help people live, which is incredible. Really the most aspirational goal I think you can have is to help people live a longer, healthier life.
Why open source still matters for AI progress
EL KALIOUBY: Do we have any questions? Yes. There’s a question right there. Please share your name and organization.
RAVI: Hi, my name is Ravi. I am from Kognito, but I used to work for Lucid Motors. So I have two questions. One, this debate about academia and industry. There’s a third leg of this whole innovation equation, and that’s open source. So a lot of innovation that happened started with Linux and Richard Stallman and GPL and whatnot.
But how do we make sure that that third leg also kind of grows? So that innovation, that flywheel keeps growing. So that’s a question, like, what do you think? What is the third leg?
EL KALIOUBY: Open source. Open source. Open source. Okay. So who wants to answer that?
ANIMA: As somebody who spent time at Nvidia, where we built the first large scale, high resolution AI based weather model: AI is able to replace traditional physics-based forecasting, and it’s tens of thousands of times faster, so what used to take a supercomputer can run on a desktop GPU.
But we immediately open sourced it, right? So it’s faster, it’s accurate. And because of that open sourcing in a permissive way with Apache license, startups build on this, and especially many countries in the global south that didn’t have very dedicated weather forecasting stations could use this global model and then fine tune on their own data.
So that has just created this whole swarm of activity in the field. But it started with that open sourcing and it’s been the same principle. A lot of robotic simulation while I was there at Nvidia, we started with open sourcing and now that’s taken off as well. And I think that’s a really important pillar.
There are some companies like Nvidia doing that extensively and I agree. I think both in academia but also national labs, if we can supercharge with more supercomputers, like what’s been announced. So we need compute too, right? It’s not like the open source of the old era where the software engineers by themselves were happy writing and putting it out to the world.
We need to be able to train on large data, open source large models, and of course in China, that’s a big competitive aspect that we need to be doing here as well. It can’t be all done in China, and I think that’s been a good driver to encourage efforts like at Allen Institute and so on. So I really hope we get the resources.
That’s really the primary bottleneck.
The case for open data as a force multiplier for research
EL KALIOUBY: Build on this, because compute is one aspect of it, the open sourcing of the models, but what about the data? Because—
DAPHNE: —thank you for asking, because I was going to comment on exactly that. I think there are insufficient efforts out there to create and curate large amounts of high quality data that can drive the kind of discovery we’re talking about. I will point to what is, to my mind, one of the highlights of this, at least in my industry: the UK Biobank, where the UK government and the Wellcome Trust made a very big investment in creating data that is pretty much open to any researcher for a very modest economic outlay.
The US has finally opened up All of Us, which was a massive multi-year effort here in the US. It’s not quite as rich as the UK Biobank, but still very useful. I think other resources like that are an incredibly high ROI investment for governments and philanthropic institutions to make, because once you open it up and it’s really large and high quality, thousands of flowers bloom. The number of quality papers and insights that came out of just the UK Biobank blows my mind. And I think more such resources, and the Cancer Genome Atlas, more resources like that are absolutely critical, and I wish more governments and philanthropic institutions would fund that.
ANIMA: I just wanna add that our team also used the UK Biobank. We created the first genome scale language model trained on all bacterial and viral genomes. And that whole area too – protein design, enzyme design, predicting new variants of concern – was sparked by what gave us—
DAPHNE: AlphaFold.
Exactly.
EL KALIOUBY: Yes. Yeah. Yes. More questions. Yes. All the way at the back.
ANDY: Hi there. Andy Hawks, Reba Systems, and a recovering physicist. Excellent panel. I’m really curious what this group thinks about where the future of fundamental research and development will occur.
How it will be funded, particularly given the changing landscape of federal funding and the increasing concentration of VC funding in a few very large bets.
Why efficient and sustainable AI may drive the next breakthroughs
EL KALIOUBY: Yeah, exactly. Raquel, do you wanna take that?
RAQUEL: Yeah, sure. So maybe as a Canadian in the room I’ll say that there are opportunities as well for other places to play a role in terms of fundamental research. But it’s an interesting question in terms of, we see a lot of the more is more, and I actually subscribe to the less is more, meaning that a lot of the breakthroughs come when you have spare resources that actually force you to think more.
And I think that as a whole community, we need to spend time building sustainable AI, which means thinking about the use of data, the architecture, the learning algorithms, et cetera – so that instead of it taking as much power to train your model as it takes to power all of New York, we can build technology that the entire world can actually benefit from, right? And I don’t wanna be in a position where somebody has to make a call about which family gets electricity in the winter. I think if we put much more effort into these really efficient models, we will be in a much better place.
ANIMA: I was gonna just add that, yes, hardware is a big part of it. Good to see you, Andy. I know we’ve connected since the early days of Cerebras. So on the hardware equation, being able to innovate not only new kinds of hardware but also its efficiency is gonna be a big part of less is more.
So I just wanted to add that.
The values that should guide AI in the physical world
EL KALIOUBY: Last question, and very quickly, just one word. What’s one principle we must not compromise on as AI becomes more physical and as we kind of prioritize scientific discovery? Daphne.
DAPHNE: Wait, why do I have to start all the time? Okay, I can start. One word, very quickly:
Human wellbeing.
EL KALIOUBY: Love it. Anima?
ANIMA: Innovation. Always pushing the frontier of what’s possible.
RAQUEL: Safety. I always say safety.
EL KALIOUBY: I love it.
EL KALIOUBY: Thank you so much. Thank you.
It felt awesome to share the stage with these three inspiring leaders. As a startup founder and investor who comes from academia myself, I relate to the ways they care about both the research and practical implementation of AI. I believe that those of us who’ve been in this world for decades bring unique perspectives to this inflection point for business and society.
I still have so much that I’d love to explore. Sometimes I dream about going back to get another Ph.D. – yes, really! And yet I also adore the energy and innovation in the startup world.
Luckily, I get to have a foot in both, and I see the future of AI relying on stronger partnerships between these worlds.
Episode Takeaways
- Rana el Kaliouby opens with a big shift in AI: industry now dominates frontier models and hiring, but her panel argues the future depends on tighter academic-industry collaboration, not a winner-take-all split.
- Anima Anandkumar says academia still seeds the hardest scientific questions, while companies bring the engineering muscle and scale needed to turn breakthroughs in climate, fusion, and materials into reality.
- Raquel Urtasun and Daphne Koller make the case for a more porous university model, where students tackle real-world physical AI problems with industry resources without giving up research freedom.
- The conversation then moves from productivity hype to scientific impact, with Anima on physics-aware climate models, Raquel on simulation-first autonomous trucking, and Daphne on AI-built disease models for drug discovery.
- In the audience Q&A, the panel highlights open source, shared data, and efficient computing as essential public goods, before closing on three guiding principles for physical AI: human wellbeing, innovation, and safety.