For years, advances in robotics have lagged behind strides in software. So far, only a few companies seem to have cracked the code of getting robots working in the world at scale. Raffaello D’Andrea is behind several of those success stories. His Kiva Systems was acquired by Amazon to move inventory around warehouses, and his newest firm, Verity, uses AI drones to improve warehouse efficiency and safety. An artist, technologist, and entrepreneur, D’Andrea has also built robots that play soccer and drones that perform alongside some of the biggest musicians in the world. He joins Pioneers of AI to talk about why it’s so hard to build good robots and how AI has opened up so many new possibilities.
About Raffaello
- Co-founded Kiva Systems; acquired by Amazon and became Amazon Robotics
- Member of the National Academy of Engineering; inventor on 100+ patents
- Won the Engelberger Robotics Award and IEEE Robotics and Automation Award
- Led Cornell's Robot Soccer Team to 4 RoboCup world championships
- Co-founded Verity; AI drone inventory tech deployed in 100+ warehouses
Table of Contents:
- A global perspective on building across disciplines
- How robot soccer led to a warehouse revolution
- Why simple robots win when the system design is smart
- The real barriers to scaling embodied AI
- How foundation models are changing robot perception
- Where humanoid robots are promising and where they are overhyped
- Why autonomous drones are a practical form of physical AI
- Turning warehouse data into operational visibility
- Using robotics breakthroughs in live entertainment
- Episode Takeaways
Transcript:
Meet the robots and drones working in warehouses
RANA EL KALIOUBY: As you probably know, this year is the men’s World Cup. But there’s a lesser-known tournament also afoot: the RoboCup.
Imagine autonomous bipedal robots, the size of a five-year-old, teamed up and duking it out on a small soccer field. There’s a ref, a gaggle of fans, and the very human team members watching anxiously from the sidelines.
RAFFAELLO D’ANDREA: The cool thing about the competition, the RoboCup competition, is that once the whistle starts, you press the button. You’re hands off. You can’t do anything. So you gotta watch them. And if they play well, great. If they don’t, then so be it.
EL KALIOUBY: Raffaello D’Andrea is a professor, engineer, artist – and yes – former robot soccer team lead at Cornell. Robo-soccer is one of the more novel projects he’s worked on, but he’s also using AI to transform manufacturing. He’s co-founder of Verity, a company specializing in AI-powered drones. You’ll find them at warehouses for some of the world’s biggest companies.
These drones take on the tedious work of logging inventory — doing it accurately, efficiently, and without the risk of human error.
This is a very timely topic – Physical AI dominated the headlines at this year’s CES – one of the biggest global tech conferences. NVIDIA unveiled new models that make it easier to train AI-powered robots. We saw everything from surgical robots to humanoid robots on factory floors. I believe Physical AI will be one of the areas where we see the biggest innovations, especially when it comes to manufacturing. There’s so much to talk to Raff about, so let’s get into it.
I’m Rana el Kaliouby and this is Pioneers of AI – a podcast taking you behind-the-scenes of the AI revolution.
[THEME MUSIC]
Hi Raff. Welcome to Pioneers of AI. It is so great to have you on the show.
D’ANDREA: Hi Rana. Thanks for having me.
A global perspective on building across disciplines
EL KALIOUBY: So you have such a fascinating background, and I wanna double click on the various things you’ve done in your career. You’re Italian, Canadian and Swiss. How did that mix impact your decision and your career choices?
D’ANDREA: Yeah, I think of myself as a citizen of the world. I was born in Italy, I moved to Canada when I was nine. I moved to the US when I was in my early twenties to do my PhD, and then I was there for a while and then I moved to Switzerland about 15 years ago. I’ve lived in many places and I try to take the best out of every place that I stay.
EL KALIOUBY: You’re also an artist and you’ve created numerous art installations that have also served as research projects. So can you tell us a little bit more about the blind juggling machine and what kind of question were you asking?
D’ANDREA: We wanted to make a machine that could juggle a ball without sensing it at all: without seeing it, hearing it, or touching it, without knowing where the ball was. Can we create a juggling machine? It became an exercise in dynamics and chaos, and it was a lot of fun.
EL KALIOUBY: These juggling machines have a simple design. There’s a large metal plate which balances, then bounces a small red ball. Some of the machines propel the ball up and down – a simple kind of juggling. Others are more dynamic, swinging left to right on a pendulum, passing the red ball to itself back-and-forth.
D’ANDREA: Basically, it can hit a ball — think of a ping pong racket with a ball. You hit it up and down, and with exactly the right shape of the paddle and exactly the right motion, you can create a stable limit cycle where the ball stays on there indefinitely. And what we also discovered is that you could make it chaotic, and use that to transition from one periodic state to another, with chaos in between. So it ended up being chaos as a control mechanism for moving between different dynamical regimes.
So it was a very fundamental research project, but it had a very visual incarnation where you see this thing juggling balls and then all of a sudden it goes chaotic and then it settles down to a completely different pattern and there’s no feedback.
Robotics, AI, however you wanna call it, always involves this feedback loop. But I wanted to challenge myself and say, what can you actually do without any feedback? Can you make something really impactful that doesn’t use any feedback at all? And that’s where the blind juggler came from.
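To make the open-loop idea concrete, here is a toy calculation (our own illustration, not the Blind Juggler's actual design): if impacts happen while the paddle is decelerating, a ball that arrives late meets a slower-moving paddle, bounces lower, and so arrives earlier the next time. The error corrects itself with no sensing at all. All parameters below are made up for illustration.

```python
import math

G = 9.81            # gravity, m/s^2
E = 0.8             # coefficient of restitution (illustrative)
A, W = 0.02, 20.0   # paddle amplitude (m) and frequency (rad/s), illustrative

def paddle_velocity(t):
    # Paddle follows p(t) = A*sin(W*t); for W*t in (0, pi/2) the paddle is
    # still moving up but decelerating.
    return A * W * math.cos(W * t)

def apex_after_bounce(t_impact, v_down):
    """Apex height the ball reaches after bouncing off the moving paddle."""
    vp = paddle_velocity(t_impact)
    # Restitution acts on the relative velocity: v_up - vp = -E * (v_down - vp)
    v_up = vp - E * (v_down - vp)
    return v_up ** 2 / (2 * G)

nominal = 0.4 * math.pi / W           # impact during the deceleration phase
late = nominal + 0.005                # the same ball arriving 5 ms late
print(apex_after_bounce(nominal, -3.0))  # ~0.35 m
print(apex_after_bounce(late, -3.0))     # lower apex -> earlier next impact
```

The late impact yields a lower apex and a shorter flight, so the next impact comes earlier: a self-correcting cycle with no feedback loop anywhere.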
How robot soccer led to a warehouse revolution
EL KALIOUBY: Yeah, that’s a great segue into the robotics conversation. So you’re also a founder and entrepreneur, and before your current company Verity, you started a company called Kiva Systems, which you then sold to Amazon. That basically was the start of Amazon Robotics, back in 2012. You started the company in 2003, so you were super early with bringing robotics to the world. I want you to take us back to that time. What was it like to build a robotics company?
D’ANDREA: It was super exciting. I got to work with Mick and Pete, the other founders, who were spectacular people. If you’re gonna do a startup company, make sure you pick the right people. That’s the most important thing. We had just won the RoboCup championship four times. As a professor, I was on sabbatical at MIT, hanging out with folks at the Media Lab and in the aerospace department. And that’s when I met Mick. He had this vision of using mobile robots in distribution facilities, and he was using our RoboCup videos to convince anybody who would listen that, hey, you could use robots in distribution facilities. So we hit it off. I quit my sabbatical and, along with Pete, we started Kiva — and kind of never looked back. I hired many of my former RoboCup students, and we progressed from there.
EL KALIOUBY: Amazing. Can you tell us about some of the use cases that Kiva tackled?
D’ANDREA: Yeah, the use case was to bring inventory to people in distribution facilities. Before our system, the way people would handle this — like at an Amazon facility — is that they would walk around to fill an order, then go back to their station, put the items in a box, and pack it. That was the process.
So in these distribution facilities, people would have to walk huge amounts to fulfill orders. And the idea behind Kiva was simply to bring the goods to the people instead. We built a whole solution around it and it was successful technically, and then economically it also made a lot of sense for our clients, which is what resulted in the Amazon acquisition.
Why simple robots win when the system design is smart
EL KALIOUBY: I think a lot of people don’t realize that AI and machine learning have been around for a while. I would love for you to share what kind of machine learning was implemented in these robots, and what kind of sensors they had.
D’ANDREA: Let’s talk about the Kiva robots. The Kiva robots had cameras that were used for navigation. We also had some sensors for obstacle detection and obstacle avoidance, and those were pretty much the only sensors we had.
And in terms of actuation, it was again relatively straightforward. It was a two-wheel robot and a mechanism for picking things up and putting them down.
EL KALIOUBY: Like a hand.
D’ANDREA: Not quite. It was like a corkscrew. The nickname was Taaz for Tasmanian Devil.
EL KALIOUBY: Taaz looks like a large, orange, rectangular Roomba. It moves underneath warehouse pods that hold inventory. Using its corkscrew mechanism, it lifts and secures these pods, moving them around the warehouse.
D’ANDREA: Basically, the robot would go underneath the pod. It would have the screw, and we would start turning the screw at the same time that we would spin the robot, so that in inertial space the screw didn’t rotate; it just went up and down. It’s a very clever coordination of the two motions. It would pick things up, and it was very robust, with very few moving parts.
We only had three moving parts in the whole robot and the magic was just creating all the algorithms around it to make it work all the time — never get lost, never hit anything.
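The coordination he describes reduces to one kinematic constraint: the screw's rotation rate in the inertial frame equals the chassis yaw rate plus the screw's rate relative to the chassis, so commanding the chassis to counter-spin at exactly the screw's relative rate cancels the screw's inertial rotation. A minimal sketch of that relationship, with hypothetical names (this is not Kiva's actual control code):

```python
def lift_commands(lift_rate: float, screw_pitch: float) -> tuple[float, float]:
    """Return (screw_rate, chassis_yaw_rate), both in rad/s.

    lift_rate:   desired vertical speed of the pod, m/s
    screw_pitch: vertical travel per radian of screw rotation relative
                 to the chassis, m/rad
    """
    screw_rate = lift_rate / screw_pitch  # relative rate that produces the lift
    chassis_yaw_rate = -screw_rate        # counter-spin: the inertial screw rate
                                          # screw_rate + chassis_yaw_rate is zero,
                                          # so the pod never rotates
    return screw_rate, chassis_yaw_rate
```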
EL KALIOUBY: Did you keep the humans in the loop at all?
D’ANDREA: We did. We would still expect people to be at the packing stations. It’s just that they wouldn’t have to walk around.
They would be sitting there, they would turn around and look up, and there would be a shelf in front of them with a light telling them exactly what to pick. They’d pick it, put it in the order box, and by the time they looked around again, there would be a new mobile robot that brought a new shelf with a new light that would tell them exactly where to pick it. So we removed the extremely low-skilled aspect of walking around a warehouse. Manipulation is something that we take for granted, but it’s a very difficult thing for machines to do.
The real barriers to scaling embodied AI
EL KALIOUBY: Yeah, a lot of robotic companies today are trying to address that. Did you see this new age of embodied AI coming, and how does the world look different today compared to when you started Kiva? If you were starting Kiva today, what would that look like?
D’ANDREA: That’s an interesting question. To answer your first question, embodied AI — I guess that’s what I’ve been doing for I don’t know how many years, just the name changes. But in terms of what’s changed now, there’s certainly a lot more capital available to do it. When I think about it, the Kiva acquisition was $775 million, and I think the total amount that Kiva had raised was less than $40 million. So now, with $40 million, it’s very difficult to do very much. I think the expectations are much higher now.
And I’ve heard many people say this: Kiva started this robotics arms race. But frankly, there haven’t been a lot of other successes. There are actually very few that are these big home runs. And it’s because it’s really hard.
It’s really hard, from a technical perspective, to create something that works all the time. And then the other challenge is finding a market where it actually brings a significant amount of value to what people do.
EL KALIOUBY: And the third consideration I would also add is cost. It sounds like you constrained the problem enough so that these robots were cost effective?
D’ANDREA: You’re absolutely right. I think this is why there’s always this debate between generalist robots versus tailor-made robots. People always like to make the analogy: hey, why can’t robots just be like the laptop? Bill Gates wrote a very famous article many years ago about how he felt robotics was at the same point software had once been, and that the same thing would happen. And of course it didn’t. The field is very fragmented, and there are good reasons for it.
EL KALIOUBY: What do you think the reasons are?
D’ANDREA: When you’re interacting with the physical world, you can’t just abstract the world the way you can with laptops — stuff comes in, I do something, it goes out. It’s very modular with compute. When you’re interacting with the physical world, you have a whole bunch of other constraints that you have to deal with. If you can make something that doesn’t cost very much, but you’re spending all your time doing maintenance on it, it’s a costly solution.
So when I say cost, I mean upfront cost, but also the cost of keeping the thing operating out in the field. I think that if you do things that are tailor-made, you’re going to have a tremendous advantage over something that’s a generalist solution.
EL KALIOUBY: In a minute: The role gen AI plays in the world of robotics and how it’s accelerating innovation. Stay with us.
[AD BREAK]
Welcome back to Pioneers of AI. You can check out this episode on YouTube – where you can watch some of Raffaello D’Andrea’s robots in action.
How foundation models are changing robot perception
What role does generative AI and these foundation models play in the world of robotics?
D’ANDREA: So it depends on what aspect of robotics you’re doing. I can tell you a little bit about how it’s becoming super important in what Verity does. Verity creates systems that allow our clients to operate their warehouses more efficiently, and the way we do that is by collecting real-time data on the facilities using self-flying drones.
The drones fly in the warehouse, collecting data in real time, uploading their findings, combining that data with the client’s data, and then figuring out what is really happening in the client’s warehouse. Visibility is super important, and what we’re finding is that with all of this data we collect, we can actually bring vision-language models to bear to process it and make inferences.
Safety, for example, is very important to our clients. They wanna use our system to do safety inspections for them. You can use these models to actually do a lot of that work.
It’s really making obsolete a lot of the old workflow of: here’s a problem, I’m gonna apply some machine vision techniques, I’m gonna hire a whole bunch of people and productize it. It’s kind of bypassing that whole piece.
EL KALIOUBY: Yeah, I would love to double click on this and slow it down, because I truly think this is very fascinating. I did my PhD basically in computer vision and you’re absolutely right that these vision foundation models are changing all of that. So in the world of computer vision before generative AI, what does it look like? And then how does a vision language model change that?
D’ANDREA: That’s a very big question, but let’s use the specific example that I gave. Right now we can take images and query these engines — whether it’s Claude or Gemini — with the right types of prompts, and if you set up the flow properly, you can ask it questions like, is there a fire extinguisher? And is it of the right type that’s consistent with the way that this facility operates? It can actually do that for you.
Now, how would you have to do that traditionally? First you’d have to figure out what the different types of fire extinguishers are that are allowed to be here, and you’d have to interface with someone. Someone would have to type that in. As opposed to now, you can just query the documentation that exists at a client’s site and it can actually figure that out for you. So that’s already something that you couldn’t do before. And then there’s the part of writing some code to find the fire extinguishers.
So people would try to do shortcuts, like — they’re red, so let’s look for red stuff. And you could start making progress to look for things that are fire extinguishers. But it’s a little bit brittle, because if the lighting is kind of weird one day, then maybe that red doesn’t look red anymore and you’re gonna miss that fire extinguisher. And — I’m just making something up specific to this example — what if orange or green become acceptable? You see what I’m saying? If something changes, you gotta change that whole process.
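For contrast, here is roughly what the brittle "look for red stuff" shortcut looks like in classical computer vision. This is an illustrative OpenCV sketch; the thresholds are exactly the kind of hand-tuned guesses that break when the lighting shifts or the acceptable colors change:

```python
import cv2

def find_red_blobs(image_bgr):
    """Hand-tuned color thresholding: find large red regions in an image."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so two ranges are needed.
    low_reds = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
    high_reds = cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    mask = low_reds | high_reds
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep only blobs large enough to plausibly be an extinguisher; under odd
    # lighting the red shifts, the mask empties, and the extinguisher is missed.
    return [c for c in contours if cv2.contourArea(c) > 500]
```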
So we would get clever people to create algorithms, take shortcuts. That was the whole point — can you make the problem simpler, collapse it so that you can actually create something that works with high probability and doesn’t take you years to develop. The difference is that right now you don’t have to use that cleverness. You can just use these systems that have been trained on huge amounts of data to kind of bypass that step for you.
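The foundation-model version, by contrast, is essentially a prompt. Below is a minimal sketch using the Anthropic Python SDK, since Claude is one of the engines he mentions; the model name, media type, and prompt are illustrative assumptions, not Verity's actual pipeline:

```python
import base64
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def check_fire_extinguisher(image_path: str) -> str:
    with open(image_path, "rb") as f:
        image_b64 = base64.standard_b64encode(f.read()).decode("utf-8")
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",   # illustrative model choice
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64",
                            "media_type": "image/jpeg",
                            "data": image_b64}},
                {"type": "text",
                 "text": "Is there a fire extinguisher in this image? If so, "
                         "what type is it, and is it mounted and unobstructed? "
                         "Answer concisely."},
            ],
        }],
    )
    return message.content[0].text
```

No color thresholds and no task-specific training set: when the definition of an acceptable extinguisher changes, the fix is a prompt edit rather than a re-engineering effort.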
EL KALIOUBY: Yeah, I’ll share an example from my world. At my company Affectiva, we served the automotive industry. One use case: we wanted to detect whether there’s a child seat in the car. It’s a safety application, to be able to flag if a child is left behind.
In the old world, we would essentially have to collect data where there’s a child seat in the car, and then we’d have to account for all the different angles and different car seat models. It was really expensive to train a model to do that.
But to your point, now you can literally upload a picture and it will just tell you if the car seat’s in there or not. So it bypasses that whole process.
Where humanoid robots are promising and where they are overhyped
Alright, I wanna get your perspective on humanoid robots. We are seeing companies like Tesla, Figure AI, Physical Intelligence, you name it, enter that whole space. Where do you think we’re going with that?
D’ANDREA: So the argument goes that most activity revolves around what people do — human labor. So if you create something that has the form factor of a human being, it’s gonna naturally fit into whatever work environment, which of course makes a lot of sense.
So that’s the plus — the market opportunity is tremendous. So what are the challenges? I would say there are two key ones. First, can you create something that is as capable as a human being to do tasks? It doesn’t have to be all tasks, but are there enough tasks that you can create a machine in a reasonable amount of time that could replace what humans can do?
And humans are amazing creatures. We talked about dexterity, we talk about vision and just our ability to interact with the world. But then the second challenge is, can you do that cost effectively? Can you create something that brings economic value because you can do it for low enough cost?
From a market perspective, that makes a lot of sense. From a technology perspective, the field is moving pretty rapidly. I think you could make the case that there are enough low-level tasks that these machines could handle within five years. I think that’s the case.
Where I’m struggling is the economics. I’m struggling with creating something that just simply works, doesn’t cost a lot of money, and doesn’t require maintenance every six hours.
EL KALIOUBY: Yeah, I’m being a little tongue-in-cheek, but I’m excited for a robot that can fold my laundry. Right now it takes forever, because folding is pretty complex if you think about it. And it costs at least $20,000, and I’m like, I’ll just do it myself.
D’ANDREA: That’s right. That’s not a case of low-hanging fruit. It’s not a good first application for these machines; it might be 20 years from now. There are other things that are probably better matches, like being in a grocery store and stocking apples. I think that might actually make a lot of sense. It’s a problem that’s well within reach.
EL KALIOUBY: Yeah, I’m an investor in a company called Chef Robotics. It’s not exactly humanoid, but it’s a pretty versatile robotic arm that does food packaging. It can literally sense whether it’s blueberries that need to get picked up, or Nutella being spread on toast, or whatever.
D’ANDREA: That’s also a great application because, let’s say you create a restaurant theme around it. You can pick the items in your food that are easy for the machines to make. You can have a difficult version and an easy one, and people like it just the same. So pick the easy one.
EL KALIOUBY: Yeah, fascinating. We had MIT Professor Cynthia Breazeal on the show, and we talked a little bit about whether every robot needs to be humanoid. She had a very specific point of view: it depends on the use case. If the use case requires human-like capabilities, sure, but if not, then don’t do that. I’m curious, what’s your point of view?
D’ANDREA: I agree with Cynthia. And I think there are two reasons why you want a humanoid.
One of them is what we already talked about — the human environment is created for human beings. So it’s likely that creating something with a human shape means it will naturally find itself in that environment and can perform the task effectively. The other reason is if there’s an emotional component that needs to be addressed.
People talk about nursing homes and things like that, but it could also be a receptionist at a hotel or whatever else — by the way, that wouldn’t work for me, but I’m just making a general statement. I’d rather just have a screen. But there are different reasons.
I think it goes to what we said earlier. If there’s a big enough market for a task and the task is relatively well structured, you’re gonna do better with a purpose-built machine. It’s gonna be cheaper.
EL KALIOUBY: It’s gonna be cheaper. Yep. We also had Vinod Khosla on the show, and he predicted that we would have a billion bipedal robots by 2040. Do you agree?
D’ANDREA: I disagree. I don’t think the economics are going to get there. I think we are going to see impressive growth. We’re going to see hype and then the trough of disillusionment. And that takes a significant amount of time.
I think we’re just at the early days, not even close to the peak. But there is gonna be a peak and there’s gonna be this drop into disillusionment, which will probably take us right around 2040. And I think the reason we’re gonna have this drop is people are gonna realize that it is hard to make these things economically viable.
EL KALIOUBY: Yeah.
D’ANDREA: At that scale. At a billion scale. At a scale of a million, or a hundred thousand — different story, different use cases — it will make sense.
Why autonomous drones are a practical form of physical AI
EL KALIOUBY: When we come back, Raff will take us behind the scenes at his autonomous drone company, Verity. Stay with us.
[AD BREAK]
EL KALIOUBY: All right, so let’s talk about your company Verity. What made you go from robots to drones?
D’ANDREA: Well, to me, drones are robots. It’s all the same. In fact, whether it’s something that flies or something on the ground — we have some projects in the water — I’m interested in creating things that have agency. That’s true in my research and my work as an entrepreneur, but also in the art that I’ve created.
I’ve been doing drones for 25 years. We had the first autonomous indoor drone at Cornell University back in 2000. It was a monstrosity and it barely hovered, but nonetheless I’ve been interested in it for a long time, and I kind of put all of that on hold to do Kiva.
When I moved to Switzerland, that’s when I started a large research program around autonomy and drones. It was just fertile territory. No one was really doing it, just like hardly anybody was doing mobile robots when I got into it in ’97, ’98. It just seemed like there were a lot of interesting problems that could be addressed and it was just fascinating for me.
EL KALIOUBY: Yeah. What are some of the use cases the company is tackling?
D’ANDREA: Verity is very much focused on information gathering. Our system comprises two parts. The first is the platform, which gathers the data, and then there’s the whole analytics engine that acts on the data, which is based in the cloud. The platform part — which is what most people would think of as robotics — is a system in which drones fly fully autonomously in warehouses and collect data.
3D depth data, 2D image data, RFID data. We process as much of this data as possible locally, on the edge, and then upload our findings to the cloud. The cloud is what creates essentially a digital twin of the facility, which it uses to make recommendations to our clients about how to better operate their warehouse.
The key point about it is that it’s fully autonomous. Our clients don’t care about drones. They don’t even know that they have drones in their warehouse. No one touches them. They go years before there’s a maintenance event. We’ll be close to 200 live sites by the end of the year — we have over a couple hundred booked — and they’re operating globally. The reason it’s working economically is that these things don’t require maintenance. They just work all the time.
Turning warehouse data into operational visibility
EL KALIOUBY: Yeah, that’s incredible. I love the idea of taking all this information and creating a digital twin of the space. How do the people who work at the warehouse consume that digital twin? What does the interaction look like?
D’ANDREA: So, examples of things you can do: the simplest one is just doing inventory. Right now, folks have various processes for making sure that their warehouse is stocked enough to fulfill its downstream consumption, whether it’s retail or a 3PL, a third-party logistics provider that interfaces with other businesses.
A system like ours can give them full visibility into what their actual inventory in the warehouse is right now. Today, the way they try to infer inventory is by recording all the ins and outs of the warehouse and all of the information that people provide: yes, I put it in this location, and I verify that it’s there.
Errors are made and these errors just accumulate. To mitigate that, people do things like cycle counting — four times a year they will go and inspect the whole warehouse to find out where everything is. It’s amazing. They find things that have gone missing, things that are in the wrong location.
So the easiest use case with our system is that because you’re doing inventory every night, you know exactly where everything is. So you never run into those issues. You never run into stockouts, never run into lost sales. You don’t run into write-offs. You make operations much more efficient because when something is missing, you don’t have to spend all these hours trying to find it in the facility.
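The nightly win he describes boils down to diffing two maps from storage location to item ID: what the warehouse management system (WMS) believes, and what the drones actually observed. A toy sketch with invented data shapes:

```python
def reconcile(wms: dict[str, str], scanned: dict[str, str]) -> list[tuple]:
    """Compare WMS records against last night's drone scan.

    Both maps go from storage location -> pallet/item ID ("" means empty).
    Returns (location, issue, detail) tuples to investigate.
    """
    issues = []
    for loc, expected in wms.items():
        observed = scanned.get(loc, "")
        if expected and not observed:
            issues.append((loc, "missing", expected))
        elif expected and observed != expected:
            issues.append((loc, "mismatch", f"{expected} != {observed}"))
    for loc, observed in scanned.items():
        if observed and not wms.get(loc, ""):
            issues.append((loc, "unexpected", observed))
    return issues

# Example: pallet P42 was recorded in A-01 but actually scanned in A-07.
print(reconcile({"A-01": "P42", "A-07": ""}, {"A-01": "", "A-07": "P42"}))
# [('A-01', 'missing', 'P42'), ('A-07', 'unexpected', 'P42')]
```

Run every night, the discrepancy list stays short and actionable, instead of accumulating until a quarterly cycle count.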
EL KALIOUBY: Yeah. You also talked about safety, right? So if something is incorrectly stocked, it could flag that.
D’ANDREA: Exactly. Are things being stored properly? Is the racking safe for folks that are nearby? Do you have your safety exits? Are they secured? Eventually you could do things like security with a system like ours. In fact, people are already exploring these use cases.
It’s really about giving you full visibility in these very large spaces that can be over a million square feet — to augment all of this other information that is being used to run a system.
Using robotics breakthroughs in live entertainment
EL KALIOUBY: Yeah, absolutely. Now it sounds like you also explore some use cases at Verity in the art space. Tell us about what that application looks like.
D’ANDREA: Yeah, sure. When we started Verity in 2014, we were in stealth mode for two years. That same year we did two things. One of them was a proof of concept at a Walmart warehouse where we showed that autonomous drones could be used to do the things that we just talked about.
What we discovered from that demonstrator was that we were years away from creating technology good enough for this application, and that’s what we started working on. At the same time, we did a show on Broadway with Cirque du Soleil, where eight drones carrying lampshades came to life: lampshades that all of a sudden took off during the show and performed above the performers for a five-minute segment. And what we found at Verity was that we could monetize our capabilities in live events and entertainment.
And that’s what we did. We still have a small group of people that does live events. Our systems have toured with Celine Dion, Metallica, Justin Bieber, Drake. We’re on more than 10 cruise ships. We’ve done a lot of one-time events, like a world record number of drones with British Telecom — we did that in the UK several years ago. And we’re in Michael Jackson One, part of the show where these drones create a lighting effect with the performers, with the music, that just adds to the spectacle.
EL KALIOUBY: It is so cool. Very, very exciting.
Last question, and it’s one that I think a lot about and have asked all my guests on the show. What do you think it means to be human in the age of AI?
D’ANDREA: Yeah, so I think what it means to be human in the age of AI is just to continue to do what we’ve always done, which is cherish community, cherish relationships, and see these as tools that can enrich our lives and make us more productive. But at the end of the day, it’s kind of funny when you really take a step back.
Why are we always so focused on productivity and the bottom line and efficiency? We have all of the tools in place to make heaven on earth. We absolutely do, and technology could play a big part of that. But we choose not to. And there are reasons why: we don’t have a centralized government that can impose it; it’s distributed entities that have their own decision processes and democracies and whatever else.
To me, the more relevant question is: how do we organize ourselves socially so that we can make better use of these amazing capabilities, which could make our lives so much richer, so much more productive, and so much more enjoyable?
EL KALIOUBY: Well, thank you so much for joining us on the show, Raff. This was great.
D’ANDREA: Thank you. Thank you for having me.
EL KALIOUBY: When it comes to physical AI, humanoid robots get a lot of the attention. I get it, they’re cool! But realistically, we’re not likely to see mass use of them for a while.
I’m betting on physical AI in manufacturing, and Raff is, too. There are so many bottlenecks when it comes to getting a product out of the warehouse and into your hands. Drones and robots like the ones Raff is developing can reduce cost while increasing productivity.
I think this is awesome, and I’ve personally invested in companies doubling down on this use case. And if you are building in this space, I’d love to hear from you! Reach out on LinkedIn or Instagram.
Episode Takeaways
- Rana el Kaliouby opens with Raffaello D’Andrea’s unlikely path from RoboCup soccer to real-world robotics, tracing how playful experiments can lead to serious industrial innovation.
- Raff explains how his blind juggling machine and Kiva Systems both came from the same instinct: simplify the problem, design elegantly, and make robots reliable enough to deliver real value.
- On embodied AI, he makes a grounded case that robotics breakthroughs are not just about capability, but about economics, maintenance, and building purpose-built machines that truly work in the field.
- Raff says generative AI is transforming warehouse robotics by letting Verity’s drones pair visual data with foundation models to handle tasks like safety checks that once required brittle custom vision systems.
- The episode closes on a pragmatic vision for physical AI: humanoids may be coming, but the near-term winners are autonomous drones and specialized robots that quietly make warehouses safer, smarter, and more efficient.