AI isn’t here to take over. It’s here to empower. That’s the vision Kanjun Qiu, co-founder of Imbue, is working to make a reality. In this episode, Qiu challenges the dominant AI narrative and shares why the future shouldn’t be about machines running the show, but about humans harnessing AI as a tool for creativity, decision-making, and personal agency. Qiu and host Rana el Kaliouby explore how AI agents can help everyone build their own software, how we can take back control of our data, and why AI can be an extension of our human potential.
About Kanjun
- Co-founded & leads Imbue, valued at $1B in 2022 after raising $230M
- CEO of a unicorn AI lab building generalizable agents that reason and code
- Chief of Staff to Drew Houston; helped scale Dropbox from 300 to 1,200 staff
- Founded Sourceress, ML recruiting startup backed by YC and DFJ
- MIT undergrad & master's in computer science; research at the Media Lab
Table of Contents:
- How MIT shaped a mission to reinvent computing
- Why AI agents should be more than digital assistants
- How higher-level coding could make software creation more accessible
- Why trust and flow matter more than full automation
- Why coding skills will still matter in an AI-powered world
- How human-centered AI can expand creativity instead of replacing people
- What it takes to build trust into AI systems and interfaces
- Why data ownership and interoperability are essential to an open AI future
- Advice for founders and a more human vision for AI
- Episode Takeaways
Transcript:
The future of AI is human-centered, with Kanjun Qiu
KANJUN QIU: There’s a lot of talk about how we’re going to use the AI to do the human’s job. It’s gonna be so smart that it’s going to tell the human what to do. It’s going to tell the human, like, go here, get these materials, go here, get these materials, assemble them in this way. And now you have a nuclear reactor and, like, that’s good. And I really want to, like, topple that story. Like, that’s not a good future.
RANA EL KALIOUBY: That’s Kanjun Qiu – computer scientist and co-founder of the unicorn AI company Imbue. And the future that she wants to see looks a lot different than autonomous AI.
QIU: A good future is one in which the human is in the driver’s seat, and the human is able to make good decisions because of what the AI is helping the human understand, what the AI is helping the human do and execute. And so it’s like, how do we use AI to create that much greater sense of agency, freedom, and power over our lives? And that’s the open problem I’d like to figure out.
EL KALIOUBY: This is the kind of AI future that I want to see, too. It’s a future where humans are at the core of AI innovation. A future where there’s broad access to AI tools so that everyone can benefit from them – imagine if everyone could create their own AI!
Kanjun – in part – is trying to achieve this human-centered future through her company Imbue. They’re building agentic AI. But these aren’t your typical customer service agents – these agents are of a different caliber with the lofty goal to empower everyone.
On this episode I’m talking with Kanjun about how we can achieve a more democratized AI future, her vision of the next stage of the personal computer, and how AI agents can be a force for creativity.
I’m Rana el Kaliouby and this is Pioneers of AI – a podcast taking you behind the scenes of the AI revolution.
[THEME MUSIC]
Before we get into my conversation with Kanjun, I want to talk about some big AI news that dropped last week. Of course I’m talking about DeepSeek.
If you haven’t heard, DeepSeek is a Chinese company that released an open source AI model that rivals the US models from giants like OpenAI. The kicker? They claim to have trained their model for a fraction of the cost.
Plus, because their model is so much smaller than say the default ChatGPT model, it’s a lot more energy efficient, too.
The news rattled the tech industry as well as the stock markets.
At this point, it’s been over a week since this has unfolded, and I have two main takeaways from it all.
First off, any AI product, no matter how cheaply made or how sustainable it is, needs to be built on trust. I’ve been playing around with DeepSeek, and I’ve got to say, it’s pretty great. I like the simple UI – I like how it has visible reasoning so that you can see how it arrives at its answers. I’ve been doing a side-by-side comparison with ChatGPT and the answers are comparable.
But for me personally, I don’t trust the platform enough to input personal information. Chinese laws make it easier for the government to access this kind of data – which honestly worries me. And I don’t think I am alone in being careful. Companies may avoid using a cheaper model, if it means risking their data security. Full stop.
My second takeaway is the big one. And it’s not even really about DeepSeek itself. It’s about how DeepSeek was made. DeepSeek trained their model using a fraction of the compute and investment. They did so by using machine learning techniques like distillation and mixture-of-experts architectures to compress their models and reduce hardware costs.
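For listeners curious what distillation actually means, here is a deliberately tiny, illustrative sketch – not DeepSeek’s method, and the logits and temperature values are made up. The core idea: a smaller “student” model is trained to minimize the divergence between its output distribution and a larger “teacher” model’s softened outputs.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw model scores into probabilities.

    A higher temperature 'softens' the distribution, exposing more of
    the teacher's knowledge about relative likelihoods.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between teacher and student output distributions.

    Minimizing this is the core training signal of distillation: the
    small student is pushed to mimic the large teacher's behavior.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return sum(pt * math.log(pt / ps) for pt, ps in zip(p_teacher, p_student))

teacher = [3.0, 1.0, 0.2]
# A student that perfectly matches the teacher has zero loss;
# a mismatched student has positive loss that training would reduce.
perfect = distillation_loss(teacher, teacher)
mismatched = distillation_loss([0.1, 2.0, 1.0], teacher)
```

In a real system the "student" is a full neural network and this loss is backpropagated over billions of tokens; the sketch only shows the objective being minimized.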
And so my main takeaway is that AI innovation is no longer the exclusive domain of big players. Smaller companies have an opportunity to break through in AI. And the good news is: as AI becomes more efficient and more accessible, its adoption will soar – both by consumers as well as in business. Which means that smaller companies can more easily harness AI. And this all will unlock and accelerate innovation.
Look, I don’t think that last week’s news means an end to the AI race. But what I do think is that VCs and the US government should be investing more into the startup and innovation ecosystem, because this is where cutting-edge technology is born.
I’m going to continue following what happens around DeepSeek. And as an investor, I also have my eyes on companies innovating in this space. If you have thoughts about DeepSeek or how these global companies affect your life, I’d love to hear them. Leave us a voicemail at 6 0 1 – 6 3 3 – 2 4 2 4 or email us at [email protected].
Ok. Now let’s get to my conversation with Kanjun. Because her company is truly at the cutting edge leading the charge in agentic AI. Let’s hear it.
Hi, Kanjun. Thank you for joining Pioneers of AI.
QIU: Thank you, Rana.
How MIT shaped a mission to reinvent computing
EL KALIOUBY: So we both have the MIT connection in common. You did your undergrad and master’s degree at MIT. And you also spent time at the Media Lab, which is where I did my postdoc. What was that experience like? And you were also in the High-Low Tech group – tell us more about that.
QIU: Yeah, MIT was such a fun experience. They always say at MIT, IHTFP stands for either “I have truly found paradise” or “I hate this fucking place,” and I definitely felt both very strongly. But the Media Lab was really such an interesting place.
I think that’s actually where I first got my taste of the power and magic of computing. Imbue as a company, what we’re really trying to do is reinvent computing and re-imbue that power back into computing. And so I really felt the sense, yeah, at the Media Lab that I want to figure out how to make computing more accessible, and Imbue is another variation on how to do that.
Why AI agents should be more than digital assistants
EL KALIOUBY: So let’s talk about Imbue and specifically AI agents, which is one of the focus areas for Imbue. So when I think of AI agents, I think of basically AI tools that can do stuff, can execute tasks on your behalf. On this podcast, we have had guests who are building customer service AI agents. We’ve had guests who are building companies that do voice AI agents to automate healthcare workflows.
What is Imbue’s focus? Like what’s your definition of an AI agent and what are you guys building?
QIU: Yeah. So we started Imbue in 2021, and we started as a research lab. We were very interested in how do you build general agents? We really had this feeling that if you could have a general agent, in the same way that ChatGPT is based on an LLM that is more general and has more general knowledge, you could get your computer to do much more for you, and that would unlock a lot of creativity. And agents are really hot this year.
EL KALIOUBY: Yeah. Oh yeah. Everybody’s doing AI.
QIU: Doing agents. Yeah, everyone’s doing agents. And these days when people talk about agents, it’s often talked about in a very specific way.
People think of agents as this kind of personal assistant that does stuff for you. You tell it what to do, it’s maybe something like Siri or a customer service bot, or it is something quite specific. And then this agent will go and do that task on your behalf. And that’s kind of how people conceptualize agents.
And when we first started Imbue, that’s also how we thought of agents. Perhaps we could build a general thing on your computer that could help you do stuff on your computer. But over time, actually, we learned a lot about what makes agents interesting and powerful. One of the big things we learned, actually, is that delegating a task to an agent is a very difficult thing for a human to do. As a founder, delegation is hard. I have to figure out what to delegate and how.
EL KALIOUBY: You have to trust that the person or the thing you’re delegating to is going to get this job done at least as good as you will, right?
QIU: You’ve actually nailed it. The core issue is trust. What can I trust this agent to be able to do? What can I not trust it to be able to do? And what we found is that it was very hard for people to use a general agent because they were like, I don’t know what I can use it for and what I can’t use it for.
And there was a second piece that really struck me, where I’ve been thinking a lot about how do we have a good future with very powerful AI systems and humans?
There’s been all of this discussion about taking over people’s jobs and AGI killing us all and things like that. It’s something I think about a lot – how do we create a future that’s humane and where technology serves people and not the other way around? And so that actually caused us to rethink what we believed agents to be.
And what we realized is, if you really think about what an agent is, an agent is this intermediary between you and your computer. It’s just a piece of software. It is a system that talks to your computer by asking it to do stuff. And it talks to you either through language or an interface or something like that. And the most general way for it to talk to your computer is by writing code. Because even if you work with a customer service agent, someone had to write the code for what that agent is doing. And so the most general way of thinking about what an agent is, the most powerful way, is not as these vertical personal assistants, but rather as a system that lets you write code, arbitrary code on your computer.
It is essentially a higher level programming language. That’s what agents are, in their most imaginative view of what they can be. And once we realized that, we were like, ah, what AI allows us to do is enable every person to program at a higher level, at a much more intuitive level.
EL KALIOUBY: Imbue is still in its research phase. They don’t have any products available for commercial use – yet.
QIU: Coding right now is like a super detailed task where I’m writing out every single character. AI systems can generate functions, but it’s still very low level. And so we realized, okay, this kind of more compelling, more empowering vision of what an AI agent could be is actually as a system that democratizes coding and democratizes the ability to control your computer and get your computer to do what you want it to do. Right now, we’re like customers of all of these pieces of software that other people built. But I think that in a future where building programs is super easy and cheap, we would build a lot of stuff for ourselves, and we would make our digital built environment very custom in the way that our physical homes are very custom to us.
How higher-level coding could make software creation more accessible
EL KALIOUBY: Yeah. So let’s bring this to life for our listeners. Say I want to write an app – and I really do actually want to do this. Say I want to write an app that pulls data from all of the different wearable sensors I wear, and also perhaps my electronic health records and maybe my latest blood biomarker data, and it’s going to draw all of this data together and then give me health and nutrition and exercise recommendations. So one way to do this is to put my computer science hat on and actually code this, right? I guess with Imbue, I could literally talk to an Imbue agent and say, hey, take my Whoop data and combine it with this data and visualize it in a beautiful graph. Is that the vision? Is that the idea?
QIU: That would be awesome and magical, but would not work.
EL KALIOUBY: Right. We’re not there yet. So unpack that for us.
QIU: Yeah, that’s a great question. So right now, when people think of AI coding systems, they think either GitHub Copilot – I’ll autocomplete your next line of code – or they think of an app builder I can give instructions to, and it’ll make the app for me. But as programmers, as computer scientists, we understand that coding is more than just writing the code. It’s also architecting – what is the data model, what kind of abstractions do I want? And also, it’s about managing changes. I make a commit, now I’m going to make a new feature.
EL KALIOUBY: In coding land, making a commit is basically saving changes to your code.
QIU: And sometimes I need to roll back this feature, because it didn’t work when I was testing it, and I actually need to rethink it.
So what we’re building right now is a tool that allows people to work at a slightly higher level, at the feature level. So maybe I’ve got a code base, or maybe I’m starting from scratch, and I’m writing a feature. So maybe the first feature I would write is integrating with your Oura Ring or something.
And you’re now getting that sensor data. Okay, now you’ve got that data in, I’m going to add a next feature, which is integrating my EHR system.
EL KALIOUBY: EHR system. As in electronic health records. What Kanjun is talking about here is my dream! A way to cohesively integrate my wearable biometric sensors with existing health records from my doctors.
It’s not as simple as dictating what kind of app you want to make to an Imbue agent. But the tools they’re working on can help make building an app like that so much easier.
QIU: Earlier we talked about trust.
It’s this slightly more fine-grained control of the system that gives people trust. Because I’ve used all of the app builders, and the app is not what I wanted it to be. Even if I’m intervening in the middle of it while it’s thinking, it’s still not quite what I want it to be.
Why trust and flow matter more than full automation
EL KALIOUBY: Yeah. And so would it be correct to say that your first set of users are actual software engineering teams and this is helping them be more productive and get to market faster as they’re building their products?
QIU: Yeah, it’s a great question. So I would say our initial users are software engineers or people who know how to read and write code. I’m not a very good software engineer anymore, but I do read and write code, and someone like that. It should help people get things to market faster and build features faster.
I think that’s a really nice piece of it. We’re still in the user testing stage and we’re still testing it out ourselves, and we’ll have an alpha relatively soon.
Actually, the thing I was most surprised about is how much context switching I do when I’m programming and how much this reduces my context switching and keeps me in flow. One of the things I realized is, oh wow, this feels really good to be working at a slightly higher level. I don’t have to be jumping around the code base so much. And maybe this is what programming can feel like more and more. I was just dealing with all this context switching before and I didn’t even realize that it was kind of painful.
EL KALIOUBY: The analogy that you’ve made me think of: one of the early classes I took as a computer science undergrad was how to code in assembly language, which is this low-level coding language. And, oh my God, it was so arduous. I hated it. And then I learned Pascal and C and Python, and these are all abstractions that make coding more accessible – and you’re taking it even a few more steps.
QIU: Yes, I think the way we think about what we’re building is we’re building the next layer of abstraction on top of programming languages. And that next layer of abstraction is actually in some ways a programming language itself, but it is this AI-enabled programming language.
And one thing that’s been interesting about it is that it’s more than just writing the code. It also has to take into account the sociological process of programming. There’s this sociological process we’ve developed where we write a spec doc for a feature to think through what to build next.
And we figure out the data model. And we make commits. And we do testing. All of these things that make complex software possible – those sociological processes actually are part of how we think about the next layer of abstraction.
EL KALIOUBY: If you’re not a computer scientist or can’t read or write code, this is still a relevant innovation for you. Because it means that the barrier to writing complex code will be a lot lower.
Maybe at this point your spidey senses are going off. Wouldn’t AI that can expertly write code reduce the need for computer scientists?
Well, it’s complicated. While Kanjun wants to see a world where everyone is empowered to write software, she thinks there will still be a need for coders.
We’ll get to why after a short break. Stay with us.
[AD BREAK]
Why coding skills will still matter in an AI-powered world
EL KALIOUBY: All right, so we both studied computer science and spent a lot of time, both at the undergrad and postgrad level, learning how to code. Do you think what you’re building is going to change the demand for computer science as a profession and as a degree? And I ask because my son is almost 16.
He is very tech forward. He’s very interested in tech. But I don’t know if it would make sense for him to major in computer science. So what’s your view?
QIU: Yeah, it’s a great question. I think people who understand how to build software will still have a very big advantage. The tools that we see coming out, the tools that we’re building, they still require you to be somewhat technical. And one of the dreams I have is empowering everyone to be able to write software.
But the reality is that a lot of people are probably just going to use software that other people are making for them, and that’s okay. But I think the power to create software – computing and the digital built environment – is going to be such a dominant thing in our future. It’s already pretty dominant today.
The digital environment is able to do and process so much information. Being able to have creative power in that environment is really powerful. And so I would say studying software engineering or studying how to program – those skills and those concepts are still really important. Learning about algorithms and learning about data structures and learning how to assemble a system and learning systems thinking, these are all still going to be necessary no matter how good the programming tools get.
It’s like if you really wanted to be a painter, you still have to learn how to work with paints, even though someone else is now manufacturing the paint for you.
EL KALIOUBY: Yeah, and I would imagine we are going to continue to innovate on the algorithm side and that’s where we’re still going to need people who are deeply immersed in machine learning to continue to innovate on these models and these approaches.
QIU: Exactly. And for us, one of the things we’re excited about is enabling people to build their own agents. So if you want to build your own email bot that processes email exactly the way you do or something like that, that’s still going to take quite a bit of skill. So it’s still a useful discipline.
How human-centered AI can expand creativity instead of replacing people
EL KALIOUBY: I love your focus on empowering people to write their own software. The way I’m thinking about this is – and you mentioned this earlier – how can we harness AI to unlock human potential? This is something that I’m very passionate about. Can you talk a little bit more about how you see these AI agents empowering people as opposed to taking away jobs or taking away opportunities from humans?
QIU: It’s something I think about a lot as a company. Our mission is explicitly to empower humans. In an age where machines are becoming more and more powerful, we really view these systems as tools for people. They should be built as tools for people. And so we actually think quite a bit about decentralization versus centralization of power.
I think part of why people feel concern around current AI systems – and why I feel concern – is that the increase in their capabilities means those systems are becoming more and more powerful and we don’t feel like we actually have that much control over them and how they will impact our lives.
It feels like something being done to us as opposed to something that we’re doing. And there’s kind of an interesting historical analogy here. In the 1960s, people were really excited about the supercomputer. People thought, the supercomputer is going to be the future, it’s the future of business.
Everyone’s going to be time-sharing on terminals on centralized supercomputers that are going to be really, really powerful, and that’s the future of computing. And then it took a group of people in the 70s, researchers at Xerox PARC, to invent the desktop and the mouse and the GUI and files and folders and all of these primitives that make computing more understandable to us. Because they took a lot of the ideas that we already understand, those concepts you understand, and they imbued them into computing, and that is what enabled the personal computer.
And when the personal computer first came out, people thought, this is a toy. No one’s ever going to use this. It’s not that powerful. But the power really was in how people figured out how to be creative with it and how to build with it. And I think there’s actually something similar with AI, where there is this default centralizing force right now, and that centralization is real.
That centralization of power is real and the default path is that we end up with these entities that are very powerful corporations that have the power of these very large models at their fingertips and they can do stuff with these models. And I think it’s incumbent on us at Imbue and also people building technology to figure out how do we take that power and give it to people so that people can be creative in this new medium.
EL KALIOUBY: I love this analogy, because I often reference this vision of 45, 50 years ago of giving a personal computer to everyone. And the analogy that I’ve been using for the world we live in today is giving everybody access to a personal AI assistant. But I actually love your tweak on it, which is giving everybody access to an AI agent that allows people to express themselves and get things done that they would have otherwise not been able to.
That’s really powerful. I love that.
QIU: Yeah, an assistant is somebody else’s thing that they—
EL KALIOUBY: Right.
QIU: But the true power is – I feel a lot of power over my home. I can add whatever objects I want. I can do construction. I can change things. And that’s what makes you as a person feel like you have power over that environment.
And so, yeah, I think it’s really important, actually, to go away from the assistant analogy and go toward a creative kind of future, like, how do we enable people to create with this?
EL KALIOUBY: You almost need a different word than agent.
QIU: Yeah, agent’s the wrong term. I think in 10 years we won’t be talking about agents at all.
What it takes to build trust into AI systems and interfaces
EL KALIOUBY: Yeah. Okay. So we talked a little bit about trust, and how important trust is to this process, because there’s a trade off between trust and autonomy, right? Like how much autonomy do you give this thing on your behalf to go do stuff?
How do you instill these principles into the work you do, like how does it translate into actual principles or frameworks?
QIU: Yeah, I would say trust is at the core of our product development in a lot of ways, and the question of trust actually drives a lot of our research. So when it comes to writing code, I can trust the code if I know that it’s doing what I wanted it to do as a software engineer.
Usually the way I handle that is either by reading the code or by testing the code. So I’ll write tests for the code. Then I can say, okay, I tested this code. It does – you know what? – it doesn’t have the edge cases that I didn’t want it to have. It does exactly what I expected it to do. And that helps me trust the code.
And so as a company, we do a lot of work on verification of code. How do we effectively verify code so that as a user I can trust it without having to read the code in detail, which for LLM generated code is very arduous. How do I trust that it’s correct? And so that question of trust drives our research direction around verification.
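To make the verification idea concrete, here is a deliberately tiny illustration – the function and tests below are invented for this example, not Imbue’s actual tooling. The point Qiu makes: instead of reading generated code line by line, you pin down its behavior with tests, including the edge cases you care about.

```python
def chunk(items, size):
    """Split a list into consecutive chunks of at most `size` items."""
    if size <= 0:
        raise ValueError("size must be positive")
    return [items[i:i + size] for i in range(0, len(items), size)]

# These tests play the role Qiu describes: they let you trust the code
# by checking its behavior -- normal cases and edge cases alike --
# without auditing every line it contains.
assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
assert chunk([], 3) == []  # empty input is a commonly missed edge case
try:
    chunk([1, 2], 0)       # invalid size must fail loudly, not silently
    assert False, "expected ValueError"
except ValueError:
    pass
```

Automating this kind of verification at scale, so users can trust LLM-generated code without reading it in detail, is the research direction Qiu describes.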
It also drives how we think about the user experience. So from a user experience perspective, autonomy is not the goal. The goal is to have the system be able to do more useful or bigger useful things for you. But I would say we try to get away from delegation and autonomy, and we try to move toward making the interface feel tactile. An autonomous agent inherently is not tactile. I inherently don’t feel like I have that many levers to control what it’s doing. And for us, the question that we always ask ourselves is how do we build more tactility into the interface so that it actually feels like clay.
I can mold what it’s creating.
EL KALIOUBY: Actually, let’s go to that next. Today, a lot of our interfaces with AI are text based, right? But obviously, the way humans interact with one another is based on vision and voice and perception. Do you think the natural interface with AI – or with computers – is going to evolve?
And what will that look like?
QIU: It’s definitely going to evolve. I think a lot of our current interaction with LLM systems are text based, partly because text is actually really useful. We text our friends, we text over Slack. And there will always be some text interfaces to language models.
But I think that as the years go on, we’ll find more and more interesting interfaces.
I can imagine a future where computing is so infused into our environment that we can touch things and they’ll be responsive.
Bret Victor actually has a lot of very interesting work on this, where he’s playing around with how do you compute with objects in your space in a way that feels more intuitive to you, because people are very spatial. I think there’s also something very interesting about voice interfaces now that voice is becoming more interpretable by computers.
Some combination of voice and tactile interfaces – those are things I’m really excited about. If people can use our coding tool to create these kinds of ways of interfacing with computers, I can see a Cambrian explosion of new types of things that people might create on their computer.
Why data ownership and interoperability are essential to an open AI future
EL KALIOUBY: Yeah, super fascinating. Let’s talk about the role of data. Because a lot of these AI tools and agents are very data hungry and data driven. On this podcast, we talk a lot about responsible AI and ethically sourced data. How do you approach data and where do you get the data from? And how do you ensure that it is sourced and used responsibly?
QIU: So when we think about data, we actually think about a different type of data. We don’t think about model training data, which is what most people talk about today, because we’re building an ability for people to create agents. What we’re seeing actually is when I’m making an application – like a piece of software or an agent – I want to use data. Data is actually often the core of the application. I might want to use my own email data. I might want to use my own LinkedIn data. I might want to use data that’s public. I might want to use some news. Maybe I want to summarize the news. Maybe I want to go through my LinkedIn network and browse it and reach out to people that are relevant for what I’m doing right now.
I have a lot of personal data that I want to use. And I also want to use a lot of public data. Right now, all of that data is actually locked up inside big tech companies. They have a lot of incentive to not let us access it, and their argument is, oh, it creates a lot of burden on us as service providers for people to access this data. But it’s our data.
We created it. And so I think actually one of the most important things going into a future where everyone can create software and everyone has more power over computing is actually being able to access our own data, and for data to be more open in that way. For our data to be owned by us and not owned by the service platform.
And something that we care about is interoperability. The ability for us to get data out of the apps that contain it and use it. Have our agents use it. Right now, LinkedIn will block me if I use an agent on my own LinkedIn profile. But these are people I know, this is my network. And so I think as a society and as a tech industry, we actually really need to shift the way that we approach data in order for people to be empowered in this future.
And that is how we think about data. I think the model training data side – on that side, it does feel a little bit weird to me that we created all of this data on the internet. This is also our data. It’s collectively owned. But now models are getting trained on it, and those models are not collectively owned.
So what’s going on there? There’s a future I can imagine that’s not very palatable to a lot of people. And I think this is the default future – where the powerful get more powerful, power centralizes, people who own these large AI systems that are very powerful continue to gain more power and kind of vacuum up, and everyone else is renting the systems from them.
EL KALIOUBY: Yeah, where there’s a number of these companies building these foundation models and then everybody else is using these models, but they don’t necessarily have control.
QIU: Exactly. When I think about that future, I’m like, I’m not sure that’s a future I want to live in. And I’m not sure that’s a future a lot of people would be excited to live in.
A different future I could imagine that could potentially be more compelling is one in which the vast majority of the software we rely on is perhaps in a public commons and software becomes a public good. It’s actually something where we now build tools – maybe Imbue’s tool is one of many tools – that allow us to access, edit, remix and then re-share back into that public commons. And that’s a world in which you potentially could have a much more powerful and open software ecosystem where we’re actually actively participating in creating that ecosystem and contributing to it. So that’s like one potentially positive world.
And it really feels like the digital future needs to be more collectively owned.
EL KALIOUBY: Yeah, I think it’s also really important that we individually as consumers reclaim control over our data. But how do you think we get to that world? Because it’s not at all the world we’re in today.
QIU: Yeah, there was actually some promising legislation recently proposed – and shot down – at the federal level. The ideas behind that interoperability bill were about making it easier for people to get their data out of these walled gardens.
And we would love to support something like that in California.
I actually think this is a place where, as technologists, we often don’t think much about the broader societal and regulatory regime we’re operating in. But with AI, this is a technology that’s truly consequential. We’re building human-level intelligence in a machine, and we actually need to be very thoughtful about it – and, more holistically, about the regulatory landscape we’re heading into and how that landscape can shape a more humane technological future. As technologists, we’ve always thought, oh, we can just play with these toys. But no – we have a responsibility to society to think about these things.
EL KALIOUBY: I absolutely love this because I am very passionate about this idea of human-centric AI, where we are not just building the technology but really thinking about the cultural, societal, economic, and political implications of the things we’re building. And I absolutely agree with you – as innovators in this space, it’s incumbent on us to think about the fact that I’m going to build this technology, it’s going to scale, and millions of people are going to use it.
What are the implications, individually and collectively? We don’t spend nearly enough time thinking about that.
QIU: Yeah, I totally agree. And also, what is the regulatory environment that would help protect people, given this technology? At some point we regulated seatbelts into cars, and that was important for protecting people. Right now there’s a big push toward no regulation – and no regulation means we get the default path.
EL KALIOUBY: Yeah, I’m a big proponent of thoughtful regulation. I don’t think we should kill these amazing technologies, but I absolutely agree with you.
QIU: Yeah, thoughtful regulation. I like that.
EL KALIOUBY: We need to take a short break. But when we come back, Kanjun gives some solid gold advice for entrepreneurs getting started. Stay with us.
[AD BREAK]
Advice for founders and a more human vision for AI
EL KALIOUBY: So I want to switch focus to your journey and your experience as a founder. You are one of the very few women-led AI unicorn companies. What was your experience like, and how can we get more women to be part of this AI revolution?
QIU: Maybe the one piece of advice I’d give, especially to female founders – and there are a lot more female founders in AI these days, by the way, which I’m really happy to see – is that self-belief is a self-fulfilling prophecy. Women tend to be very realistic and honest about the risks, but investors want to see the opportunity. So instead of focusing on the problems, focus on the opportunity and how big it is. That’s maybe the one piece of advice I’d give all founders.
EL KALIOUBY: I love it. We should print it and hang it on our walls. That’s awesome. So I spend a lot of time thinking about what makes us human in this age of AI.
QIU: This to me is the core problem Imbue is working on: how do we build a future that is human-centered? I think that’s a really important frame shift – instead of the AI making decisions while we don’t know what’s going on, it really should be the AI empowering us to understand better what’s going on. I can see that potential. I can see AI systems that teach kids far more effectively than what we have today. That teach us, that help us as executives or individual contributors understand what’s going on in a much more effective, digestible way. I see all of that potential, and I think part of what needs to change is the narrative around what AI is for. AI is for people; it’s not for automation.
EL KALIOUBY: Well, Kanjun, that was fascinating. Thank you for joining us on the show.
QIU: Thank you so much. That was really fun, Rana.
Episode Takeaways
- Rana el Kaliouby opens with the DeepSeek shakeup, arguing that lower-cost AI is exciting, but trust and data security still matter just as much as performance.
- With Imbue co-founder Kanjun Qiu, the conversation shifts from AI as an autonomous assistant to AI as a human-centered tool that helps people create and stay in control.
- Kanjun reframes agents as the next layer of programming, where AI helps users build software feature by feature, making coding more accessible without removing the need for technical skill.
- The two explore a bigger vision for computing, one shaped by trust, tactile interfaces, and user-owned data, so AI expands human agency instead of concentrating power in a few platforms.
- By the end, Kanjun connects that philosophy to founders and the future of work, urging more self-belief from entrepreneurs and a broader shift toward AI that empowers people, not just automation.