We all know what it’s like to call a medical office and wait on hold, just to provide basic information that only takes a few minutes once someone picks up. Medical workers face this, too, spending hours on calls to share patient details or billing codes. Elad Ferber, co-founder and CEO of Synthpop, is deploying AI agents to take those calls, using synthetic human voices with AI behind them to reduce administrative tedium in healthcare. Hear a Synthpop agent handle a call and more from Elad on how the technology works, along with the ethics of deploying AI to talk to humans and handle sensitive information.
About Elad
- Co-founder & CEO of Synthpop, building AI agents for healthcare admin workflows
- Launched AI caller making thousands of payer calls daily for medical offices
- Drove call success rates as high as 95% on complex insurance workflows
- Computer engineer applying agentic AI to cut healthcare admin waste
- Backed by early investor Rana el Kaliouby in AI x health
Table of Contents:
- Why healthcare administration is ripe for AI automation
- How AI agents move beyond chat to get real work done
- What makes an AI insurance caller actually work
- The trust and guardrails behind a patient facing AI agent
- Why building natural voice agents is harder than it looks
- How structured agentic flows create reliable outcomes
- What AI automation means for healthcare jobs and human expertise
- Why disclosure and ROI shape responsible AI adoption
- Where consumer AI agents may fit into everyday life
- Episode Takeaways
Transcript:
Elad Ferber wants AI agents to answer the call
RANA EL KALIOUBY: You know this sound. It’s the soundtrack to every phone call you wish you didn’t have to make — to the car insurance company, or to that medical specialist you need to send your kid to.
Yes, it’s really still part of life – in 2024!
ELAD FERBER: A lot of the default communication channels are voice and fax. It feels antiquated. But actually, that’s the world we live in today.
EL KALIOUBY: Elad Ferber is a computer engineer who’s applying the power of AI to one specific area – the healthcare system.
He’s co-founder and CEO of Synthpop, a company developing AI agents that, say, let your doctor’s office talk to your insurance company.
Right now, administrators spend hours of their day on the phone verifying basic information, and even faxing documents. The goal of Synthpop is to save medical offices time and money. Some studies show that the healthcare industry is wasting over 200 billion dollars on administrative tasks.
This is where AI comes in!
FERBER: We actually come to the healthcare system, not trying to change it per se, but you know what, we accept you as you are, we’re just going to streamline processes with your current modalities and with your current problems.
EL KALIOUBY: Today we’re talking with Elad about his company, and specifically about one of the coolest AI agents I’ve seen to date. We’ll dive into how agents like this can save valuable time, the ethics of an agentic AI future, and just how this application of the technology works.
I’m Rana el Kaliouby and this is Pioneers of AI, a podcast taking you behind the scenes of the AI Revolution.
[THEME MUSIC]
Hi Elad, thank you for joining us today.
FERBER: Hi Rana, it’s great to be here. Thank you for having me.
EL KALIOUBY: So this is not the first time we’re interacting. In fact, we met about a year ago when you were raising money for Synthpop. And full disclosure, I’m one of the early investors in Synthpop, very proud to be a supporter of the company.
One of my investment theses is the intersection of AI and health and wellness, and you squarely sit in that space.
Why healthcare administration is ripe for AI automation
So, Elad, give us the elevator pitch for Synthpop.
FERBER: So at Synthpop, we come to automate healthcare workflows and specifically administrative workflows. We really believe there’s so much work that is being done today that can be automated. These are mundane tasks that are very tedious, to be honest, wasting hours of people’s days every day. I founded the company together with Jan Janink, who is a computer science PhD from Stanford and taught scalable systems and applications of large language models for years, and we just decided to attempt to solve this huge problem in healthcare that drives so much cost and so much frustration, not only for administrators, but also for patients. That could mean anywhere from patient intake — understanding, for example, who the patient is. We need to type their name into our system and see if they’re there. And if they’re not, we need to create a new file and we need to ask the patient some questions and maybe have them fill a questionnaire.
And then we need to call insurance or log into a portal and figure out if their insurance is active. If they’re coming for a specific service, what’s their copay, what’s their financial responsibility. And that’s even before you see them for the first time. Now they come to the clinic. We need to take them in and figure out what services we want to provide and deliver those services — for example, we support a customer that is building power wheelchairs for patients and a power wheelchair can have 10 different vendors that are supplying different pieces of the wheelchair and they’re integrating it together and every piece needs to be justified with Medicare, for example, to be eligible for reimbursement. So we do all of those things, and all those things, you can imagine, take so much time.
And by the way, across this continuum, it’s important to use modalities that humans use.
EL KALIOUBY: Like chat? That means text, obviously, because that’s what most people think about when they think about—
FERBER: But also voice. And speaking is very important to us and AI actually has an opportunity to really transform communications through these modalities.
And that’s what we’re here for.
How AI agents move beyond chat to get real work done
EL KALIOUBY: One of the things that I love about Synthpop is that you’re building AI agents, which I believe is the next frontier of AI. That’s basically AI that isn’t just, you know, chatting with AI. It’s actually getting stuff done on your behalf and it’s automating these mundane tasks.
FERBER: Yeah, well, we have a bunch of products out. One of the latest products that we’ve released, and is now making many thousands of phone calls to payers every single day, is our AI caller. Payers have varying levels of support they provide via portals and APIs. You can imagine a sleep clinic that is covering several states. So they have multiple locations and they can send sleep tests to people’s homes and they can also provide therapy. It’s located, let’s say, in California and they serve multiple states. They need to now navigate many health plans because they serve hundreds of thousands of patients a year, and every patient has their own plan. Every patient has a different copay and deductible. Maybe you have a patient that’s in Blue Cross Blue Shield of Nevada. They’re covered there, but actually living in Arizona now, and you need to figure out which plan guidelines are actually active for that patient.
A customer like that will have a team of dozens of people whose job is to actually just wait on the—
EL KALIOUBY: On the line.
FERBER: Waiting on the line for 45 minutes.
We’ve seen people waiting for an hour and 30 minutes just getting validation for a patient’s insurance and it’s insane. And our bots can do that for them.
What makes an AI insurance caller actually work
EL KALIOUBY: How’s the AI caller going to do that then?
FERBER: Yeah, so it calls the payer. Of course we also are able to handle documentation intake. That was our first product. And so we can read the insurance card.
We can understand who the patient is. We can understand what services. So our AI is also connected to their system, and can actually retrieve all the information it needs, which is also kind of a fascinating thing. So if there’s a question on the other side, we know where to get that data — very much like a human operator would.
EL KALIOUBY: I think that’s really important, right? Like this AI caller, it’s not that it has a script that it’s following. And if it deviates from the script, it’s stuck.
FERBER: The agent can decide what to do. At any given point in the conversation, it’s choosing what the best action is. And it could be to go and retrieve information, it could be to dial a number on the dial pad, or it could be to ask a question.
So the agent can decide in real time. And it does a great job doing that. We have great success rates with those calls. It took us actually a long time to get to very high success rates, but now it’s doing a really good job.
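The per-turn decision-making Elad describes can be sketched as a simple loop: at each point in the conversation, the agent picks one action from a fixed set. This is an illustrative toy, not Synthpop’s actual system; `choose_action` stands in for what would really be an LLM call, and the keyword-matching rules are invented for the example.

```python
# Hypothetical sketch of a per-turn action loop for a phone agent.
# In a real system, choose_action would prompt an LLM with the
# conversation so far and the set of allowed actions.

ACTIONS = ["speak", "retrieve", "dial", "hang_up"]

def choose_action(transcript):
    """Pick the next action based on the latest utterance (stubbed rules)."""
    last = transcript[-1].lower() if transcript else ""
    if "press 1" in last:
        return ("dial", "1")           # navigate the phone tree
    if "member id" in last:
        return ("retrieve", "member_id")  # look up data in the clinic's system
    if "goodbye" in last:
        return ("hang_up", None)       # conversation is done
    return ("speak", "Could you repeat that, please?")

def run_call(incoming_lines):
    """Drive one call: for each utterance from the payer, decide an action."""
    transcript, log = [], []
    for line in incoming_lines:
        transcript.append(line)
        action, arg = choose_action(transcript)
        log.append((action, arg))
        if action == "hang_up":
            break
    return log

log = run_call([
    "Please press 1 for eligibility.",
    "Can I have the member ID?",
    "Thank you, goodbye.",
])
# log: [("dial", "1"), ("retrieve", "member_id"), ("hang_up", None)]
```

The key design point from the interview is that the action is chosen fresh at every turn rather than read off a rigid script, which is what lets the agent recover when a call deviates from the expected path.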
EL KALIOUBY: I love it. Well, let’s try it out.
FERBER: Yeah, let’s do it.
EL KALIOUBY: What you’re hearing right now is a call that Elad recorded – with permission – between an insurance company and Synthpop’s AI agent.
That voice right there is the AI agent. Does it sound familiar? It’s actually a clone of Elad’s voice.
After the AI agent gets through the robo phone tree, it’s directed to a real human on the other end of the line.
The AI agent can provide identifying information, and also tell the real human on the other side when there’s information it doesn’t know.
Like Elad said, this process of navigating insurance companies can normally take hours. This AI agent saves precious time.
This AI Agent in action is pretty cool. It’s responsive, it’s accurate. It’s wildly efficient. But HOW does this technology work? And what guardrails are in place?
That’s after a short break. Stay with us.
[AD BREAK]
The trust and guardrails behind a patient facing AI agent
EL KALIOUBY: So I’d love to take us behind the scenes on how you’ve built this AI agent. This AI clone is basically a clone of your voice, right?
FERBER: Yeah, that’s right. We needed someone whose voice rights we had. We didn’t want to take a Scarlett Johansson or anything like that.
EL KALIOUBY: That would be a riot.
FERBER: Mine was a fine second alternative.
EL KALIOUBY: Okay. Yeah.
FERBER: It’s calling — I think right now, as we speak, there’s thousands of calls going on with my voice, which is pretty wild.
EL KALIOUBY: That’s wild. Yeah, it is pretty cool.
But, I guess you have to really put trust in this AI agent if it’s going to have access to information about the patient. Like, where can this go wrong?
FERBER: I think it can go wrong if it misrepresents the patient. Tasks that have a more substantial financial impact are a different story. In this case, we’re just finding out information that should be available to our client and should be available to the patient.
Like, is prior authorization needed for a specific therapy under the patient’s plan? Just asking that question doesn’t change anything for the patient. But we do QA on this thing.
EL KALIOUBY: QA as in Quality Assurance.
FERBER: So we have a QA team that actually listens to a lot of these calls.
We do it also automatically using AI to QA our AI, and so we make sure that the accuracy level on those is very, very high. We do not tolerate types of errors that will misrepresent data to the patient or will say that something is covered when it’s actually not covered — that’s a big one. We actually prioritize and we can tweak our algorithms to make sure that those mistakes are almost non-existent.
And perhaps other things are maybe more tolerable.
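The severity-weighted QA Elad describes, where some error types are treated as intolerable while others are acceptable, could be sketched like this. This is a rough illustration, not Synthpop’s actual pipeline: in practice the reviewer would be an LLM judge (the "AI to QA our AI" he mentions), and the keyword rules here are stand-ins.

```python
# Illustrative sketch of automated call QA that flags the high-severity
# error Elad calls out: the agent asserting coverage the payer never confirmed.
# A production reviewer would be an LLM judge, not keyword matching.

HIGH_SEVERITY = "claimed_coverage_unconfirmed"

def review_call(agent_lines, payer_lines):
    """Return a list of QA flags for one call transcript."""
    flags = []
    agent_said_covered = any("is covered" in l.lower() for l in agent_lines)
    payer_confirmed = any("covered" in l.lower() for l in payer_lines)
    if agent_said_covered and not payer_confirmed:
        flags.append(HIGH_SEVERITY)
    return flags

# Flagged: the agent claimed coverage, but the payer never confirmed it.
flags = review_call(["The service is covered."], ["Let me check your plan."])
```

Prioritizing checks by severity is what lets the team "tweak the algorithms" so that coverage misstatements are almost non-existent, while tolerating more benign slips.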
EL KALIOUBY: Elad says that the AI agent only uses information that is already in the system, so it’s not prone to any errors that are out there, say, on the general internet.
But mistakes could still happen. Like the AI agent could give the wrong code or the wrong state because the information is wrong in the system. Maybe there’s an old insurance card on file, or an address hasn’t been updated.
These are the same mistakes a human agent would make if that was the only information they had.
FERBER: We don’t do a job that is better than a human in that sense. There’s some mistakes that are just inherent in the system, and that’s something that we are not necessarily able to fix with this.
Why building natural voice agents is harder than it looks
EL KALIOUBY: Walk us through the behind the scenes of the technology. How did you build this?
FERBER: So I think modeling a conversation, you could think of a very simple loop of trying to figure out when the other end has stopped speaking. And that’s something that’s called silence detection. And when you talk to ChatGPT on your phone, if you have tried it, one of the key things is to understand when it’s time for me to speak.
And actually, even for humans, it’s pretty complex because we can talk over each other. We can understand cues in our tone to understand when to talk and for it to not sound uncanny. That’s something that we pay a lot of attention to.
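The "very simple loop" version of silence detection Elad mentions can be sketched as an energy threshold over audio frames: declare end of turn after enough consecutive quiet frames. Real systems use trained voice-activity-detection models and prosodic cues; the thresholds and frame representation here are invented for illustration.

```python
# Minimal energy-based silence detection, the naive baseline for deciding
# when the other side has stopped speaking. Frames are lists of amplitude
# samples; threshold values are illustrative, not tuned.

def is_silence(frame, threshold=0.01):
    """A frame is silent if its mean absolute amplitude is below threshold."""
    return sum(abs(s) for s in frame) / len(frame) < threshold

def end_of_turn(frames, min_silent_frames=3):
    """Declare the speaker done after N consecutive silent frames."""
    run = 0
    for frame in frames:
        run = run + 1 if is_silence(frame) else 0
        if run >= min_silent_frames:
            return True
    return False
```

The hard part, as Elad notes, is everything this loop ignores: people talk over each other, pause mid-thought, and signal turn-taking through tone, which is why naive endpointing sounds uncanny.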
EL KALIOUBY: It was interesting. It was adding all these things that make it sound very human.
FERBER: Exactly. Because latency is everything. And we as humans are so similar to LLMs in a sense, because when we decide to speak, when it’s my turn in the conversation to speak, it’s not like I have my entire soliloquy written out in my head at that point. I’m actually also almost like a word-by-word generator.
EL KALIOUBY: Mm hmm. Mm hmm.
FERBER: And giving the model a little more time for it to not sound uncanny and to have the snappy latency that signals to the other side that it’s time for us to assert and speak in a conversation — that’s really important. And I mentioned earlier in our conversation how we have a model where the agent has to decide what action it needs to take at any given point.
So should I speak, should I retrieve, should I hang up? Maybe the conversation is done? There’s a bunch of actions, and understanding the right action, that’s the number one key thing. There are so many nuances. For the longest time on Blue Cross Blue Shield of California, we were saying and spelling patient details and it would not get it right on the other end.
You know, E-L-A-D — that’s the name — it would hear A-L-A-D. No, it’s E-L-A-D. And then, for example, we started doing NATO alphabet, and it fixed it, and you know what?
EL KALIOUBY: Like E for elephant. L for Lima. Right. Okay. Huh.
FERBER: And some of those small tweaks that perhaps are non-existent in other use cases, we found them, and that unlocked Blue Cross Blue Shield of California. All of a sudden the success rates went from like 60 percent to 95%.
There is an agentic flow that you need to devise. And I think building agents is actually not simple.
How structured agentic flows create reliable outcomes
EL KALIOUBY: Explain to us what you mean by an agentic flow.
FERBER: It’s like an algorithm that can contain, in our case, 70 calls to LLMs along the way, in order to do one big task. It has five big pieces of the flow, and that’s maybe fixed almost in code or in configuration in our case. But within each piece of the process, there is some freedom of operation for the decision-making process to do more or less, for example.
So when we say an agent, we sometimes think of an LLM that can autonomously decide what to do.
We partially have that, but it’s also partially scripted. There’s—
EL KALIOUBY: That’s important in this case because it’s a very set kind of sequence of administrative tasks.
FERBER: We also want to have repeatable results. If you give AutoGPT “create a business that will make me a billion dollars a year,” it will come up with something. If you just tweak it by one word, it will come up with something completely different.
And even if you write the same prompt, it will have completely non-deterministic results. For us, we want to guarantee less of that, and we want to be more deterministic rather than not. And I think that helps us achieve something that customers can count on.
We call it composable flows. Being able to have composable flows for custom use cases is really important. We have a composable flow that we can actually adapt our agent to do what your humans are doing. Our customers can start with our AI agents at a small scale. It works alongside humans. They don’t have to give us a hundred percent of their volume on day one. Our agents actually work across the same process, the same instruction set as a human would, and achieve the same format of results. Humans and AI work side by side. And what we found is as soon as we can show that, the customer is like, we need to go full throttle right now.
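The structure Elad describes, a handful of big stages fixed in configuration with model-driven freedom inside each one, might look roughly like this. Stage names and the `llm` stub are hypothetical, not Synthpop’s API; a real flow would make many LLM calls (he cites around 70) inside those stages.

```python
# Sketch of a "composable flow": the stage sequence is fixed in
# configuration, while each stage internally makes its own model-driven
# decisions. All names here are illustrative.

def llm(prompt, context):
    """Stand-in for an LLM call; within a stage the model has freedom."""
    return f"done:{prompt}"

# The fixed backbone: order is configuration, not left to the model.
STAGES = ["navigate_ivr", "verify_identity", "ask_eligibility",
          "capture_answers", "wrap_up"]

def run_flow(patient):
    """Execute the fixed stage sequence, collecting a result per stage."""
    results = {}
    for stage in STAGES:
        results[stage] = llm(stage, patient)
    return results

out = run_flow({"name": "Elad", "plan": "BCBS CA"})
```

Pinning the backbone in configuration is what trades some autonomy for the repeatable, near-deterministic results customers can count on, while keeping the flow composable enough to adapt to a new customer’s existing process.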
What AI automation means for healthcare jobs and human expertise
EL KALIOUBY: I’m sure some of our listeners are listening to this and thinking, here we go again. Here’s another example of AI taking human jobs. What would your answer to that be?
FERBER: Well, I think there is going to be an impact of some sort, but to be honest, the people we work with, we still need them to a large extent.
EL KALIOUBY: Mm hmm.
FERBER: We work in tandem with them. And the call scenario is an interesting example because there are some complex scenarios of primary, secondary, and tertiary insurance problems that we might not be able to solve. One of the actions our agents can do in a call is say, “I think this is a little bit above my pay grade, let me loop in an expert.”
EL KALIOUBY: Right.
FERBER: And I think that is very powerful. We still need humans to work alongside us. We need humans to train us. I think where there’s going to be some reskilling in the workforce is actually in those more entry-level jobs — not the experts, and not those who spent the last five or 10 years doing administrative roles, because the expertise they built is actually super valuable. It’s actually that workforce that is so hard to staff.
That workforce is relatively entry-level and highly cyclical; there’s a lot of turnover. A lot of our customers really struggle with that.
And I think scaling is just a natural thing. You know, with the invention of the car, you don’t need as many horse caretakers anymore.
And when they moved to the electronic health record system, maybe you could do away with a lot of mail and shipping, actual stacks of documents between offices. Those jobs are transformed.
But again, I think the healthcare experts — I don’t think they’re at risk of this. Actually, they can just be much more efficient, less burned out.
EL KALIOUBY: Yep, absolutely.
FERBER: That’s what we see. And the reactions are really all positive from people we talk with.
Why disclosure and ROI shape responsible AI adoption
EL KALIOUBY: So another question I have is, does Elad’s AI disclose that it’s an AI when it calls an insurance agent?
FERBER: Yeah, so that depends. We had kind of an interesting experience with that. I think if you’re talking to a human, there is an interest in disclosing that. If I’m talking to another bot, I think that’s fair game.
EL KALIOUBY: Yeah. I’m also kind of fascinated by new business models that come with AI. What is the business model in this case? Is it per call?
FERBER: Yeah. We thought about it so hard, and the most important thing for me is ROI for the customer. We don’t charge per minute or anything. We charge per successful call. And what we call a successful call is one where a human would not need to do that.
We have just eliminated the need for a human to touch this ever again.
EL KALIOUBY: Job done. Huh.
FERBER: That’s true also on our order pipelines and our document ingestion and data entry. Sometimes we will need a human in the loop and sometimes the order is so complex or unreadable that we can’t do anything with it. We basically align our ROI with the impact on the customer’s labor. We wanted to make sure we’re priced in a way that makes it a no-brainer for the customer to use us. If we were to take a different approach — like a big chunky platform fee and pay per minute for everything we do for you — I feel like customers are going to have a harder time committing to that because you’re delaying the ROI discussion.
The ROI discussion is going to be there.
ROI leads the way. That’s what we believe in, and that’s how we decided on our business model.
EL KALIOUBY: We’re going to take a short break. But when we come back, Elad and I talk about what an agentic future could look like.
[AD BREAK]
Where consumer AI agents may fit into everyday life
EL KALIOUBY: So Synthpop is really focused on building AI agents that automate these kind of mundane tasks in the healthcare system. I want to move us on to what is the place for AI agents in our everyday lives? I’m curious where you think the opportunity for AI agents is for consumers, and where do we stand with that? Have you seen any examples that have impressed you or you found curious?
FERBER: Yeah. So I’d like an AI that can handle my communications, but I would actually like it to ask me maybe five questions that will help it understand my mood today, because maybe I’ve had a crazy dream. I have a new vision about who I am and what I want to do in this world.
And actually I’m going to answer those emails a little differently than what I’ve done so far. And that’s part of the friction today with adopting AI tools that can scour your emails and try to kind of get your style.
I’ve tried Gemini AI or even ChatGPT to try to draft an email, and I rarely just accept what it is and send — not because I have a problem with AI-generated text necessarily, but because it doesn’t exactly capture my emotion, my vision, my purpose.
And I feel like being better at that requires some back and forth about who you are at the moment, because we are living and breathing animals and we’re changing from moment to moment. We’re doing this interview in the morning; if we were doing it in the afternoon, it could have been a little different. And so I want to capture me as I change throughout the day.
And I think that sensitivity to the other side is critical.
EL KALIOUBY: Yeah. As a CEO, what’s keeping you up at night?
FERBER: I can sleep pretty well.
EL KALIOUBY: That’s good. That’s good. I love that.
FERBER: Young kids.
EL KALIOUBY: Okay. How old are your kids?
FERBER: One and three. But what keeps me up at night, maybe metaphorically, is how fast to expand. That’s a really interesting problem because there’s always a tension of being able to serve and expand with our current customers and learn to do that better, versus rapidly expanding to the blue ocean of opportunity.
And getting more and more new customers and new use cases. Yes, we’ve built a composable and scalable architecture, but it still takes time and effort and technical work to serve those markets and new customers. It’s always an interesting dilemma — it’s a blue ocean out there and we need to go after it, and we are going after it. But what is the best way for us to do that? That’s kind of the interesting pull.
EL KALIOUBY: Thank you, Elad, for joining us. This was awesome. I learned so much.
FERBER: Thank you so much Rana for having me.
Episode Takeaways
- Rana el Kaliouby opens with a familiar headache—healthcare phone trees and faxes—and introduces Synthpop CEO Elad Ferber, who wants AI to tackle that administrative mess.
- Ferber says Synthpop automates tedious healthcare workflows, from patient intake to insurance verification, by meeting clinics where they are and working across voice, text, and documents.
- The company’s AI caller is now making thousands of insurance calls a day, navigating hold times and coverage questions in real time so staff don’t spend hours waiting on the line.
- Behind the scenes, Ferber explains, the agent blends voice cloning, live decision-making, and carefully designed guardrails, with human and AI quality checks to keep errors rare and stakes low.
- Looking ahead, Ferber argues AI agents will work alongside people, not simply replace them, and says the real opportunity is building tools that are useful, emotionally aware, and priced around clear ROI.