From deepfakes to fabricated news, AI has been shown to generate convincing deceptive images and videos, accelerating the spread of misinformation. David Rand, a professor of Information Science and Marketing at Cornell University, is using the same technology for the opposite purpose: debunking conspiracy theories. In a recent, groundbreaking study, Rand and his team asked participants who believed in conspiracies to talk with an AI chatbot specifically designed to reduce those beliefs. The results were surprising. Rand joins Pioneers of AI to discuss what makes conspiracy theories so alluring, how AI can be a powerful tool for debunking them through fact-based conversation, and what this could mean for combating other forms of misinformation.
About David
- Professor of Information Science, Marketing & Management Communication at Cornell
- Pioneered AI chatbot dialogues that cut conspiracy belief ~20% in studies
- Effects held steady 10 days and ~2 months after intervention
- Co-created DebunkBot; drew 100k+ organic visitors by 2025
- Leading researcher in misinformation, polarization, and human cooperation
Table of Contents:
- Why conspiracy theories feel persuasive in the first place
- Why personalized evidence may work better than confrontation
- How the AI debunking study was designed
- Why the strongest surprise was that minds actually changed
- What the results revealed about lasting belief change
- How debunking tools can work in the real world
- Why uncertainty and critical thinking matter in breaking new conspiracies
- Where this approach could help beyond conspiracy theories
- Episode Takeaways
Transcript:
Can AI talk us out of conspiracy theories?
RANA EL KALIOUBY: It’s true that AI can accelerate misinformation and conspiracy theories, because it can generate fake images and videos with the click of a button. But AI can also be part of the solution. And that’s the focus of David Rand, a Brain and Cognitive Science professor at MIT and now Cornell.
DAVID RAND: Given that the technology exists, bad actors are gonna be using it to do bad things. And so what kind of positive things can we do with it? Like what are ways to have some net positive impact on society?
EL KALIOUBY: David and his team harnessed the power of AI chatbots to help debunk conspiracy theories. And on this episode of Pioneers of AI, we’re digging into how that actually works. We’ll look at how AI can walk someone through their belief in a conspiracy theory, and possibly de-program their thinking – with personalized, fact-based conversation.
I’m Rana el Kaliouby and this is Pioneers of AI, a podcast taking you behind the scenes of the AI revolution.
[THEME MUSIC]
Hi David. Welcome to Pioneers of AI.
RAND: Thanks for having me. It’s great to be here.
EL KALIOUBY: So when we were doing research for this podcast, I realized that you were in several rock bands growing up. Tell us more about that.
RAND: Yeah. That was from, I think, maybe junior high through the first year of grad school. The thing I cared the most about in life was playing in punk rock bands. And so I played in normal punk bands through college, and then all my band mates graduated and left. So I started doing electronic music, where I programmed things on the laptop, plugged it into the speakers, and ran around singing and doing jump kicks.
EL KALIOUBY: Cool. What was the reception? Was it okay?
RAND: One of my old band mates would always say this about my sets: for the first song, he thought it was kind of funny, 'cause it was a guy running around singing with a laptop. Then by the second song, he started to feel really awkward and embarrassed for me, 'cause there's just a guy running around singing with a laptop. And then by the third song, he started getting into it.
EL KALIOUBY: Okay, great. That’s awesome. I love that.
Why conspiracy theories feel persuasive in the first place
So let’s talk about conspiracy theories. And I wanna start first by defining what a conspiracy theory is. It’s basically a theory that explains an event or a set of circumstances as a result of a secret plot, usually by some powerful conspirators. Would you agree with that definition, and also what are some of the most common conspiracy theories out there today?
RAND: Yeah. So that’s a great definition of a conspiracy. And one important point is that not all conspiracies are false. Sometimes powerful actors really do conspire — Watergate was a conspiracy. And not all false claims are conspiracies, either. There are lots of false claims that just directly make inaccurate statements but don’t have this conspiratorial aspect to them. So it’s a particular slice of things, and I think a lot of the reason people are interested in conspiracy theories is that, although they’re not definitionally false, many of the very popular ones are pretty obviously false and yet widely believed.
And so it makes them kind of psychologically interesting — like, how is it that people continue to believe things that are clearly not true in the face of substantial amounts of counter-evidence?
EL KALIOUBY: Yeah, well what is the psychology of conspiracy theories? Like, why do people believe in them, and believe so strongly?
RAND: Yeah, so this is what the research we’re gonna talk about today grew out of. Various previous estimates have said that something like half of Americans believe at least one conspiracy theory that is widely refuted.
And so the standard psychological explanation is people want to believe. There are all these different motivations that drive you to believe conspiracy theories.
And so therefore you sort of ignore or write off counter-evidence because you want to believe, and these psychological motivations basically blind you to corrective evidence. But this line of research that we’ve been doing was largely set up to challenge that take. So it’s possible that people believe conspiracy theories because they just ignore evidence, and these motivations insulate them against inconvenient evidence.
But another possibility is just that people often haven’t heard the right evidence. And what we’ve seen from some of these dialogues that we’ve collected is people will say things like, oh, I a hundred percent believe that 9/11 was an inside job. I’ve watched a bunch of videos on YouTube and they were very compelling.
And so it’s essentially like they were only exposed to the conspiratorial explanation and never got the debunking. People aren’t necessarily motivated to go out and search for disconfirmatory evidence. But that doesn’t mean they would ignore it when they get it.
And actually, I think another important part of this is, my close collaborator Gordon Pennycook and I had a paper recently on conspiracy belief where we found that conspiracy believers massively overestimate how many other people believe in the conspiracy they believe.
EL KALIOUBY: Interesting. I think everybody is like, it’s obvious and everybody believes the same theory.
RAND: Right. And it seems crazy because of like, how could that be? So part of it is what we show in that paper is that a strong predictor of believing in conspiracy theories is being overconfident in your own abilities. But I think part of that overconfidence is also not appreciating the fact that other people disagree with you. You would think that, okay, so how would you know that other people don’t believe the conspiracy theory? It would be because when you say, hey, I believe 9/11 was an inside job, they’d be like, no, come on.
That’s crazy. But if you think about it, nobody — or almost nobody — likes doing that because people don’t like confrontation. So when you’re at Thanksgiving and you’re saying all your crazy conspiracy theories, you might have some relatives that will be willing to get into it with you, but most people are gonna be like, yeah, all right, whatever, Bob. And then change the subject. And so if you’re an overconfident person, you’re like, oh, great. They agreed with me.
EL KALIOUBY: During COVID-19, my mom saw this guy — he’s an Italian talker — and I guess he believed that Bill Gates was implanting 5G chips in the COVID vaccine, and she kind of believed it. And to your point about not being confrontational, I just rolled my eyes and left it at that.
RAND: Right, right. Exactly. So a result of that is, I think, that a good chunk of the people who strongly believe conspiracy theories haven’t actually heard the non-conspiratorial explanation for whatever they’re interested in, explained in a clear and cogent way. And so our idea was, maybe facts and evidence do matter to a lot of conspiracy theorists — you just have to give the right facts in the right way. Because another important element of conspiracy theories is that they’re often very complicated, and since they’re not constrained by the truth, you can have many different versions of them. Part of what makes it really hard for humans to debunk other people’s beliefs is this great variety in what people believe. And then we were like, okay, well, what has access to a vast amount of information and the ability to personalize its response to whatever the person says?
Oh, well, these new large language models, like GPT. And so that was what we did in this project. We wanted to say, could GPT actually effectively debunk conspiracies? Which would mean that information does work if it’s the right information. Or are the conspiracy theorists just gonna ignore whatever GPT has to say, which is what you would expect based on this sort of more motivational, “I want to believe” kind of psychological explanation.
Why personalized evidence may work better than confrontation
EL KALIOUBY: So what makes a conspiracy theory successful? Like, what are the ingredients of a conspiracy theory, and also are there stronger theories than others and how do you assess that?
RAND: The question of what makes a conspiracy theory successful is a million dollar question if you’re a conspiracy theory generator. And I don’t think we really know — in the same way, it’s like saying what makes content go viral? It’s like, well, if I knew, all my stuff would be going viral.
And actually it’s an evolutionary process where the ecosystem is just like all these different possible conspiracy theories.
Every time somebody makes up a new one, it’s like introducing a new variant. And so you can wind up with these very elaborate, very bizarre, but very popular conspiracy theories because they arise out of this sort of cultural evolutionary process.
EL KALIOUBY: We’re going to take a short break. When we return we’ll dive into David’s study, which used AI chatbots to debunk conspiracy theories – and the surprising results of that research. Stay with us.
[AD BREAK]
How the AI debunking study was designed
EL KALIOUBY: So you went in with this hypothesis that using AI could possibly help people move away from their belief in a theory. So who did you partner with? How did you conduct the study? Where did you find these people? What do you tell them coming in?
RAND: Yeah. So this is a collaboration with my long-term, really close collaborator Gordon Pennycook, who’s in the psych department at Cornell, and Tom Costello, he’s a professor at American University now, and he is moving to Carnegie Mellon this summer. And so we did online survey experiments. We recruited participants from a sort of nationally representative-ish sample. And then we start by asking them what conspiracy they believe, and then we filter down to people that actually believe a conspiracy.
And they could just type out anything that they want.
Now what’s the evidence that you see for this? Like, what makes you believe it?
And they type that out. And then we use GPT to summarize back everything they wrote into one sentence. So we have like a running example I’ll use here — a 9/11 conspiracist in our data. And so the person was like, I think that the US government was behind the 9/11 attacks, and evidence for it is that World Trade Center Building 7 collapsed even though it wasn’t hit by a plane. And Bush didn’t look at all surprised when he was reading to children and somebody whispered in his ear that 9/11 was happening and he just kept reading to the kids.
And we say, okay, this is what you said — now zero to 100, how much do you believe it? And so now we’ve got this numerical measure of how much they believe it.
And then we say, okay, now in the next part of the experiment, you’re gonna have a conversation with advanced AI. And the goal of the study is to see how humans and AI can have conversations about complicated topics.
And for the people that are in the treatment, we say you’re gonna talk to the AI about one of the things you just answered questions about. And in the control we say you’re gonna talk to the AI about whether you like dogs versus cats more, or what you think about the firefighters, or your experiences with healthcare.
Whatever random things that are not related to conspiracy theories. So we can just control for the effect of having a conversation with an AI in general, but not about the specific thing that we care about. Then we tell the AI in the treatment, you’re gonna be talking to a conspiracy theory believer, this is the conspiracy they believed.
This is how much they believed it, zero to a hundred. Your goal is to explain to them why it’s not supported by evidence and talk them out of it and try to change their mind to have a less conspiratorial view of the world. And I should say that when we were designing this experiment, Tom was like, I’ve got all these ideas for different prompts for the AI to try, And I was like, dude, there’s no way this is gonna work. Just pick something. Let’s just try it. Probably it’s gonna be a bust, so just don’t sink too much time into it.
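The setup David describes — take the participant's own summary and belief rating, then prime the model to debunk it — can be sketched in a few lines. This is purely illustrative: the function name and prompt wording below are my paraphrase of the description above, not the study's actual code or prompt.

```python
# Illustrative sketch (not the study's actual prompt): assembling a
# personalized debunking prompt for a chat model, in the message format
# used by common chat-completion APIs.

def build_debunk_messages(summary: str, belief: int) -> list[dict]:
    """summary: one-sentence restatement of the participant's conspiracy.
    belief: their self-rated belief on a 0-100 scale."""
    system = (
        "You will talk with someone who believes the following conspiracy "
        f"theory: '{summary}'. They rated their belief at {belief}/100. "
        "Politely present facts and evidence showing the theory is not "
        "supported, and try to move them toward a less conspiratorial view."
    )
    return [{"role": "system", "content": system}]

# Example, using the 9/11 believer discussed above:
msgs = build_debunk_messages(
    "The US government was behind the 9/11 attacks.", 100
)
print(msgs[0]["role"])  # system
```

In the real experiment this message list would then seed a three-round back-and-forth, with each participant reply appended before the next model call.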
Why the strongest surprise was that minds actually changed
EL KALIOUBY: Was your hypothesis going in like, come on, AI’s not gonna move people on this?
RAND: That is basically it. I was totally on board with the “I want to believe” kind of explanation of conspiracy theories, and actually, because I’ve been working on misinformation and social media for the last decade, I would often get asked by reporters what can you do to talk people out of conspiracy theories once they’ve gone down the rabbit hole.
And I was always like, basically it’s a lost cause at that point, and so what we really should be focused on is trying to prevent people from starting to believe in conspiracies in the first place. So that was the lens I brought to this work at the beginning. So then they have this back and forth conversation. For our 9/11 conspiracy theorist, the AI always starts out being very polite and sort of affirming, like, I understand why big issues like this would raise lots of questions, but let’s look at the evidence. And it says, okay, it’s true that World Trade Center Building 7 collapsed even though it wasn’t hit by a plane, but the NIST investigation showed that’s because it was hit by debris from one of the towers that was hit by a plane, and then it caught fire and collapsed. And it’s true that Bush kept reading to the kids after he was told about the attacks. Supporters and critics have argued about whether that was the right thing to do, but he was trying to avoid a panic; it’s not that he already knew about it. And remember, conspiracy theories are trying to offer easy explanations for things, so engage in critical thinking.
And then the person is like, okay, but what about these reports that the towers were brought down by demolitions, and also I believe jet fuel doesn’t burn hot enough to melt steel.
These are like classic 9/11 conspiracy theory talking points. And so then the AI is like, yeah, it’s true that people have speculated about this kind of thing before, but there’s lots of evidence that the towers were not brought down by demolitions.
The things that sounded like explosions were successive floors of the tower collapsing one after another, making booms. And it takes months to set up a controlled demolition in the basement of a building; there’s no way they could have done it without somebody finding out about it.
And then the person is like, okay, fine. But then how did we let these men into our country so easily and give them flying lessons? It seems like there really wasn’t much security. And then the AI was like, yes, it’s true that that happened, but you have to remember that before 9/11 there were no security procedures in place to be watching for that kind of thing, because they didn’t know that it was an issue. And then the person is like, okay, thanks very much.
And so part of what I think comes out of these kinds of dialogues is that with conspiracy theories in particular, unlike other kinds of false beliefs, there’s this conspiratorial explanation, which is some kind of complicated thing.
And in general, the debunking isn’t saying the fact that you are citing is wrong. It’s like, yes, that did happen, but here is a much simpler non-conspiratorial explanation. And so then they go through this three-round back and forth dialogue with the model, and then.
EL KALIOUBY: How long is each round?
RAND: On average the dialogues took something like six or seven minutes.
EL KALIOUBY: Okay. Not hours and hours.
RAND: Yeah. Exactly. Exactly. And so now that you’ve talked to the AI, we’re gonna return to some of the questions we asked you about beforehand.
Whatever the one sentence summary is, we ask, how much do you believe it? And so this particular 9/11 conspiracist that started at a hundred percent goes down to 40%.
What the results revealed about lasting belief change
EL KALIOUBY: And was that kind of in general what you saw with the other respondents too?
RAND: Well, that was a particularly good one. But on average — so the first study had about a thousand participants, and then we did a 2,000-person replication, and since then we’ve run many of these studies and it always works. It’s usually around a 20% reduction in belief.
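For a concrete sense of how a pre/post effect like this gets quantified, here is a minimal sketch. The function name and every number except the 9/11 believer's 100-to-40 shift are made up for illustration; this is not the study's analysis code.

```python
# Illustrative sketch of measuring belief change from pre/post ratings
# on the study's 0-100 scale (hypothetical data, not the study's).
from statistics import mean

def belief_reduction(before: list[float], after: list[float]) -> float:
    """Mean drop in belief, in percentage points, across participants."""
    assert len(before) == len(after)
    return mean(b - a for b, a in zip(before, after))

# The 9/11 believer above went from 100 to 40; the other pairs are invented.
pre = [100, 85, 70, 90]
post = [40, 80, 55, 75]

print(belief_reduction(pre, post))  # 23.75 percentage points on these numbers
```

A real analysis would of course compare treatment against the control conversations rather than just averaging raw drops, but the before-minus-after difference is the quantity being summarized when Rand says "around a 20% reduction."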
EL KALIOUBY: That’s an amazing finding. It’s really powerful. Does it stick?
RAND: That’s a key question. So when we ran the original experiment my jaw hit the floor. I was like, whoa, this is crazy. This is like way more than we expected. And then our immediate question is exactly what you said — does this stick?
And so 10 days later we recontacted the people and we were just like, hey, here’s this statement. How much do you believe it? And at 10 days the effect was totally stable. Like it hadn’t gotten any smaller. It was just as big as it was immediately. And we were like, wow, this is really cool. We start writing up the paper, and then we have the paper basically ready to submit, and then we’re like, alright, one more time before we hit submit — let’s just recontact them again. So we contacted them. That was like two months after the original treatment. The effect again was completely stable. It hadn’t really decayed at all.
EL KALIOUBY: Yeah. Before we move on to the other kinds of false beliefs, et cetera, did you experiment with different personalities of the LLM, like one that is more forceful or one that’s nicer?
RAND: Yeah. So what we’ve done in follow-up work is a few things in that vein. The first thing that we did, which is like the most commonly asked question every time we presented this stuff, is people would say, well, what if it was a human? Like, is this something special about just people deferring to AI? And our theory going into the whole thing was that this isn’t anything AI-specific — it’s just that the AI is good at coming up with the facts and evidence. But we couldn’t really rule it out because in the original experiment we asked people beforehand how much they trust AI.
And as you might imagine, it worked better for people who trusted AI more, but importantly, even for people who said they completely didn’t trust AI, it still worked to some extent. And so we ran an experiment where we had people talk to the AI, and we either told them they were talking to an AI or told them they were talking to a human expert. We found it didn’t make any difference at all: even when you called it a human expert, it was just as effective at reducing people’s belief.
EL KALIOUBY: And so it’s not something special about thinking it’s an AI – it’s the facts. Talking with these chatbots helped change people’s minds about conspiracy theories – and the effects were long-lasting. So, what does it look like to apply this use of AI at scale? And are there more applications, beyond conspiracy theories? That’s after the break.
[AD BREAK]
How debunking tools can work in the real world
EL KALIOUBY: So you’ve essentially taken this research and you’re applying it at scale in a number of ways. You launched debunkbot.com, which I’ve played around with a little bit, with all these COVID conspiracies. But I feel like I have to be a believer first for this to really work. But yeah, what are the other ways?
RAND: Tell your mom.
EL KALIOUBY: Yeah, exactly. I’m gonna send it to her. I’m like, mom. Oh God, if she listens to this, she’s gonna hate me. But yeah, what are other ways you are applying this kind of finding?
RAND: Yeah, so the first thing to say about debunkbot.com: it’s a website out in the world where anybody can go and basically do exactly our study, and we’re in the process of a user experience overhaul to make it even friendlier. We’ve had more than a hundred thousand visitors from organic traffic alone, without doing anything to promote it, essentially. And when you analyze those conversations, you see that people changed their beliefs as much as, or more than, the people we paid to take the survey in our experiment. So listeners, you can go and try debunkbot yourself. If there are any conspiracies you believe in, see what you think. But part of the value is also that if you have friends or relatives who are conspiratorial, you can send them that way, and you can practice with it.
If you wanna get ready for your Thanksgiving debate, you can say the things that you’ve heard your friend or relative say, see what debunkbot says back to it, and then have that ammunition ready when you show up.
EL KALIOUBY: One powerful thing David has done is apply this work to real-time events. After the Trump assassination attempt in July 2024, he leveraged AI to combat developing conspiracy theories around the shooting. Here’s more on that.
RAND: Yeah. So in the original study, the beliefs that we were debunking were these classic conspiracy theories — or at least very widely discussed conspiracy theories where there is a large body of confirmatory evidence and non-conspiratorial explanations. That is very powerful, but it requires the non-conspiratorial explanations to exist. And so when things are happening in real time, we don’t know. It’s not like it has been investigated and here’s the real explanation. And so you might have expected that the debunkbot wouldn’t really work in those cases because it wouldn’t know what to say.
And so we wanted to try it out. As you mentioned, during the week after the first Trump assassination attempt, there were all different kinds of conspiracies going around, both on the right and on the left, generally of the flavor — the people on the right thought the Secret Service let the gunman through and wanted Trump to get assassinated, and the people on the left thought the whole thing was fake and it was just a publicity stunt for Trump.
And so what we did is we did this exact same setup, and to our surprise, we found that the model worked just about as well as it did for debunking the classic conspiracies. But when you look at what’s happening in the conversations, it’s doing it in a very different way. Like for the emerging conspiracies, it’s not offering alternative explanations.
EL KALIOUBY: Doesn’t have them. Right.
RAND: They don’t exist. Right? And instead what it’s saying is basically, here is what is known, and everything that’s not that we just don’t know. And so you shouldn’t be believing things — basically encouraging people to engage in some critical thinking and not just jump to conclusions and not just believe whatever they hear, but basically say anything that’s beyond what is known is just speculation. So don’t believe it.
After the second Trump assassination attempt, we recontacted a bunch of the people from the first study and were like, hey, what do you think about what’s happening with this assassination attempt?
And we found a spillover where the people that were in the treatment after the first conspiracy theory were less conspiratorial about the second.
Why uncertainty and critical thinking matter in breaking new conspiracies
EL KALIOUBY: So you did a study around the racial wealth gap, which I’m very curious about. It’s an area that I’m very interested in. Tell us more about that.
RAND: Yeah, totally. So after we did the first set of conspiracy theory experiments, we were like, wow, we picked conspiracy theories because we thought they were gonna be really resistant to evidence, and yet we found these really big effects. Let’s try to push it even further and find other boundary cases where we really don’t think this is gonna work.
And so the next experiment that we ran was on trying to explain sort of the structural factors underpinning the racial wealth gap, and trying to explain that to Republicans. And we’re like, nobody I think believes that the reason people are rejecting structural explanations for the racial wealth gap is about facts and evidence.
It’s like a super salient culture war kind of topic. It’s very identity-laden. And so I ran this with the expectation that it wouldn’t really do much again, because I didn’t think it was gonna be like, oh, you explained it to me, okay, that makes sense. Like, what world is that? So basically we had a similar setup where we recruited a large number of Republicans from these online samples.
We gave them some statistics about how white families in the US have so much more wealth than black families. And then we asked, one to seven, how much do you think each of these factors explains it? And we listed various structural factors (we didn’t use the word “structural”), things like legacies of slavery, versus other options like cultural or genetic explanations.
EL KALIOUBY: People don’t work hard or whatever. Yeah.
RAND: And then we said, okay, free text: write out exactly how you explain it. And then they have the conversation with the model. What we found was, as expected, that a large fraction of the respondents initially really rejected the structural explanations. But among those people, we got a really big increase, 15 or 20 percentage points, in people saying the structural explanations make sense. And when you look at the dialogues, a lot of people are saying, oh yeah, this is the first time someone’s explained something to me that actually makes sense.
EL KALIOUBY: Interesting.
RAND: And like with the conspiracy theories, something that’s important to remember is 75% of the conspiracy theorists didn’t stop believing after the conversation. So it’s not like it works for everyone. But the point is, there’s a really substantial chunk of people that just never heard the explanations before. And that’s true even for the racial wealth gap thing.
I think in general the average person doesn’t want to have inaccurate beliefs, but you very rarely get exposed to the evidence that contradicts what you already believe because you’re watching news that’s from your side, and you’re listening to political leaders that are from your side, and you’re hanging out with friends that are either from your side or polite and don’t want to get into arguments about things. And so you just don’t get exposed to the disconfirmatory evidence.
Where this approach could help beyond conspiracy theories
EL KALIOUBY: Are there other areas where you are very excited to apply this work? Ultimately your work is about changing people’s minds, right? Are there other use cases or applications?
RAND: I think there are a lot of applications in healthcare, in terms of countering misinformation around vaccines and things like that, but also way more generally helping people make informed decisions. And another one is local politics, where local elections in many cases actually impact your life much more than national elections.
And people are amazingly uninformed — even what we call high information voters, who care about politics in general, typically know very little about school board candidates and city council candidates and mayoral candidates and stuff like that. But you could imagine building one of these models that essentially just helps people navigate information — ingests all the statements from all the candidates and public videos — and then you can say, well, this is what I care about, and it says, okay, well this is sort of where different candidates stand on that.
EL KALIOUBY: All right, so to wrap up, I always ask all my guests the same question. In this world where AI is very persuasive and it’s smart and it knows all the evidence and facts, what does it mean to be human in the age of AI?
RAND: Okay, so my snarky answer is, it means check the sources.
EL KALIOUBY: I love it.
RAND: What it means to be human is to see whether the source that this AI is citing is something I actually think is credible or not.
EL KALIOUBY: I love it. That’s an awesome answer. I think this is a great way to end our conversation. Thank you, David.
RAND: Alright, well thanks so much. This was really fun.
EL KALIOUBY: Often conspiracy theories are deep-seated in people’s minds, and it seems like nothing could budge believers’ thinking at all. But David’s research suggests that AI can help with this – that facts DO matter. And by walking people through orchestrated dialogues powered by logic and facts, their minds CAN change.
And maybe AI is a great tool for this precisely because it doesn’t roll its eyes or get frustrated by someone’s beliefs. It just delivers a counterargument, patiently and methodically. That is huge.
Episode Takeaways
- Rana el Kaliouby opens with a paradox at the heart of AI: the same technology that can turbocharge misinformation may also help dismantle it, one careful conversation at a time.
- MIT and Cornell professor David Rand argues many conspiracy believers are not unreachable zealots, but people who often have not heard the strongest non-conspiratorial evidence clearly explained.
- In Rand’s study, participants described the conspiracy they believed, then debated an AI chatbot for just a few minutes, and their certainty dropped by about 20% on average.
- What surprised the team most was durability: those shifts held steady 10 days later and even two months later, suggesting facts delivered well can have real staying power.
- The conversation then widens beyond conspiracies, from DebunkBot and breaking-news rumors to the racial wealth gap, with Rand insisting the human job in the AI era is simple: check the sources.