We have all experienced searching for a job. Online platforms have made it easier than ever to share your resume, but harder to stand out in the sea of applicants. AI plays a growing role for job seekers and hiring managers alike. Journalist and author Hilke Schellmann spent years diving into the impact of AI on who gets hired and moves up – or doesn’t – in writing her book “The Algorithm.” Schellmann joins Pioneers of AI to talk about how and why she began reporting on AI and jobs, how AI has changed hiring, and how job seekers are now leveraging AI to give themselves an edge.
About Hilke
- Emmy-winning investigative reporter; NYU journalism assistant professor
- Author of The Algorithm on AI's role in hiring, promotion, and firing
- WSJ and Guardian contributor focused on AI accountability
- Sundance-screened FRONTLINE doc Outlawed in Pakistan won Emmy & OPC honors
- MIT Tech Review AI hiring investigation was a Webby finalist
Table of Contents:
- Why AI took over the hiring funnel
- Can algorithms reduce bias better than humans
- What blind hiring should actually focus on
- Why qualified candidates still get filtered out
- How platform behavior quietly disadvantages women
- Why biased training data keeps reproducing unfair outcomes
- How seemingly neutral signals become bias proxies
- How job seekers are learning to game AI systems
- When workplace monitoring creates stress instead of productivity
- Episode Takeaways
Transcript:
How AI is shaping the job market, with Hilke Schellmann
HILKE SCHELLMAN: So I called myself a Lyft, a ride share, and I got in the back of the car and just asked the driver, how are you doing? And in the history of me taking a Lyft, this has never happened: a driver said, oh, I had a really weird day. And I was like, really? Why? And he was kind enough to tell me that just a couple of hours before, he had a job interview with a robot. This was 2017, and I had no clue what he was talking about. Like, what? A job interview with a robot? He had applied for a baggage handler position at a local airport and got a call, and this pre-recorded voice, which he called the robot, had asked him three questions. I had never heard of anything like it. We chatted. It’s a pretty quick ride to Union Station, and I kind of forgot about it until a few months later, when I went to the first FAccT conference on AI fairness at NYU. I remembered the job robot, and I was hooked. I wanted to know more. So that started this whole eight-year, nine-year journey that is not over yet.
RANA EL KALIOUBY: Hilke Schellmann is an investigative journalist and author of the book, The Algorithm. She’s spent years reporting on artificial intelligence as it shows up in the workplace. And specifically, how AI is being used to hire workers.
If you’ve been on the job hunt recently, you know it can be hard. A survey from Aerotek found that nearly 70% of people say that their current job search was more challenging than their last.
And it’s true that the way companies hire is changing – from how they find candidates to how they evaluate resumes. AI is part of that change, as well as a growing factor in who gets promoted – and even who gets fired.
On this episode we’re going to dive into Hilke’s findings around all this, and my experiences, too.
I’m Rana el Kaliouby. And this — is Pioneers of AI, a podcast taking you behind the scenes of the AI revolution.
[THEME MUSIC]
Hilke Schellmann is a journalist who tackles big systems, and how they impact or fail people. She’s covered the barriers for sexual assault victims reporting crimes, and the student debt crisis.
Her book, The Algorithm, takes a holistic look at the effects of AI on people in the job market. I wanted to start with that first Lyft driver who gave her the idea.
EL KALIOUBY: So Hilke, thank you for joining us today.
SCHELLMAN: Thank you so much for having me.
EL KALIOUBY: When this Lyft driver was sharing his story, was he kind of saying it with excitement or was he — was it fear? Like, what was the emotion?
SCHELLMAN: Yeah, I think he was a little weirded out too.
EL KALIOUBY: Okay. Yeah.
SCHELLMAN: It definitely shook him up a little bit. I don’t know if it was fearful.
It was just really odd and really surprising to him, for sure. And now I’ve talked to a lot of people who’ve done one-way video interviews, and I don’t think they’re necessarily fearful. I just think the experience felt so weird to them, because you would assume that if you have a job interview with somebody, there’s somebody on the other side talking to you. And they’re like, there’s no one on the other side.
You get pre-recorded questions and then you record yourself answering. And they felt, you know, they were like, yeah, I’m really excited about the job, but I’m talking to my camera on my computer, and it’s kind of really weird to get excited about that. That moment with the Lyft driver was technically the beginning of writing this book, but there were a couple of years in between where I was working on stories about AI and technology for the Wall Street Journal, and I started to write more about AI and hiring for other outlets.
And like over the course of looking into this industry and seeing how big AI is becoming in HR and hiring and in the world of work, I sort of felt like, wow, I think this really warrants a book. This is a real sea change. So many companies are using this kind of technology.
And I didn’t see a lot of people talking about it, because I think it’s a little bit hidden in a way, because I think a lot of job seekers don’t necessarily know that AI is being used on them, right? They upload their resume and their application to a job platform, be it LinkedIn or Indeed, or ZipRecruiter.
They don’t understand that on the other side there might be AI that is, quote unquote, looking at their resume and possibly rejecting them or putting them in the next round, right? It could also be human, but we know that large job platforms all use AI. But I think it’s not really clear to job seekers.
So I thought there was a little bit of an information vacuum. And I think also that the technology started with folks who were in retail, in fast food often, where employers have to hire a lot of people and have high turnover. And then slowly we know that the technology was used to hire flight attendants, teachers.
Why AI took over the hiring funnel
EL KALIOUBY: Is there a cultural change in your mind that has caused AI to become so ubiquitous in the hiring process? I mean, one of them is obviously these huge platforms like LinkedIn, where hundreds of people can apply to a job, right? Is that a part of it?
SCHELLMAN: Yeah, I think that’s probably the biggest driver, right? With the dawn of job platforms, it’s so easy to find jobs now, and they help you find the best ones. And then it’s so easy to apply, it often takes seconds to upload your resume, so it democratized hiring for job seekers. But I think on the other side, what companies say is they get a deluge of applications, and they feel they’re drowning in applications.
So Google says they get about 3 million applications a year.
IBM says they get about 5 million resumes a year. Goldman Sachs said a couple years ago that for their summer internship program alone, they got over 220,000 applications. So obviously there aren’t enough humans to look through all of these resumes.
And I should also say that humans are very biased in hiring. So that might actually also not be our optimal solution.
Can algorithms reduce bias better than humans
EL KALIOUBY: Absolutely. Humans are really biased when it comes to hiring. And so I’ll share just some numbers, right? White-sounding names on resumes get 50% more callbacks for interviews than non-white-sounding names. Women applicants are 30% less likely to receive a callback for an interview. And blind hiring increases the likelihood of hiring a woman by 25 to 46%. And then this last one is really shocking to me: basically 48%, which is about half of hiring managers, admit to being biased in their choice of a candidate. So I guess the question I’d love for us to explore together is, can AI do better? Or not really.
SCHELLMAN: I wish I could answer that question, but we don’t have those longitudinal studies. And I do think that would be a huge thing for humanity if any company could do this, so we can actually tell, well, does this AI tool actually work, but also is it better than humans? And I do think that in general, humans are not very good at this. And we’ve seen a lot of research on trying to get bias out of humans by training them.
And that is also not very successful. So I actually do think that we need technological solutions to the problem. We just need to find the right technological solutions. Because I think what we’ve seen in the sort of first generation, if you will, of AI tools used in hiring, is that we’ve seen some misfires and some misapplications. And I think we really need to learn from that and build better tools and test those tools for discrimination and use the less discriminatory algorithms.
What blind hiring should actually focus on
EL KALIOUBY: So the ideal algorithm, or actually the ideal hiring manager, whether it’s a human or a machine I think, ought to really focus on the skills needed for the job and be blind to everything else, right?
Blind to your gender, your ethnicity, your age. Can you talk about this concept of blind hiring and what does it actually mean?
SCHELLMAN: Yeah. The idea is really like, if we should hire someone for the job, what is the most important thing this person needs to do in the job? And we should really hire for that. So that’s usually skills, capabilities, and not what hair color you have, where you’re from, your gender. And I think what sort of happens when we often use AI, we take in a lot of information, right?
So the software looks at a resume. And often the software doesn’t only look at the skills and the capabilities and maybe work history, because that gives you some sense of people’s capabilities as well. But when I talked to employment lawyers who brought in outside counsel to look into some of these tools, they found out that some of these tools, unfortunately, did what AI does best — did a statistical analysis. And then one of the AI tools found out that the name Thomas was predictive of success.
So obviously that doesn’t qualify you for any job, right? Sorry for all the Thomases. What probably happened is that a company gave the tool the resumes of people who are successful in the job right now, maybe hundreds of people, and maybe there were a bunch of Thomases in the pile.
So the AI found the statistical pattern, and suddenly Thomas is a proxy for success. We see this again and again. In another tool it was Syria and Canada that were predictors of success, and that could actually be discrimination based on national origin. Another tool used Africa and African American. Another tool gave people more points if they had the word baseball on their resume, and fewer points if they had the word softball. Obviously, it has nothing to do with the job, and that’s probably gender discrimination, right? So I think that is the problem when we use AI and don’t constrain it, right? When it looks at, in this case, all the words on a resume.
And we see some companies that mask pronouns and that mask names and addresses, and that’s all great. But the problem is that the bias can come in through proxies that seem neutral, that seem non-problematic, and then they happen to be problematic again. So I think that is a thing that is really, really hard.
So we need to supervise these systems. And I think that’s often lacking.
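To make that concrete, here is a minimal sketch, in Python with invented toy data (not any vendor’s actual system), of how an unconstrained resume model can end up weighting a first name more heavily than skills:

```python
# Toy illustration of a bias proxy: the "resumes" and hiring labels below
# are made up, and the historical hires happen to all be named Thomas.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "thomas python sql internship",    # hired
    "thomas java teamwork portfolio",  # hired
    "thomas python ux design",         # hired
    "maria python sql internship",     # not hired
    "aisha java teamwork portfolio",   # not hired
    "maria python ux design",          # not hired
]
hired = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()          # every word on the resume becomes a feature
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Rank words by learned weight: the name "thomas" dominates, even though it
# says nothing about skill -- the model has learned a proxy, not a qualification.
weights = sorted(zip(vectorizer.get_feature_names_out(), model.coef_[0]),
                 key=lambda pair: -pair[1])
for word, weight in weights[:5]:
    print(f"{word:12s} {weight:+.2f}")
```

The supervision Schellmann calls for amounts to constraining what a model like this can see and inspecting what it has actually learned.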
EL KALIOUBY: So we know how AI is affecting the job application process and the role bias plays. But what does that process look like for a job seeker? And is AI helping or hurting their chances for getting that job? That’s after the break.
[AD BREAK]
Why qualified candidates still get filtered out
EL KALIOUBY: In your book, you talk about Sophie, a software developer in her twenties, and you share her employment journey. Apparently she was a star candidate with many qualities that employers seek, but somehow her job applications just never seemed to go anywhere. What was happening?
SCHELLMAN: She had everything that I think if you were a software developer recruiter you would want. She had an undergrad and a master’s degree in information science and software development and UX design.
She had taught a girls’ coding camp. So she was a teacher. She had a portfolio because she had done internships. She had everything you wanted. Also, she’s a veteran and she’s black and she’s a woman, all kinds of things that people in tech would want. We found her through her professor, who put her forward and said, hey, she has a really interesting story that I’m really shocked by too. And I was like, wow, I thought the recruiters would throw offers at you, right? She was part of women in tech groups on LinkedIn and other places. And she’s like, yes, I’m sending 200, 300 applications and don’t hear anything. Which, you know, I had assumed for some jobs, but maybe not software development, because we always hear companies saying we don’t have enough talent in tech. Company leadership says so as well: among leaders who use AI tools, almost 90% said that they know their AI tools reject well-qualified candidates. So we kind of all know that it’s not working super well, or as well as we would hope. And I think there we really need to push into that and build better tools.
EL KALIOUBY: So what happened to Sophie? Did she get a job at the end?
SCHELLMAN: After 146 applications, she did get a job. She was very happy about that. So the way she got hired is she did a little bit of a roundabout thing. She would send in her application and then she would find out who the recruiter was and hit them up on LinkedIn and send them her resume with a message, and she was like, that’s how I got the interviews.
And that’s how she at the end got a job offer. She and I actually did some AI tests, and it turned out that her resume wasn’t necessarily well picked up by some of the AI tools.
So job seekers can check that online: how much overlap there is between the job description and your resume. It’s not foolproof, because we don’t actually know exactly what kind of AI a company uses, but if they use job description and resume overlap, this can be a good indication.
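As a rough illustration of that overlap check, here is a minimal sketch with hypothetical texts and simple word matching; the online checkers she refers to, and any given company’s screening model, may work quite differently:

```python
# Rough keyword-overlap check between a job description and a resume.
# The two texts below are placeholders purely for illustration.
import re

def keywords(text: str) -> set[str]:
    """Lowercase the text and keep words of three or more letters."""
    return set(re.findall(r"[a-zA-Z]{3,}", text.lower()))

def overlap_score(job_description: str, resume: str) -> float:
    """Share of job-description keywords that also appear in the resume."""
    jd_words, resume_words = keywords(job_description), keywords(resume)
    return len(jd_words & resume_words) / len(jd_words) if jd_words else 0.0

job_ad = "Software developer with Python, SQL, UX design and agile experience"
resume = "Built Python data pipelines and SQL dashboards; designed UX prototypes"
print(f"keyword overlap: {overlap_score(job_ad, resume):.0%}")
```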
How platform behavior quietly disadvantages women
EL KALIOUBY: I want to dig into another example of how things can go wrong. In your book, you talk about one particular example that totally hit home for me. Job recruiting platforms may accidentally discriminate against women by amplifying these really subtle behaviors, right? So specifically, men often apply to jobs even if they only partly qualify, whereas women wait until they check all the boxes.
I’ve definitely done that where I look at a job and I was like, ooh, I only have like half of the qualifications, should I really apply? I’m not sure. And I think a lot of men are a little bit more confident in their abilities. We see this a lot in the psychology literature. There’s a confidence versus competence issue.
SCHELLMAN: And in a lot of workplaces, if you appear very confident, that is often read as being very competent, which may or may not be true. So I think we see this here too. And I think it really comes out on job platforms, because they track what the job seekers do on the platform, like every click, what you do. And it was kind of interesting when I talked to John Jersin, who is the former vice president of product at LinkedIn. He said the AI isn’t necessarily built to find the most qualified people.
It is built to find the most qualified people likely to apply. Because somebody like me, who’s been very happy at her job for seven years, I don’t really apply to any job on LinkedIn. So if I was an AI, I wouldn’t put me at the top either, even if I was the most qualified, because I’m very unlikely to apply.
And you want to make a recruiter happy, and they want applicants, not just resumes of people who will never apply or don’t want the job. And how do you measure if somebody is likely to apply? It’s usually with signals on the platform, right? Do you follow companies? Do you message back recruiters?
So it turns out men are a little bit more aggressive in general, not all men, but more than women. Men message back recruiters more. And so that’s a signal to an algorithm that you are likely to apply. So I think we see this like gender-based behavior that really most of us can absolutely not control.
I mean, I’ve definitely changed my behavior. I actually have been messaging back recruiters now and started following companies. And I think we also see now that LinkedIn and other companies have AI built that pushes a little bit against that.
So, for example, women are also often more modest in putting their skills on their resume. And that’s one of the problems with a resume, right? The job posting says, I’m looking for a software developer, and the resume maybe lists Python as a programming language, but as a hiring manager, I don’t know: are you a beginner? Are you a master developer? No one knows that. And I think what also happens often is women are a little bit more modest. So maybe a man takes a couple of months of classes and puts Python on his resume, where women often wait two or three years until they have master-level competency.
So now we have algorithms that infer qualifications. So we see a little bit of a push with AI to level the playing field. But I think this gender-based behavior is really, really hard to overcome.
Why biased training data keeps reproducing unfair outcomes
EL KALIOUBY: What is the process of building these AI algorithms? It all starts with data, right? So tell us more about that.
SCHELLMAN: So in hiring and at work, I think one of the real problems is that a lot of the data that we have is already biased, right? So you could think about like, I want to build an AI that promotes people, that finds people that haven’t been promoted previously because, you know, John always puts forward Alex and we know that Alex isn’t really that competent.
I want some new voices. Like, who are the hidden gems in my company? An AI like that would maybe be based partially on performance reviews. Well, it turns out that performance reviews, which are usually done by humans, are also biased. Women, people of color, people with disabilities are underrated in performance reviews.
Even though they have the same achievements and performance as, for example, white men. So if you already have biased data, if you build an algorithm and don’t supervise it, and have an unsupervised system and don’t test it for what we call disparate impact, that’s a real problem. So we see this again and again, and there really isn’t a lot of unbiased data, and bias mitigation takes a lot of time, takes a lot of work. And that is not always done. I mean, there are sort of best practices, but I think what we see in hiring and in work algorithms a lot is there is guidance from the government.
Unfortunately, it is 45 years old. It’s the Uniform Guidelines from 1978, which tell companies it’s best to compare different races against each other, and gender, men versus women. But we don’t often look at the intersection, like white men versus black women, for example, where we know where the crux is, where the problem is. There’s no real bias mitigation for people with disabilities. We don’t even actually check for that. So there is a problem in these systems.
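For reference, those Uniform Guidelines are where the four-fifths (80%) rule for disparate impact comes from: compare each group’s selection rate to the most-selected group’s rate and flag ratios below 0.8. A minimal sketch with made-up applicant counts:

```python
# Four-fifths (80%) rule check on hypothetical hiring numbers.
def adverse_impact_ratios(applicants: dict, hired: dict) -> dict:
    """Selection rate of each group divided by the highest group's rate."""
    rates = {group: hired[group] / applicants[group] for group in applicants}
    best_rate = max(rates.values())
    return {group: rate / best_rate for group, rate in rates.items()}

applicants = {"group_a": 200, "group_b": 180}   # made-up counts
hired = {"group_a": 60, "group_b": 30}

for group, ratio in adverse_impact_ratios(applicants, hired).items():
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

As she points out, checks like this are typically run one attribute at a time, which is why intersectional bias, say against Black women specifically, can slip through.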
EL KALIOUBY: Yeah, back when I was running Affectiva, because we were building emotion recognition technology, we were very intentional about the data, but also about how we tested these algorithms, right? It is so important that you ask these tough questions of the algorithm, you try to poke holes at it.
But you’re right, it takes a lot of time, it takes a lot of money, it slows down your product launches. And so you have to be really committed to that. When you were doing your research, how top of mind were some of these issues for the companies that you interviewed?
SCHELLMAN: Yeah. I mean, I think enough stories come out of bias in these algorithms that a lot of people are more aware of it. And I think a lot of people, especially companies that buy these tools, I always encourage them to do pilot studies and test the technology.
Don’t believe what a vendor tells you, because they’re selling the technology. And to be honest, a lot of them are venture capital backed. They have to bring a product to market very quickly, right? They might not have the time to actually do all of this testing. And I was also going to ask you about HireVue. I know that HireVue used Affectiva and did the emotion recognition system for hiring.
Did you think that was a good application of the technology?
EL KALIOUBY: HireVue is an AI and human resource company. They enable employers to conduct video interviews where the applicant initially interacts with a computer instead of a human interviewer.
In fact, the Lyft driver that inspired Hilke’s work to begin with could have encountered HireVue’s technology in their job search.
Yeah, that’s interesting. I’m glad you brought that up. So one of the applications we explored at Affectiva was this idea of can AI help de-bias the hiring process and also bring people’s resumes to life, right? Like if you are applying to be a flight attendant for Southwest Airlines and you’re really empathetic and you have very high EQ, that’s very hard to portray in a Word document. How do you represent that? So I love the idea of a video interview. It does bring your story to life and it gives you an opportunity to really showcase who you are. So that was the impetus. The team at HireVue was really focused on leveraging technology to help recruiters sift through all these videos. The great thing is these algorithms are super blind to gender and ethnicity and age.
It’s really looking at your emotional reactions. But if there’s a little bit of bias in any of these algorithms, you’re right, it’s going to be deployed exponentially and exacerbate a lot of biases that exist in society. So we were very thoughtful about that. We did end up pausing our partnership with them. But I still love the team. And one of the things that I also realized in this whole process is who’s building these algorithms really matters, right?
The diversity of the team around the table is important. So can you say a little bit about that?
SCHELLMAN: I do think that diversity in teams is really, really important. One example: in hiring, I played a lot of these games and did a lot of video interviews. In one of the video games that I played, which was supposed to find out my personality and capabilities, sort of the soft skills, so to speak, one of the things I had to do was hit the space bar as fast as possible in a certain amount of time. When I was doing this, I was asking myself, what does this have to do with the job? That’s odd. I’ve never had to hit the space bar as fast as possible, right? But then when I played the game with somebody who was quadriplegic, he was like, what about people who have a motor disability?
And I was like, oh, yeah, you’re totally right. What would happen to them if they, maybe they can’t hit the spacebar as fast as possible, but does that mean they’re less, you know, agile or, whatever — as a software tester, right? I have no idea how diverse the team was that built this algorithm, but it felt like they’re probably missing a lot of these questions here.
So I think diversity is really important. I have unfortunately also found out that even though we as humans can think of a lot of ways that algorithms can be biased — man, bias proxies can come from anywhere.
How seemingly neutral signals become bias proxies
EL KALIOUBY: So what’s a bias proxy? What do you mean?
SCHELLMAN: Yeah, so a bias proxy is, for example, something that indicates that you’ll be successful. So for example, I talked to the former heads of talent acquisition at Walmart. One of their core objectives is, we need to hire people who stay longer in the stores, right?
So they had found out in a survey that if you have a friend or an acquaintance at a store, you stay longer. So that looks pretty neutral, right? They were thinking maybe we should use this as one of the criteria. So they did a pilot test, and it turns out having a friend or an acquaintance at the store is indeed very predictive. But when they looked at the results and tested them for race and gender, it turned out that mostly Asian Americans had acquaintances and friends at the store, and African Americans did not.
So even something that looks like a neutral proxy, a neutral indicator of success, can actually be very biased, because you would have discriminated against African Americans in this way without ever intending it.
And the law in the United States makes no distinction. You can intend or not intend discrimination, and it doesn’t matter: if there is discrimination, federal investigators might investigate you and bring a case forward. We haven’t seen a lot of investigations in the space, because the way AI works, it’s a little bit more obfuscated than maybe traditional assessments, where, for a firefighting job, we knew you had to take 200 pounds and carry them and move them from A to B.
Right now the problem is that maybe I’ll play a game, maybe I know there’s an algorithmic assessment, but how do I know what is actually being assessed? I have no idea as a candidate. And also, is there harm being done, am I being discriminated against? I have no idea. I get rejected or I get put in the next round.
And that’s also obviously very ubiquitous as part of the hiring process, right? Like, we get rejected all the time. So as a job candidate, I don’t know why. And usually in a court of law, you have to show that you’ve been harmed. So just being rejected is really often not enough.
So we haven’t seen a whole lot of litigation in the space.
EL KALIOUBY: AI is increasingly being used in the hiring process. But applicants are not standing by; in fact, they’re finding creative ways to make AI work for them. That’s next after a short break. Stay with us.
[AD BREAK]
How job seekers are learning to game AI systems
EL KALIOUBY: So humans are trying to outsmart AI by, for example, using white fonting.
SCHELLMAN: Yeah. Totally. White fonting. That is an old technique, right? White fonting was even around — I think when you started looking into this, there were already recruiters that were really upset about this. So, you know, if you didn’t have a skill, but that was a requirement, you would put it in white on your resume.
So a human wouldn’t be able to see it, but a machine would pick it up, right? Because it ingests all of the words in a resume. So maybe it would put you on the yes pile.
Recruiters will find out and be really upset about the white fonting. They look at the resume and are like, how did this person get on the pile? But what we see now is that a lot of job seekers felt really helpless and hopeless for a long time: they sent their applications into the ether and never heard anything, or maybe they heard something a month later. It’s a very isolating experience. And I think they feel like they have a little bit of power back with ChatGPT and other LLMs.
EL KALIOUBY: Tell us more about that.
SCHELLMAN: So ChatGPT is really great at optimizing your resume, generating cover letters, really helping job seekers prepare for a job. A lot of people also query ChatGPT: what are the most commonly asked questions in this kind of job interview? What are maybe the best answers? And they prepare themselves. I’ve seen people use it in one-way video interviews, because you usually get a couple of minutes before you have to answer.
You know, some time to think about it. So they query ChatGPT and use the answers, which they think are probably better than what they would come up with. I’ve deepfaked myself in video interviews, where I wasn’t in front of the camera, so no one was in front of the camera.
I was next to the computer and I typed in my answers and had a deepfake—
EL KALIOUBY: Oh, wow.
SCHELLMAN: —generate the voice.
EL KALIOUBY: Did you get the job?
SCHELLMAN: I was actually highly ranked. So it’s kind of interesting. And the tool did not detect that there was no human in front of it, right? And I think that’s one of the things that we see: there isn’t enough attention paid to security inside these systems by a lot of vendors and companies. So I think there’s definitely room for improvement here. We also now see algorithms that can actually apply for people.
They can upload your resume to hundreds of postings in an hour. I mean, there’s only anecdotal evidence of this, right, but we have now heard from recruiters that they’re getting up to 50% more resumes on top of the many they were already getting.
So the deluge is getting worse.
When workplace monitoring creates stress instead of productivity
EL KALIOUBY: I really wanted to talk about the monitoring in the workplace. How is AI being used to monitor employees for productivity?
SCHELLMAN: I think a lot of the tools to monitor employees have been around for a while, and they certainly precede the pandemic. But the pandemic really caused a boom in this, right? A lot of folks were working from home. And I think a lot of managers were suddenly worried, like, is this person working? Are they really at their desk all day? So there was a real rush to buy some of these software tools and put them on people’s computers at home. So we see keystroke logging of everything that people do. We’ve seen snapshots of people’s faces to check that they’re still in front of the computer. The New York Times has done an investigation. They found that eight out of the ten largest employers in the United States use this kind of technology. And we know from some vendors that very large Fortune 500 companies use them. And they not only can track every keystroke, they can also do sentiment analysis on Slack channels, like private Slack channels where maybe you vent. That is all fair game to look at. And some of these tools can also track behavior over time.
EL KALIOUBY: Yeah, it’s so interesting to me because the technology can be used — it’s neutral. It’s how you decide to use it, right?
SCHELLMAN: Yeah. And unfortunately, what we also know from research is that when you start tracking employees, they probably find out, and it leads to a lot of anxiety and stress and actually doesn’t increase productivity.
It leads to a lot of what we call productivity theater, you know, where you have like a mouse jiggler that just moves your mouse while you take your dog for a walk or something.
And I think we see that unfortunately again and again, that we really have to critically think, is this a good application of the technology? And does it have the intended results that I want? And I think a lot of these tools — like, we can track everything that happens on a computer.
But is that actually helpful? Is that actually helpful to understand if somebody is performing? And I would say, probably not.
And some job seekers comment online that they feel like, you know, we knew that companies have been using AI for a long time, and finally we get to use it too. But the question is, where does that leave us?
Like everyone’s gaming each other. Like, how can we still make quality hires? And I think that’s a real question that we really haven’t answered.
EL KALIOUBY: Last question. If you could have AI do anything for you, what would it be?
SCHELLMAN: Oh, my God. I’m actually the kind of person that’s testing technology. I’m also testing AI tools for journalists, because I get reams of data from freedom of information requests, and I actually built an AI tool to go through that. So I am not anti-AI at all. I think it’s a transformative technology. I think we just have to know how to use it and how to apply it.
And I think we often — we’re humans. So we go into this like, oh, technology has solved it all! And I think that is the wrong stance to take here. We have to be much more critical and really also think through, if I feel weird about using that kind of data, yeah, don’t use it.
EL KALIOUBY: And kind of think of this as a human-machine partnership, right? Really elevate the role of humans in this whole process. But yeah, thank you.
Thank you for joining us.
SCHELLMAN: Yeah. Thank you for having me.
EL KALIOUBY: We talked about A LOT in this episode. If it left you with questions, let us know! What about AI concerns you?
Episode Takeaways
- Investigative journalist Hilke Schellmann traces her book The Algorithm back to a Lyft driver who was unnerved after discovering his first job interview was with a prerecorded AI system.
- As Rana el Kaliouby and Schellmann unpack modern hiring, they argue AI spread because online job platforms created a flood of applicants that companies now struggle to review fairly.
- The conversation shows how hiring algorithms can miss the point, latching onto proxies like names, locations, or hobbies instead of the actual skills a role truly requires.
- Through stories like Sophie’s stalled software job search, Schellmann explains how candidates are learning to work around AI filters, even as those systems can quietly amplify gender and racial bias.
- By the end, the episode widens from hiring to workplace surveillance, warning that AI can just as easily monitor keystrokes and sentiment as empower people, depending on how humans choose to use it.