How AI is changing filmmaking in Hollywood, with Tom Graham
Cutting-edge special effects have brought some of the highest-grossing films, like “Avatar” and “Avengers: Endgame,” to life. Historically, these effects have been time-consuming and expensive to produce, but that’s changing. Now, AI offers more options. Tom Graham, co-founder and CEO of Metaphysic, is at the forefront of reshaping Hollywood with AI, building tools that deliver more fidelity and realism at a lower cost. Graham joins Pioneers of AI to talk about how AI is transforming the entertainment industry, how Metaphysic’s technology works, and his collaboration with director Robert Zemeckis on the film “Here.”
About Tom
- Co-Founder & CEO of Metaphysic AI, a 2024 TIME100 company
- Brought AI de-aging to Robert Zemeckis' film Here with Tom Hanks & Robin Wright
- Won a 2024 VMA for Eminem's Houdini video, reviving Slim Shady
- Delivered first live generative AI on broadcast TV at the 2024 VMAs
- Ex-lawyer pursuing copyright registration for his AI likeness with the USCO
Table of Contents:
- Why Hollywood is wrestling with consent and control
- Metaphysic's vision for scalable, human-centered media
- Why powerful AI tools are not being opened to everyone
- How a viral Tom Cruise deepfake launched a company
- Why AI de-aging changes the economics of visual effects
- What real-time AI unlocked on the set of Here
- The hidden details that make digital humans believable
- Why AI-generated layers may soon reshape all video
- Why your face, voice, and memories may become priceless data
- The rules and disclosures realistic AI will require
- Episode Takeaways
Transcript:
How AI is changing filmmaking in Hollywood, with Tom Graham
TOM GRAHAM: At the VMAs, we won a VMA award for the music video we did with Eminem, where we brought Slim Shady back.
RANA EL KALIOUBY: Tom Graham is the cofounder of the AI company Metaphysic. And they did bring back Slim Shady at the MTV Video Music Awards.
GRAHAM: It’s the song Houdini. In the music video, he is kind of fighting a battle against himself, and himself is Slim Shady, and then his today self and Slim Shady merge into this kind of like hybrid. It’s a lot of fun, the song’s a lot of fun.
EL KALIOUBY: Slim Shady is the alter-ego of rapper Eminem, but more specifically the younger version of the artist. Metaphysic’s technology allowed a realistic, younger version of Eminem to appear alongside the actual rapper. But that’s not all.
GRAHAM: During the VMAs where his performance opened the VMA awards this year, we had live, real time, a Slim Shady, where it’s a young performer who’s fantastic kind of rap battling with Eminem on stage. But what’s happening there is that the camera is taking an image of Slim Shady and Eminem next to each other and putting it through our computer, and we’re adding to that image the young Slim Shady face, head and shoulders onto the impersonator. And then it was going out live broadcast television.
EL KALIOUBY: So, people in the theater saw just the impersonator. But anyone watching the live feed from home saw an AI version of young Slim Shady, generated in-camera, in real time.
GRAHAM: And I think that’s the first example of Generative AI feeding into broadcast television. It’s certainly the first time that’s ever happened.
EL KALIOUBY: Using AI visual effects during a live televised performance is just the beginning for Tom. His company Metaphysic is changing the future of Hollywood.
They’re making AI-powered visual effects for movies and TV. On this episode, we talk about AI in the entertainment world, how Metaphysic’s technology works, and its most recent debut in the new Robert Zemeckis film “Here” with Tom Hanks and Robin Wright.
I’m Rana el Kaliouby and this is Pioneers of AI – A podcast taking you behind-the-scenes of the AI revolution.
[THEME MUSIC]
Rana el Kaliouby: Welcome, Tom.
GRAHAM: Thanks for having me. It’s good to be here.
Why Hollywood is wrestling with consent and control
EL KALIOUBY: So I just want to start kind of with a broad lens on what’s happening with Hollywood as it relates to AI. We’ve seen strikes from the actors and animators unions and AI is a major sticking point with all of that.
Even back in February, which seems like forever ago in the world of AI, Tyler Perry put a pause on expanding a production studio that he was working on because he could see that AI was going to change everything, especially the traditional ways of making film and TV. So how would you describe Hollywood’s relationship to AI right now?
GRAHAM: Yeah, I think that the most important thing for me is to focus on the human part of that relationship.
EL KALIOUBY: Love that by the way. We’re all about human centered AI here, so that’s—
GRAHAM: If you see an AI-generated version of yourself, and it’s very realistic, and that AI-generated version of you is doing something that you didn’t do, it is very concerning. You have a strange feeling, not just of uncanniness in terms of your relationship with yourself in reality, but you feel kind of sick and unwell. At the core of it, it’s really disenfranchising you from controlling your body and the outward expression of your body.
So when you come to Hollywood, people’s performance is really their body and who they are. They get to do that and they get to choose to do that in front of a camera or in front of an audience. So anything that disintermediates that person’s control over their performance is not a good thing.
And so we’ve always focused on the idea of consent when it comes to creating an AI version of somebody. The reality today is that, kind of up until today, if we just suggest that today is like a bit of an inflection point, all of the technologies to make realistic AI generated human performance have mostly been outputs on top of human performance.
So like a deepfake. You take one human performance and you put something on top of it to change the appearance of it, which is manifestly different than using AI to fully create the performance from scratch with no initial human input. So that’s kind of up until today. It moves a lot of the use of AI in Hollywood into the realm of it’s a tool that is used creatively by people with their consent, just like CGI or VFX.
So, there is an initial period of generative AI exploding into the world, and people feeling that deep emotion of concern, which is the right thing to feel, because that technology is very powerful. We should harness that concern to drive institutional policy regulation responses. But I mentioned this inflection point. We are certainly moving towards a world where human performance in its entirety can be created by these algorithms in a way that regular audiences might not be able to tell the difference. And so that’s where consent becomes even more important.
Metaphysic's vision for scalable, human-centered media
EL KALIOUBY: Becomes really important. So give me the elevator pitch for Metaphysic.
GRAHAM: Yeah. So back in early 2021, we were experimenting with kind of AI-generated content in the context of deepfakes and autoencoder architectures.
EL KALIOUBY: Autoencoder Architectures. An autoencoder is a type of artificial neural network – the machine learning algorithm that is the basis of a lot of the AI we see today. An autoencoder’s superpower is that it’s really good at learning data representations in an efficient way. It’s useful for tasks like facial recognition.
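To make the idea concrete, here is a minimal autoencoder sketch in PyTorch. The layer sizes and the flattened 784-pixel input are illustrative assumptions, not Metaphysic’s architecture; the point is simply that the network learns to squeeze an input into a compact latent representation and reconstruct it.

```python
# Minimal autoencoder sketch (illustrative only, not Metaphysic's model).
# It compresses an input into a small "bottleneck" vector and then
# reconstructs the original from that vector.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder: squeeze the input down to a compact latent code.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # Decoder: rebuild the input from the latent code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)      # the efficient learned representation
        return self.decoder(z)   # the reconstruction

model = Autoencoder()
x = torch.rand(8, 784)                        # a batch of flattened images
loss = nn.functional.mse_loss(model(x), x)    # train by minimizing reconstruction error
```

Face-swap pipelines in the deepfake tradition typically train a shared encoder with a separate decoder per identity, so a face encoded from one person can be decoded as the other.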
GRAHAM: And we built the company to kind of build the software and infrastructure to scale photo realistic AI generated content to kind of everyone on earth.
EL KALIOUBY: Tom is talking about democratizing access. Metaphysic wants to enable everyone to use and create realistic AI-generated media.
GRAHAM: That’s the mission. When you understand that it’s kind of a data science pipeline to create this content with algorithms, it’s a software problem. And so you quickly see that premium-quality content that looks like reality, content we can immerse ourselves in and that elicits the emotional responses reality does, becomes scalable, because that software can scale. The hardware is there. It’s not a hardware problem going forward. It is a software problem. So the thing that I’m really excited about is: Can we create immersive, personalized experiences which are like a Star Trek holodeck, but content-wise, maybe from our memories, maybe from our loved ones’ memories.
Could you relive your kid’s first birthday party, or your first birthday party? And can you interact with that? Inside that idea is the data that we capture in the real world becomes kind of a repository of human knowledge and understanding. It’s our library of Alexandria. And so, if we can harness that to create that experience, communication between future people, then you can build more empathy, you can build lots of positive human emotions. There are many bad things that can happen also, and we need to really diligently, from the top down, from regulators through the people designing products, think very carefully about how we harness a technology and make it safe. But the benefits I think are incredible.
Why powerful AI tools are not being opened to everyone
EL KALIOUBY: Today, Metaphysic is not directly consumer facing, right? And so it’s not a tool where I can download Metaphysic and create a digital twin of myself. But it sounds like this is where you would like to go and this is where you see kind of the roadmap for the future.
GRAHAM: Yeah, we chose not to kind of open source or make the set of tools available for retail applications because of the bad things that people can do. Really. So, from political misinformation to non-consensual, AI-generated image-based abuse, all of those things are immediately harmful to individual people, or have a broader social impact. If we can create content that looks exactly like reality and people can’t tell the difference, this is not something that we should open source or allow to be used in a general context without the right type of content moderation.
Increasingly, platforms are able to deploy content moderation in a way that may meet those needs. But I think that’s a couple of steps beyond where we are today.
How a viral Tom Cruise deepfake launched a company
EL KALIOUBY: I do hope we get there in a way that’s safe for everyone. But for now, Metaphysic is using their technology for some pretty awesome applications in the entertainment industry – like the Slim Shady clone.
And it all started with a viral deepfake of Tom Cruise playing golf.
GRAHAM: So, going back to when we started the company nearly four years ago in early 2021, there is no one who has any idea what we are talking about.
Period. My co-founder created Deep Tom Cruise, which was the first AI-generated content where hundreds of millions of people thought, oh, that must be Tom Cruise, but it wasn’t actually. It was an AI-generated version of him on top of a fantastic performer.
When I saw Deep Tom Cruise, I rang that guy up, Chris Umé, my co-founder, and asked him, what are you doing that’s different? He’s like, oh, it’s kind of data set on top of data set, algorithm on top. Okay, that’s a data science pipeline, great, that’s software.
How many people know how to do that? And he’s like, seven? So, four years ago, nobody knew anything about how to harness this technology to create content. So much so that we’re looking to hire people, and there’s only a couple of people working on master’s theses that have some kind of relevance to what we’re doing, because they hadn’t even got to PhD level yet. About two years ago, GPT and generative AI blew up, and Stable Diffusion kind of popped up.
And so increasingly today, everyone is focused on how to create content with AI, but the data science nature of this pipeline going from real world data to final outcome that looks real is a very difficult process. And there are no software components, libraries, primitives that you can just plug in to do this.
So you have to build this from scratch. That’s what we’ve done. But you can build the software, and then any data scientist will tell you, well, I’ve got all the tools, but one data scientist is better than a different data scientist, right? Like, how do you featurize the data set? How do you parameterize the model?
How do you bring these things together? How do you solve problems? That is a difficult skill.
EL KALIOUBY: Tom says that because this tech is so new, we haven’t seen tons of commercial applications of it yet. But there is one pretty big project using Metaphysic that you may have already seen on the silver screen.
After a short break we talk about how Metaphysic made Tom Hanks and Robin Wright look like they’re back in their 20s.
Stay with us.
[AD BREAK]
Why AI de-aging changes the economics of visual effects
EL KALIOUBY: So, the movie Here came out on November 1st.
It’s a Robert Zemeckis film with Tom Hanks and Robin Wright. And I am so excited to see it. Your company plays a major role in the filmmaking process. So first of all, congratulations. In the movie you see the actors across a wide range of ages, right? And Tom Hanks, I believe, is 68 and Robin is 58 years old today. But we see them at a much younger age, right? So I’m so curious, how did your company help with that filmmaking process?
But to kind of contextualize this, I want you to take us before Metaphysic existed. Like, what would they have done as a team if your technology was not around?
GRAHAM: Yeah. So, traditionally, if you’re trying to create digital humans, digi doubles, in a VFX CGI computer graphics sense, the prevailing technology is 3D modeling.
So you might do a photogrammetry session, where you capture images of different angles of somebody’s face.
EL KALIOUBY: The one where you’ve got the little dots?
GRAHAM: Yeah, so you might start by capturing images of a face without the dots, and then kind of mold those, shape those onto a 3D model that you’ve designed with software, and then you might animate that 3D model of someone’s head with the motion capture, where now I’m doing the performance with the dots on my face, and I take the dots and I map them onto the 3D model and I get the 3D model to move just like that.
So at the core of that is this 3D model which is kind of like heuristically generated human programming. And imagine that if we’re programming something we can create tens of thousands of different connections. Exactly like a rig on a face. There might be 200 different things, levers that you can pull, cheek up, nose to the side.
But in reality, when faces move, there’s billions and billions of combinations of things. And the interplay between a smile and how your ears wiggle, or how the light on your forehead changes, is so complex. This new genre of technology is essentially neural nets that are trained on imagery and video from somebody’s face, maybe half an hour of video or something like that.
You can train the neural net to understand the interplay between those different parts of the face as they move. Such that, if you kind of go into that neural net – we conceptualize it as like a 3D model of a brain, like a human brain – and you grab the neuron that is like, add cheeky smile number 5, right, 30 percent more cheeky smile.
Because of the entanglement between all of the different expressions and facial movements around cheeky smile, when you grab cheeky smile, you drag everything in the direction of what would happen if you were smiling.
EL KALIOUBY: You see a few wrinkles around the eyes and like, you know.
GRAHAM: Exactly. And that’s profound in the context of comparing it to 3D models.
Because when you do that, in the context of this neural net, and you go through the process of inference, which is like creating the image from what it’s trained on, the process of inference is incredibly cheap. Like, fractions of pennies. Today, we can run live, kind of 1K by 1K inference, for a face swap or a head and shoulder swap from a gaming laptop at 100 frames a second.
Like, that’s how cheap it is to create one image. But if you’re trying to create that same image in a CGI 3D modeling sense, you need to compute all the different movements of all the different parts of a face, and then on top of that, you have to compute the skin texture and then the lighting and ray trace all the lighting.
Some models are like down to the single hair, right? How does a hair move? You get all of that for free from the neural net that understands implicitly all of those things. It’s a million times cheaper. That’s the core. That’s the real driver that means that this technology is going to be creating content for every single one of us.
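To illustrate the cost point, here is a rough timing sketch: one frame of neural rendering is a single forward pass through an image-to-image network. The tiny convolutional stand-in below is not Metaphysic’s model, and the frame rate you measure will depend entirely on your hardware, but it shows the shape of the computation compared with explicitly simulating geometry, skin, hair, and lighting.

```python
# Time a toy image-to-image network at 1024x1024 (a made-up stand-in,
# not Metaphysic's model). The per-frame cost is one network pass.
import time
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

generator = nn.Sequential(                  # stand-in "face swap" network
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
).to(device).eval()

frame = torch.rand(1, 3, 1024, 1024, device=device)   # one camera frame

with torch.no_grad():
    generator(frame)                        # warm-up pass
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    n_frames = 20
    for _ in range(n_frames):
        generator(frame)                    # one neural "render" per frame
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"{n_frames / elapsed:.1f} frames per second at 1024x1024 on {device}")
```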
What real-time AI unlocked on the set of Here
EL KALIOUBY: But I want you to take us to the movie set, right? Were you there? Like, was the technology doing this modeling in real time?
Because my understanding is that Tom Hanks and Robin Wright were there. Like, they’re acting as if they’re their 18 year old selves when they met, and you’re capturing this video and then in real time, basically generating their 18 year old selves.
GRAHAM: Yeah, fundamentally, audiences as they’re watching the film, they’re falling in love with Tom and Robin and some of the other characters’ performance. And that’s them on stage. And there is something magical about one person’s performance versus what you can generate with AI today, or even other people trying to imitate that person’s performance.
There’s a reason that Tom Hanks and Robin Wright are amazing actors. And so their performance that we see is them on stage. And that’s amazing. Then there is this layer on top that can kind of work as a tool to take their performance and just make it look like it’s a 20 year old version of them.
But a large part of that was really them having to act like they were 20-year-olds also, so that their visuals lined up with how—
EL KALIOUBY: Of voice and movement.
GRAHAM: Yeah, and like how you kind of hold your face and things like that. We could do that live on set. So, they could see themselves in kind of like the youth mirror, and adjust their performance live in real time.
And then, the director Bob Zemeckis, he could see what it would look like in real time on the little screen as they’re going. It comes back to this kind of like live real time technology, which is amazing. You could never do that with CGI VFX because the compute cost of generating, rendering each image is so—
EL KALIOUBY: So it has to happen after the fact. It probably takes days or weeks, right?
GRAHAM: Yeah. So from a production point of view, this is a really amazing development. And that’s really part of that promise of this bundle of technology for creating content. It’s just really fast and cheap. So you can scale it. You can do amazing.
The hidden details that make digital humans believable
EL KALIOUBY: Yeah. So this is actually really cool because you must have competed with a whole bunch of special effects studios to win this project. It sounds like you didn’t just win on cost; it’s almost like you’re empowering the production team to be in control of this content as it’s being generated. What did you need to unlock about human aging to make it visually compelling and real?
GRAHAM: So ears are a problem.
EL KALIOUBY: Really?
GRAHAM: Yeah, ears and nose. So, as we get older, our ears grow and our noses grow. And so, if you’re trying to de-age somebody, you have these things sticking out the side of your head, which are now on screen bigger than they were 20, 30 years ago. So that’s an interesting problem. How do you deal with that? The other thing which is counterintuitive is, generally when we think about impersonating somebody, we think about what is inside the face, but that’s being kind of replaced by the AI as a tool on top of the expressions rendered by the human underneath.
So it’s actually the face shape which becomes the most important thing. So when we’re looking for someone to act as a stand in to then put a person on top of, face shape is really, really important. And you can imagine that in the context of prompting that people are very familiar with, the face shape is like the prompt. If you have a good prompt, it is easier to get a good outcome. If the chin’s really, really too big, you might need to use other AI tools to shrink the chin a little bit, to adjust the prompt before you put it through the final algorithm.
EL KALIOUBY: I spent almost my entire career building emotion recognition technology. And face shape is important, but it’s also these wrinkles on our face that make it so that if you don’t have this kind of wrinkling and texture, that’s where the uncanny valley happens.
How did you solve for that? And I’ll push you on this a little bit. Did you even solve for it? Are we done with the uncanny valley or not yet?
GRAHAM: I think it’s fair to say that you get all of that for free from the architecture of these neural nets, such that what you’re training into them is tens of thousands of images, maybe on top of a pre-trained model that is trained on millions of images. So it understands a human face and how the different parts of it work.
And then you train in a specific person, and it’s just looking at pixels. Pixels over pixels over pixels. And so what we see as kind of like a micro expression is just a fundamental structure in its understanding of what goes with what. And so, you can’t avoid it. We have technology that can kind of unpack these neural nets and make them navigable, so we can pinpoint specific expressions. If I add cheeky smile number five, and it’s entangled with all of the micro expressions and everything else, down to a very, very high resolution, a fine layer of detail, it would be harder to take that detail out than to just let it come along with what the neural net imagines is the outcome from that prompt. More smile.
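The “grab an expression and drag it” idea can be sketched as latent-space editing: encode a frame, nudge the latent code along an expression direction, and decode. The encoder, decoder, and smile direction below are hypothetical stand-ins (random, untrained tensors), not Metaphysic’s tooling; in a real pipeline the direction would come from a trained face model, and the entangled micro expressions would come along with the edit.

```python
# Latent-space editing sketch with hypothetical, untrained stand-ins.
import torch
import torch.nn as nn

encoder = nn.Linear(3 * 64 * 64, 128)        # image -> latent code (stand-in)
decoder = nn.Linear(128, 3 * 64 * 64)        # latent code -> image (stand-in)
smile_direction = torch.randn(128)           # a made-up "cheeky smile" axis

def add_expression(frames, strength=0.3):
    """Drag face crops 'strength' of the way along the smile direction."""
    z = encoder(frames.flatten(1))            # latent code per frame
    z = z + strength * smile_direction        # e.g. 30 percent more smile
    return decoder(z).view(-1, 3, 64, 64)     # entangled detail comes along with it

frame = torch.rand(1, 3, 64, 64)              # a dummy face crop
edited = add_expression(frame, strength=0.3)
```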
EL KALIOUBY: What did Tom and Robert think? Did they embrace the technology? Were they skeptical?
GRAHAM: I think that Bob Zemeckis is such a frontier running director in his use of technology for his entire career.
EL KALIOUBY: Yeah, I mean, he did Back to the Future and Forrest Gump.
GRAHAM: And all of motion capture has really kind of been driven by Bob Zemeckis and what he’s been working on over a long time.
He and the team really embraced this.
But it’s hard to explain how much of a step function change this technology is in the context of making humans that look real. If you went from a very, very expensive, tens of millions of dollar process, to now you can do it live, real time, on set, and you can see it and it looks perfect, and over here it looks a little bit weird.
Wow, you can do many, many creative, amazing things with that. So across the board, I think the experience for directors and actors was a really, really positive one.
Why AI-generated layers may soon reshape all video
EL KALIOUBY: Love that. Is this a major signal to Hollywood? Do you think it’s going to change a lot of things?
GRAHAM: I think that if we go back to that economic statement of fact about how cheap it is to create content that looks like reality with this set of technologies, it is very likely that 99.9 percent of all of the content on the internet 10 years from now will be AI-generated.
Even in the context of live sports. So you’ve got the basketball, and you take that feed, but then the feed will run through a set of algorithms which pump out the content that you look at.
But the algorithm layered on top is changing all of the logos and the sponsorships to serve ads directly to you. It will look like it’s perfectly embroidered on the jersey as they’re running around bouncing the ball. So it will look perfectly like reality, but really, that’ll be an AI-generated layer.
And when you think about it in that context, yeah, obviously people are going to do that, right? It’s just serving ads, like everyone’s going to.
EL KALIOUBY: Just serving ads. Serving ads, personalized ads as you—
GRAHAM: And so you won’t notice that it’s not real in a sense.
But the computational efficiency will mean that you could probably do that rendering at the edge — right on your mobile phone, in real time, as you’re watching.
EL KALIOUBY: We’re going to take a short break. When we come back, we peek behind the curtain and talk to Tom about the data that powers Metaphysic. Back in a minute.
[AD BREAK]
Why your face, voice, and memories may become priceless data
EL KALIOUBY: So you’re using your own networks. Where do you get your data from?
GRAHAM: Yeah, data is the most important thing. And if there is any message for people going forward in the context of this technology, it is this: all of the data from your life, your face, your voice, but also the memories, the video from your kid’s first birthday party.
It’s really important that you capture it today and hold onto it dearly, because in the near future, people are going to help you make that into content experiences that you probably care about. But you should control it. You shouldn’t give it away. So in the context of a face, you’re kind of looking to gather data, 4K video, more or less.
And if we were doing a data capture, it would be: we have these cameras set up right here. I would be kind of interviewing you, putting you through a half an hour program. Fifteen minutes, I’m trying to get you to talk about things that you enjoy, and smile, and maybe I’ll get you to change the angle of your face a little bit, to capture more of the different angles. And we’re looking for the movement between expressions. Where 3D modeling would take a static understanding of expressions, take a photograph of you smiling, and a photograph of you frowning, and then use the model to interpolate between them, the movement between them. Here we want the model to understand frame by frame every micro movement between those expressions.
And in half an hour with five or six cameras you can gather all the data that you need to make a perfect version of somebody for any point in the future of human history. Because the algorithms get better, you need less data, basically. It’s only really half an hour of data.
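As a back-of-the-envelope check on the size of that capture session, the arithmetic below assumes a 24-frames-per-second capture rate (an assumption; no frame rate is given) across five cameras for half an hour.

```python
# Rough frame count for the capture session described above.
minutes = 30
cameras = 5
fps = 24           # assumed capture frame rate

frames = minutes * 60 * fps * cameras
print(f"{frames:,} frames of face data")   # 216,000 frames
```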
EL KALIOUBY: So let’s talk about some of the implications, the legal and kind of IP implications of this technology. Once the studio has created a digital version of an actor, whether it’s their kind of face likeness or their voice likeness, who has ownership of this digital twin?
GRAHAM: Generally this is in contract law. And let’s say that a regular user is going through, they’ve got the Vision Pro from Apple or the Meta headset and they turn it around and they take camera footage of their face and they’re making a little Gaussian splat avatar.
In the terms of service of that device, there is probably something along the lines of: Apple or Meta own the data from your face to put it into their model. That’s generally kind of the terms of service status quo today. I think we certainly need to move to a paradigm where individuals have a lot of control over the data that can be used to represent a version of themselves in a realistic sense, where no one can tell the difference. One thing I think is a good analogy that’s happening today is 23andMe. What’s going to happen to all that DNA data, right? It’s great that we can have really trusted relationships with large organizations today and give them our data.
But when I did 23andMe, I thought that that was locked in. I didn’t think that maybe today they might sell that to somebody. And so, there is a layer of regulation that should happen there, to help individuals have rights and control over how they’re portrayed, but also the data that represents them.
The rules and disclosures realistic AI will require
EL KALIOUBY: So we care deeply about responsibility in AI on this podcast. What does that mean for you and for Metaphysic?
GRAHAM: Yeah, I think that from the beginning, we were the first people to really be able to create something that was indistinguishable from reality. And immediately, responsibility is the thing that you have to think about there. Because people have used these types of technologies to do very, very bad things for quite a long period of time. You know exactly what people would do with that power, because they have shown you over years.
And so, that has been fundamental to our mission. That’s why we didn’t open source our products. Why it’s been difficult for us to find retail consumer experiences to generate which are safe. And why we, with other members of the industry, have been very supportive of a very strict policy, understanding that you should only create someone’s AI generated likeness or voice with their consent. On a more fundamental level, you can understand how caricature or parody are very important First Amendment elements to speech. But I don’t know what the First Amendment or fair use copyright arguments are for creating content that’s indistinguishable from reality.
Like, I don’t know what public good that serves. So I’m kind of on the side of like, maybe there shouldn’t be a fair use version of something that looks exactly like me. What’s fair to me? In a free speech context, what’s the public good that’s served by putting people in a position where they might be fooled, or putting an individual in the position where they’re not in control of their body? So it’s a new frontier for both of those areas of jurisprudence.
EL KALIOUBY: Do you think we should disclose when it’s the AI version of the person?
GRAHAM: Yeah, I think fundamentally we shouldn’t try to fool anyone. Also, audiences don’t want to be fooled. If you go into a Star Wars movie, you know that that’s not real, right? But no one feels really great when they are watching what they think is an authentic interaction between two influencers, but one of them’s AI, or something’s just not real. It’s not a great strategy from a business point of view, but labeling is, I think, a very important part of a matrix of safeguards and social norms that we need to have to create a more safe information environment.
EL KALIOUBY: Alright, last question. If you could have AI do anything in the world for you, what would you have it do?
GRAHAM: Book flights, organize logistics.
EL KALIOUBY: That we still don’t have that.
GRAHAM: Life admin. The US has a health system that is challenging. The UK’s system is definitely better, but the NHS is just difficult to manage. You might be spending hours trying to get through stuff. Everything to do with health. How we do preventative health.
When you turn 40, book a scan. Just that administrative process. Just solve that problem. I don’t need all this other stuff. Just solve.
EL KALIOUBY: Just so, yeah, there ought to be an AI agent that can just do, yeah, exactly. We’ve—
GRAHAM: Lots of people are working on it, right? There’s some regulatory layer there that makes it slower, but.
EL KALIOUBY: We’ll get there. Yeah. That’ll be awesome. Well, thank you so much for joining me on the podcast.
GRAHAM: For having me.
Episode Takeaways
- Metaphysic cofounder Tom Graham opens with the company’s headline-grabbing work for Eminem, including a live VMA performance that brought a young Slim Shady to broadcast TV in real time.
- Asked about Hollywood’s uneasy relationship with AI, Graham keeps the focus on people, arguing that consent and control must come first as digital likenesses grow ever more believable.
- He describes Metaphysic as a software company for photoreal AI media, with a long-term vision of immersive, memory-driven experiences, while holding back consumer tools over safety concerns.
- The conversation then turns to Robert Zemeckis’s film Here, where Metaphysic let Tom Hanks and Robin Wright see de-aged versions of themselves live on set rather than waiting on costly CGI.
- Graham closes with a warning and a promise: your face, voice, and memories will become incredibly valuable data, which is why stronger consent, disclosure, and ownership rules matter now.