From Fortune Brainstorm AI: What does it take to make AI?
What does it take to make AI? This is a question Dr. Rana el Kaliouby explored in a moderated discussion at this year’s Fortune Brainstorm AI featuring Daniela Braga, founder and CEO of Defined AI; Benjamin Plummer, CEO of Invisible Technologies; and Jonathan Ross, founder and CEO of Groq. In this special presentation of the live panel, we dive into the nuts and bolts of making artificial intelligence, how to ethically source the data we need to power large language models, and why we should keep humans in the loop.
About Jonathan
- Founder & CEO of Groq, a multi-billion-dollar AI inference company (2025)
- Led development of Google’s TPU, the pioneering custom AI accelerator
- Invented Groq’s LPU for ultra-fast, energy-efficient AI inference
- Built a vertically integrated inference cloud used by 1M+ developers (2025)
- Closed a $1.5B Saudi Arabia deal and deployed ~20,000 chips in 51 days (2025)
Table of Contents:
- Why AI winners need both hardware and software
- Why fast inference changes user experience and business results
- What ethically sourced data really looks like
- Why AI investment needs broader incentives and representation
- How humans evaluate and improve frontier AI models
- Why better working conditions lead to better AI data
- How scaling compute can expand access to AI
- Which jobs AI is creating behind the scenes
- Episode Takeaways
Transcript:
From Fortune Brainstorm AI: What does it take to make AI?
RANA EL KALIOUBY: Hi there. We have something a little different for you today.
Since its inception four years ago, I’ve been co-chairing Fortune’s Brainstorm AI. It’s an annual conference that brings together thinkers, policy makers, startup founders, and Fortune 500 executives who are at the forefront of AI innovation.
A few weeks ago we wrapped up this year’s Brainstorm AI. And it was incredible. It was so good, that we want to share a slice of it with you.
I led a panel discussion with three CEOs to dig into the nuts and bolts of what it takes to make AI. We talked about everything from ethically sourced data needed to train these AI models, to the humans in the loop and of course, the compute.
I was joined by Daniela Braga, Founder and CEO of Defined AI, Benjamin Plummer, CEO of Invisible Technologies, and Jonathan Ross, Founder and CEO of Groq.
And just a note: Since this discussion is a peek behind the curtain with industry leaders, there may be some AI terms that are new to you. But even if you don’t know all the jargon, we think this conversation still makes a big impact.
Without further ado, let’s dive in.
EL KALIOUBY: Hi everyone. So good to see so many familiar faces again this year. Hello.
All right. I am so excited for this conversation. Thank you for being here. So AI is making us all more productive, but man, it’s so expensive to make. So I thought in this conversation, we would go behind the scenes and talk about what it actually takes to make AI. So the data, the humans and the compute.
And Jonathan, I wanted to start with you.
JONATHAN ROSS: All right.
Why AI winners need both hardware and software
EL KALIOUBY: People often think of Groq as a hardware company, but you actually insisted on not being on the chip panel at this event. You’re more like a full-stack solution, more like NVIDIA actually, and I wanted to point to one particular data point.
So Groq’s cloud platform is used by more than 650,000 developers to build a wide range of AI applications. Can you explain why this software-hardware combo is so important, especially in AI inference, which is what Groq focuses on?
ROSS: Well, most people don’t know this, but AMD’s GPUs are actually faster than NVIDIA’s GPUs.
I actually had a VP from NVIDIA brag to me about this fact, and then said it’s actually their software that gives them the advantage. So when we started Groq, we actually spent the first six months working on the compiler, and only after we were able to take programs and lower them into something that we thought we could build and run a chip with, did we start designing the chip.
And so it was a massive advantage.
Why fast inference changes user experience and business results
EL KALIOUBY: Follow-on question. So you’re often thought of as NVIDIA’s challenger, and there are also a lot of new entrants in the space like Cerebras and Etched and Amazon. What is it going to take to win?
ROSS: Okay, so you never want to directly compete, so we like to say that we do inference and they do training, they would love to do inference too, but we do fast inference.
EL KALIOUBY: Why is that important?
ROSS: Fast?
EL KALIOUBY: Yeah, why is fast inference important? Yeah.
ROSS: Does anyone here want slow AI? Please raise your hand. Okay. I see no hands. So just imagine if you were doing a Google search, and it took 8, 10, 40 seconds to get an answer. Right? That would be intolerable. And that’s where we’re at with AI today.
So with us, we’re about 20 to 40 times faster than a GPU for these answers. It allows you to do a lot of the system 2 thinking and other stuff, improve the quality of results, but speed also results in increased engagement. So roughly every 100 milliseconds of speedup is about an 8 percent conversion rate increase on desktop and 30 percent on mobile.
So it matters a lot.
What ethically sourced data really looks like
EL KALIOUBY: Amazing. Daniela, I want to come to you next. So we’ve known each other for many, many years. It’s great that you’re here. Earlier this year, you presented at the United Nations, and I want to share a quote from that conversation: “The race for data without permission is only broadening cultural, language, and gender imbalances. Because internet content is just a reflection of our society, full of biases and misinformation.”
Your company, Defined AI, is the largest marketplace of ethically sourced data and models. What does ethically sourced data mean? And why is it so important?
DANIELA BRAGA: Thank you. I’m very proud of that. I’m actually very excited to hear the presentation before on ProRata AI, which is in line with our belief that just because data is public, it doesn’t mean it’s free. This is what I’ve always been saying, and now it comes to fruition. We created an ecosystem, a marketplace of training data, that allows everyone to monetize their data. And we, as the brokers in that sense, legally vet the data. We make sure we can trace it back to consent at the participant level.
We apply a price. We add all the machine learning readiness. And we sell it to a willing buyer that cares about brand reputation and about not being sued for copyright infringement. But it’s beyond that. There’s another area, which, as you also mentioned: everything that is on the internet is biased by definition.
It’s mostly white-male generated. It’s mostly English-language generated. It just keeps widening the divides in our society if we don’t bring purposely built, ethically sourced data, with everybody along the chain paid, into the models. And finally, the part that is hidden and nobody likes to talk about is the humans-in-the-loop component, which I guess is up to you.
A lot of this work has been done in what are called digital sweatshops, exploiting people in developing countries, exposing them to very, very low-paid jobs and very harsh conditions, especially in the content moderation world, including psychological harm.
So those three pillars are how we built our marketplace and how we’ve been operating.
Why AI investment needs broader incentives and representation
EL KALIOUBY: Definitely want to come back to the digital sweatshop and the humans behind that in a second. But I’m an investor now in AI companies, and you’ve actually been quite critical of the AI investment landscape.
You’ve talked a lot about how billions and billions of dollars of funding are going in to fund the same people and the same ideas over and over again. How do we get to a world where we are actually funding underrepresented humans, applications, and problems?
BRAGA: I did mention that too in that United Nations talk.
EL KALIOUBY: That’s pretty cool, by the way. I highly recommend it.
BRAGA: Of course, this is a capitalist world and venture capital is focused on making money, which, by the way, so far very few AI companies do. That’s the other ironic part here. The reality is there must be an agreement, and this is why it was important to speak at the United Nations level, where investors get tax breaks to incentivize them to look into other fields. The agriculture field receives just 1 percent of AI investment, and it’s what sustains humanity. So everything is very unequal in our world. I think it needs to be incentivized at the government level with money, which is what people understand.
EL KALIOUBY: We’re going to take a short break. More of our conversation with Daniela Braga, Benjamin Plummer, and Jonathan Ross in a minute.
[AD BREAK]
How humans evaluate and improve frontier AI models
EL KALIOUBY: Ben, let’s talk about the humans in the loop. You’re the CEO of Invisible Technologies, and it’s probably one of the best-kept secrets, I think, in the AI world. You’re already profitable. You started off as a workflow automation company, but you actually also do a lot of work behind the scenes with OpenAI and Cohere.
What do you actually do with OpenAI? Give us an example.
BENJAMIN PLUMMER: Yeah, so I’d categorize what we do in two broad areas. One is, loosely defined, evaluations: understanding how these models are performing, where they’re strong, where they’re weak, where there are gaps in their capabilities, so that the labs have a much better understanding of how these models are performing in the real world.
And then the second part of that is creating really high quality data that can be used to retrain those models and actually close those gaps. And you can understand that there’s a really symbiotic relationship between those two things. And the faster you can crank that flywheel, the faster you can drive model performance and improvement.
EL KALIOUBY: Now, you also recently, and I don’t know if this is public or not, but you recently signed on NVIDIA. What are you doing with them? And shouldn’t you then work with Groq?
PLUMMER: Over the last couple of years, we’ve worked with the majority of the frontier foundational model providers and we’ve really specialized in the sort of leading edge of that frontier, the most complex work, the most intricate training of those models.
And so any company looking to build a world class foundational model is either a client or a potential client. And given this is so exploratory in terms of trying different techniques and still very much in the R&D phase, a lot of that is co-creating new capabilities. We’re working with researchers understanding their goals and building evaluation suites or building data that they can use to actually close the gaps in model performance.
Why better working conditions lead to better AI data
EL KALIOUBY: I’ll come to audience questions soon. So tee up your questions, please. I want to come back to this digital sweatshop because you employ thousands of humans around the world to basically do all this work. How do you think about what Daniela said?
PLUMMER: Yeah, it’s really interesting because for us as a business, we’ve never really felt conflicted about needing to sort of decide what’s best for our business or what’s best for the people that participate in that ecosystem because we know that quality is paramount, that the highest quality data is absolutely necessary and to get that you need really happy, really engaged people. And so we spend a lot of time curating this community, investing in them, training them, making sure we understand we’re paying well above living wages, and really investing and building that out.
Yes, it’s a good thing to do, but it’s also good for business in that these engaged people produce way higher quality results. I think probably the most surprising was when I discovered that a few of them had gotten tattoos with the Invisible logo, which was probably a little too far in the engagement side of things, but yeah, really a fantastic community.
EL KALIOUBY: That’s loyalty. That’s next level loyalty. All right. We have a question here. Can you please say your name and your affiliation?
PANKAJ KATIA: Yeah. Hi, Pankaj Katia, friend of Chamath, investor in Groq. My question is, having been in the semi space for three decades, I cannot recall the last time a semi startup made it.
It’s a scale business, working with the foundry and the supply chain and so on and so forth. So while I love LPUs, and NVIDIA is showing the way, Jonathan, how do you think about scaling vis-à-vis NVIDIA, or AMD for that matter?
ROSS: Well, I don’t know if you intended to tee me up perfectly, but I think you just did, so I appreciate this.
So AI is a scale game. And if you can’t get to scale, there’s no point. One of the unique things about what we did is that we actually designed our chips on 14 nanometer, which is an older process technology. It’s underutilized. The fab that manufactures our chips is only about 50 percent utilized, which means that next year, if you look at our contract manufacturers, we actually have the ability to scale up to over 2 million of our LPUs.
Physically, we can manufacture that. So what we’ve been doing to get to scale has been working with partners. So I’ve been playing around with this token. So if some of you look at me on LinkedIn, you’ve probably seen this. This is my 25 million tokens per second target. Everyone at Groq carries one of these, that scale.
So when you produce output from these models, they produce tokens. And a word is about 1.3 tokens. And so 25 million is about where OpenAI and Microsoft combined were at the beginning of this year. And so our goal is to get there. We’ll get there by the end of next quarter. So that’ll make us hyperscaler scale.
But I also happen to have this other one, which is one billion tokens. I’ll even do this. One billion tokens. And so this is with Aramco Digital, and we’re working together to do this. If we do that, that’ll be more than all of the other cloud providers combined. And we partnered with them, they’re covering our cost to deploy and then we split the profits of that.
And it’s a very beneficial arrangement together. Because we’re really the only ones who can get to that scale. NVIDIA has a lot of obligations with existing customers and we can actually build as much compute as all of NVIDIA combined.
How scaling compute can expand access to AI
EL KALIOUBY: Wow. I have to ask this follow up question. You’re very passionate about democratizing access to AI.
Can you talk about how the scale will be a path to doing that?
ROSS: Yeah, because right now, if you want to get access to a GPU, you have to wait in line, and it’s a long line, and it’s not a very transparent line. You don’t know how long you’re gonna wait. So right now, Groq gives away about four times as many tokens for free every day as GCP does. And we’re a startup. And so our intention is to get to a point where we make enough money where we can give access away for free to everyone in the world. Just like you go into any room here, you plug into an outlet and no one’s going to be upset. Right? They’re not going to say you’re stealing our electricity.
Now for those who are using a large amount of tokens, they will pay, but that’ll cover the cost for everyone else.
EL KALIOUBY: Amazing.
We’re going to take a short break. When we come back, we talk about the impact AI will have on jobs. Stay with us.
[AD BREAK]
Which jobs AI is creating behind the scenes
EL KALIOUBY: Okay, let’s talk about jobs. Do you both see an evolution of the jobs being created in AI, the kind of work needed behind the scenes to train all these AI models and create all this ethically sourced data?
PLUMMER: Yeah, we’ve already started to see shifts in the demand for certain jobs. Demand for some has gone up and new ones are being created, while demand for other types of jobs, like translation, has clearly gone down.
And that’s not that different from, I think, a lot of technology shifts. What is different about this one is, first, the speed. I think these shifts have happened much more quickly than is typical. And the second is this really unique attribute of AI and these technologies: their ability to democratize knowledge, skills, and capabilities.
And so you have people who might not have any technical knowledge who can create a website and create these technologies. And so I think it’s really interesting to give these powerful tools to people and free them up to go back to inventing and creating new things, removing them from the sort of boring mundane work that holds most people back.
BRAGA: I’m just going to add two more, within the AI life cycle more narrowly, not the more philosophical question of what AI is going to change. xAI calls them AI tutors, which is a glorified but really needed name for the humans in the loop in the fine-tuning process of a model. And the legal side — the lawyers — the amount of legal work is huge. Every company now has to either hire them in house or contract consultants. It’s really huge.
EL KALIOUBY: Okay, one-word answers. What is it going to take for AI to be faster, smarter, more impactful, and equitable? Just one word that comes to mind. If you have the answer, say it.
PLUMMER: Humans.
EL KALIOUBY: Humans. Okay.
ROSS: Oh, commitment.
EL KALIOUBY: Commitment. Love it.
BRAGA: I think more equity, equitable investment.
EL KALIOUBY: Equitable investment.
There we go. Thank you so much to our panelists. Great conversation.
You can find this conversation and more from Fortune Brainstorm AI at fortune.com. And from me and the rest of the Pioneers of AI team, we want to wish you a Happy New Year! We are so grateful to you – our community – for joining us on this journey.
The team here is so excited to keep bringing you more conversations about the future of AI in 2025.
There’s so much more ground to cover. We’re just getting started.
Episode Takeaways
- Rana el Kaliouby opens with a Brainstorm AI panel on what it really takes to build AI, from data and human labor to the compute stack powering inference.
- Groq founder and CEO Jonathan Ross argues that in AI, software and hardware rise together, and that fast inference matters because users simply will not tolerate slow answers.
- Defined AI founder and CEO Daniela Braga makes the case for ethically sourced data, saying consent, fair compensation, and bias mitigation must be built into AI from the start.
- Invisible Technologies CEO Benjamin Plummer says humans in the loop are still essential, both to evaluate frontier models and to create the high-quality data that improves them.
- As the conversation turns to scale and jobs, the panel says AI will reshape work quickly, creating new roles in training, legal oversight, and human guidance while widening access.