Sam Altman on Sora, Energy, and Building an AI Empire

Sam Altman has led OpenAI from its founding as a research nonprofit in 2015 to becoming the most valuable startup in the world ten years later.

In this episode, a16z Cofounder Ben Horowitz and General Partner Erik Torenberg sit down with Sam to discuss the core thesis behind OpenAI’s disparate bets, why they released Sora, how they use models internally, the best AI evals, and where we’re going from here.

Timecodes:

00:41 OpenAI’s vision

01:44 What will OpenAI do with all its infrastructure?

02:36 Balancing research and vertical integration

05:07 Betting on AGI

08:01 AI-human interfaces

09:12 AI scientists

11:43 How Sam has updated his worldview

17:32 OpenAI’s partnerships

19:38 Balancing product vs research

20:30 Product vs research & investing vs operating

25:04 AI safety

28:29 Copyright & fair use

32:36 Open source

33:56 Sam’s interest in energy

37:06 Monetizing AI

43:03 Early OpenAI retrospective

44:56 What will AGI think of humanity?

45:19 Envisioning the post-AGI world

Transcript:

This transcript has been edited lightly for readability.

00:41 OpenAI’s vision

Erik Torenberg

Sam, welcome to the a16z podcast.

Sam Altman

Thanks for having me.

Erik

In another interview, you described OpenAI as a combination of four companies: a consumer technology business, a megascale infrastructure operation, a research lab, and all the new stuff, including planned hardware devices. From hardware to app integrations to job marketplace to commerce, what do all these bets add up to? What’s OpenAI’s vision?

Sam

Yeah, I mean, maybe you should count it as three, or maybe as four with kind of our own version of what traditionally would’ve been the research lab at this scale. But three core ones. We want to be people’s personal AI subscription. I think most people will have one.

Some people will have several, and you’ll use it in some first-party consumer stuff with us. But you’ll also log into a bunch of other services, and you’ll use it from dedicated devices. At some point you’ll have this AI that gets to know you and is really useful to you, and that’s what we wanna do.

It turns out that to support that, we also have to build out this massive amount of infrastructure. But the goal there, the mission, is really to build this AGI and make it very useful to people.

01:44 What will OpenAI do with all its infrastructure?

Ben Horowitz

And does the infrastructure, do you think it will end up… You know, it’s necessary for the main goal. Will it also separately end up being another business? Or is it just really gonna be in service to the personal AI? Or unknown?

Sam

You mean like, would we sell it to other companies as raw infrastructure?

Ben

Yeah. Would you sell it to other companies? Or, you know, it’s such a massive thing. Would it do something else?

Sam

It feels to me like there will emerge some other thing to do like that, but I don’t know. We don’t have a current plan there.

Ben

You don’t know what it is, yeah.

Sam

It’s currently just meant to like, support the service we wanna deliver and the research.

Ben

Yeah, no, that makes sense.

Sam

Yeah. But the scale is sort of like terrifying enough that you’ve gotta be open to doing something else.

Ben

Yeah. If you’re building the biggest data center in the history of humankind.

Sam

The biggest infrastructure project, you might say.

02:36 Balancing research and vertical integration

Erik

There was a great interview you did many years ago on StrictlyVC. It was early OpenAI, well before ChatGPT, and they asked, “What’s the business model?” And you said, “Oh, well, we’ll ask the AI. It’ll figure it out for us.” Everybody laughs.

Sam

There have been multiple times, and there was just another one recently where we have asked a then current model for, you know, “What should we do?” and it has had an insightful answer we missed. So I think when we say stuff like that, people don’t take us seriously or literally. But maybe the answer is you should take us both.

Ben

Yeah. Yeah. Well, no, as somebody who runs an organization, I ask the AI a lot of questions about what I should do. It comes up with some pretty interesting answers.

Sam

Sometimes. Sometimes, not always.

Ben

You know, you have to give it enough context, but…

Erik

What is the thesis that connects these bets beyond more distribution, more compute? How do we think about that?

Sam

I mean, the research enables us to make the great products, and the infrastructure enables us to do the research. So it is kind of like a vertical stack of things. Like you can use ChatGPT or some other service to get advice about what you should do running an organization. But for that to work, it requires great research, and it requires a lot of infrastructure. So it is kind of just this one thing.

Ben

And do you think that there will be a point where that becomes completely horizontal, or will it stay vertically integrated for the foreseeable future?

Sam

I was always against vertical integration. And I now think I was just wrong about that.

Ben

Yeah, interesting.

Sam

Because you’d like to think that the economy is efficient, and in theory companies can each do one thing, and that’s supposed to work.

Ben

Like to think that, yeah.

Sam

And in our case, at least, it hasn’t really. I mean, it has in some ways for sure. Like, you know, NVIDIA makes an amazing chip or whatever that a lot of people can use. But the story of OpenAI has certainly been that we have to do more things than we thought to be able to deliver on the mission.

Ben

Right. You know, although the history of the computing industry has kind of been a story of a back-and-forth, in that, you know, there was the Wang word processor and then the personal computer, and the BlackBerry before the smartphone. So there has been this kind of vertical integration. But then the iPhone is also vertically integrated.

Sam

The iPhone I think is the most incredible product the tech industry has ever produced, and it is extraordinarily vertically integrated.

Ben

Amazingly so. Yeah. Interesting.

05:07 Betting on AGI

Erik

Which bets would you say are enablers of AGI versus which are sort of hedges against uncertainty?

Sam

I think you could say that on the surface, Sora, for example, does not look like it’s AGI-relevant, but I would bet that if we can build really great world models, that’ll be much more important to AGI than people think. There were a lot of people who thought ChatGPT was not a very AGI-relevant thing. And it’s been very helpful to us, not only in building better models and understanding how society wants to use this, but also in like bringing society along to actually figure out, man, we gotta contend with this thing now.

For a long time before ChatGPT we would talk about AGI and people were like, “This is not happening,” or “We don’t care.” And then all of a sudden they really cared. So research benefits aside, I’m a big believer that society and technology have to co-evolve. You can’t just drop the thing at the end. It doesn’t work that way. It is a sort of ongoing back-and-forth.

Erik

Yeah. Say more about how Sora fits into your strategy, because there was some hullabaloo on X around, hey, you know, why devote precious GPUs to Sora. But is it a short-term/long-term tradeoff?

Ben

Well, and then the new one had a very interesting twist with the social networking. I’d be very interested in kind of how you’re thinking about that, and like, did Meta call you up and get mad or like, “Hey, what do you expect the reactions to be?”

Sam

I think if one company of the two of us feels more like the other one is going after them, it wouldn’t… They shouldn’t be calling us.

Ben

Well, I do know the history.

Sam

Like, first of all, I think it’s cool to make great products, and people love the new Sora. And I also think it is important to give society a taste of what’s coming, on this co-evolution point. So, very soon the world is gonna have to contend with incredible video models that can deepfake anyone or show anything you want. And that will mostly be great, but there will be some adjustment that society has to go through. And just like with ChatGPT, we were like, the world kind of needs to understand where this is. I think it’s very important the world understands where video is going very quickly, because that’s gonna be… Video has much more emotional resonance than text.

And very soon we’re gonna be in a world where like this is gonna be everywhere. So I think there’s something there. As I mentioned, I think this will help our research program and is on the AGI path. But yeah, like, you know, it can’t all be about just making people like ruthlessly efficient and the AI like solving all our problems.

There’s gotta be like some fun and joy and delight along the way. But we won’t throw tons of compute at it. Or, not a big fraction of our compute.

Ben

Yeah, it’s tons in the absolute sense, but not in the relative sense.

08:01 AI-human interfaces

Erik

I wanna talk about the future of AI-human interfaces because back in August you said the models have already saturated the chat use case.

So what do future AI-human interfaces look like, both in terms of hardware and software? Is the vision for kind of a WeChat-like super app?

Sam

So, we solved the chat thing in a very narrow sense, which is, if you’re trying to have the most basic kind of chat-style conversation, it’s very good.

But what a chat interface can do for you is nowhere near saturated, because you could ask a chat interface, “Please cure cancer.” A model certainly can’t do that yet. So I think the text interface style can go very far, even if for the chitchat use case the models are already very good. But of course there are better interfaces to have.

Actually, that’s another thing that I think is cool about Sora: you can imagine a world where the interface is just constantly real-time rendered video, and what that would enable. And that’s pretty cool. You can imagine new kinds of hardware devices that are sort of always ambiently aware of what’s going on. And rather than your phone blasting you with text message notifications whenever it wants, it really understands your context and when to show you what. There’s a long way to go on all that stuff.

09:12 AI scientists

Erik

Within the next couple years, what will models be able to do that they’re not able to today? Will it be sort of white-collar replacement at a much deeper level? AI scientists? Humanoids?

Sam

I mean a lot of things, but you touched on the one I am most excited about, which is the AI scientist. This is crazy that we’re sitting here seriously talking about this. I know there’s like a quibble on what the Turing Test literally is, but the popular conception of the Turing Test sort of went whooshing by.

Ben

Yeah, that was fast.

Sam

You know, it was just like, we talked about it as this most important test of AI for a long time. It seemed impossibly far away. Then all of a sudden it was passed, the world freaked out for like a week, two weeks. And then it’s like, “Alright, I guess computers like can do that now.” And everything just went on. And I think that’s happening again with science. My own personal, like equivalent of the Turing Test has always been when AI can do science. Like that is a real change to the world. And for the first time with GPT-5, we are seeing these little, little examples where it’s happening.

You see these things on Twitter: it made this novel math discovery, it did this small thing in my physics research, my biology research. And everything we see suggests that’s gonna go much further. So in two years, I think the models will be doing bigger chunks of science and making important discoveries.

And that is a crazy thing. Like that will have a significant impact on the world. I am a believer that to a first order, scientific progress is what makes the world better over time. And if we’re about to have a lot more of that, that’s a big change.

Ben

It’s interesting because that’s a positive change that people don’t talk about.

The conversation has gotten so much into the realm of the negative changes if AI gets extremely smart, but…

Sam

But curing every disease is like…

Ben

We could use a lot more science.

Sam

Yeah.

Ben

That’s a really good point. I think Alan Turing said this. Somebody asked him, “Well, you really think the computer’s gonna be, you know, smarter than brilliant minds?”

He said, “It doesn’t have to be smarter than a brilliant mind, just smarter than a mediocre mind, like the president of AT&T.” And we should use more of that too, probably.

Erik

We just saw Periodic launch last week, you know, OpenAI alums. And to that point, it’s amazing to see both the innovation that you guys are doing, but also the teams that come out of OpenAI, it feels like, are creating tremendous…

Sam

We certainly hope so.

11:43 How Sam has updated his worldview

Erik

I wanted to ask you about broader reflections: what about diffusion or development in 2025 has surprised you? Or what has updated your worldview since ChatGPT came out?

Sam

A lot of things again, but maybe the most interesting one is how much new stuff we found. We sort of thought we had stumbled on this one giant secret, that we had these scaling laws for language models, and that felt like such an incredible triumph that I was like, “We’re probably never gonna get that lucky again.” And deep learning has been this miracle that keeps on giving. We have kept finding breakthrough after breakthrough. When we got the reasoning model breakthrough, I also thought, we’re never gonna get another one like that.

It just seems so improbable that this one technology works so well. But maybe this is always what it feels like when you discover one of the big scientific breakthroughs: if it’s really big, it’s pretty fundamental and it just keeps working. But the amount of progress… Like, if you went back and used GPT-3.5 from the ChatGPT launch, you’d be like, I cannot believe anyone used this thing.

And now we’re in this world where the capability overhang is so immense. Like most of the world still just thinks about what ChatGPT can do. And then you have like some nerds in Silicon Valley that are using Codex and they’re like, “Wow, those people have no idea what’s going on.” And then you have like a few scientists who say, “Those people using Codex have no idea what’s going on.”

But the overhang of capability is so big now. And we’ve just come so far on what the models can do.

Erik

And in terms of further development, how far can we get with LLMs? At what point do we need either new architecture or… How do you think about what breakthroughs are needed?

Sam

I think far enough that we can make something that will figure out the next breakthrough with the current technology.

Like, that’s a very self-referential answer, but if LLM-based stuff can get far enough that it can do better research than all of OpenAI put together, maybe that’s good enough.

Ben

That would be a big breakthrough, a very big breakthrough. So, on the more mundane side, one of the things that people have started to complain about, I think South Park did a whole episode on it, is the obsequiousness of AI, and ChatGPT in particular.

And how hard a problem is that to deal with? Is it not that hard? Or is it like kind of a fundamentally hard problem?

Sam

Oh, it’s not at all hard to deal with. A lot of users really want it. Like, if you go look at what people say about ChatGPT online, there’s a lot of people who really want that back. So technically, it’s not hard to deal with at all. One thing, and this is not surprising in any way, is the incredibly wide distribution of what users want out of a chatbot, how they’d like it to behave in big and small ways.

Ben

Do you end up having to configure the personality then you think? Is that gonna be the answer?

Sam

I think so. I mean, ideally like, you just talk to ChatGPT for a little while, and it kind of interviews you and also sort of sees what you like and don’t like.

Ben

And ChatGPT just figures it out.

Sam

And just figures it out, but in the short term, you’ll probably just pick one.

Ben

Got it. Yeah, no, that makes sense. Very interesting.

Sam

I think we just had a really naive assumption, which, you know… It would be sort of unusual to think you could make something that would talk to billions of people and everybody wants to talk to the same person.

Ben

Yeah.

Sam

And yet that was sort of our implicit assumption for a long time.

Ben

Right. Because people have very different friends.

Sam

People have very different friends. So now we’re trying to fix that.

Ben

Yeah. And also kind of different friends, different interests, different levels of intellectual capability. So you don’t really wanna be talking to the same thing all the time. And one of the great things about it is you can say, “Well, explain it to me like I’m five.” But maybe I don’t even wanna have to do that prompt. Maybe I always want you to talk to me like I’m five.

Sam

It should just learn that.

Ben

Particularly if you’re teaching me stuff. Interesting. I wanted to ask you kind of a CEO question, because it’s been interesting for me to observe you. You just did this deal with AMD, and of course the company is in a different position now, and you have more leverage in these kinds of things.

But like, how has your kind of thinking changed over the years since you did that initial deal, if at all?

Sam

I had very little operating experience then. I had very little experience running a company. Like I am not naturally someone to run a company. I’m a great fit to be an investor, and I kind of thought that was gonna be… That was what I did before this, and I thought that was gonna be my career.

Ben

Although you were a CEO before that.

Sam

Not a good one. And so I think I had the mindset of like an investor advising a company.

Ben

Oh, interesting.

Sam

And now I understand what it’s like to actually have to run a company.

Ben

Yeah. Right, right, right. There’s more than just numbers.

Sam

I’ve learned a lot about what it takes to operationalize deals over time.

Ben

Right. All the implications of the agreement as opposed to just, “Oh, we’re gonna get distribution and money.” Yeah. That makes sense. I’ll just say I was very impressed at the deal structure improvement.

17:32 OpenAI’s partnerships

Erik

More broadly, in the last few weeks alone, there’s AMD, which you mentioned, but also Oracle and NVIDIA. You’ve chosen to strike these deals and partnerships with companies that you collaborate with, but could also potentially compete with in certain areas. How do you decide when to collaborate versus when not to? How do you think about that?

Sam

We have decided that it is time to go make a very aggressive infrastructure bet, and I’ve never been more confident in the research roadmap in front of us and also the economic value that’ll come from using those models. But to make the bet at this scale, we kind of need the whole industry to, or a big chunk of the industry to support it.

And this is like, you know, from the level of like electrons to model distribution and all the stuff in between, which is a lot. And so we’re gonna partner with a lot of people. You should expect like much more from us in the coming months.

Ben

Actually expand on that. Because when you talk about the scale, it does feel like in your mind the limit on it is unlimited.

Like you would scale it as, you know, as big as you possibly could.

Sam

I mean, there’s like some… There’s totally a limit. There’s some amount of global GDP.

Ben

Yeah. Well, yes.

Sam

You know, there’s some fraction of it that is knowledge work, and we don’t do robots yet.

Ben

Yes. But the limits are out there.

Sam

It feels like the limits are very far from where we are today, if we are right about… I shouldn’t say from where we are… Like, if we are right that the model capability is gonna go where we think it’s gonna go, then the economic value that sits there can go very, very far.

Ben

Right. So you wouldn’t do it. Like if all you ever had was today’s model, you wouldn’t go there.

Sam

No, definitely not.

Ben

So it’s a combination.

Sam

I mean, we would still expand because we can see how much demand there is we can’t serve with today’s model, but we would not be going this aggressive if all we had was today’s model.

Ben

Right.

Sam

We get to see a year or two in advance though.

Ben

Interesting.

19:38 Balancing product vs research

Erik

ChatGPT usage is 800 million weekly active users, about 10% of the world’s population. Fastest-growing consumer product, you know, ever, it seems.

Ben

Faster than anyone I ever saw.

Erik

How do you balance, you know, optimizing for active users while at the same time being a product company and a research company?

How do you thread the needle?

Sam

When there’s a constraint, which happens all the time, we almost always prioritize giving the GPUs to research over supporting the product. Part of the reason we want to build this capacity is so we don’t have to make such painful decisions. There are weird times, you know, like a new feature launches, and it’s going really viral or whatever, where research will temporarily sacrifice some GPUs, but on the whole, like, we’re here to build AGI, and research gets the priority.

20:30 Product vs research & investing vs operating

Erik

You said in your interview with your brother Jack how, you know, other companies can try to imitate the products or hire…

Sam

Buy our IP maybe.

Erik

Or do all sorts of things. But they can’t buy the culture, or they can’t imitate the culture of innovation. How have you done that? Or what are you doing? Talk about this culture of innovation.

Sam

This was one thing that I think was very useful about coming from an investor background. A really good research culture looks much more like running a really good seed-stage investing firm, betting on founders, that kind of thing, than it does like running a product company. So I think having that experience was really helpful to the culture we built.

Erik

Yeah. Yeah. That’s sort of how I see, you know, Ben at a16z in some ways. You know, you’re a CEO, but you also have this portfolio and have an investor mindset.

Ben

Right, like I’m the opposite. CEO going to investor. He’s an investor going to CEO.

Sam

It is unusual in this direction.

Ben

Yeah. Yeah, well, it never works. You’re the only one who I think I’ve seen go that way and have it work.

Sam

Workday was like that, right?

Ben

Oh, but Aneel was, he was an operator before he was an investor. And I mean, he was really an operator. I mean, PeopleSoft is a pretty big company.

Erik

And why is that? Because once people are investors, they don’t want to operate anymore?

Ben

No, I think that generally, if you’re good at investing, you’re not necessarily good at organizational dynamics, conflict resolution. You know, just the deep psychology of all the weird shit.

And then how politics get created. The detailed work in being an operator or being a CEO is so vast, and it’s not as intellectually stimulating. It’s not something you could ever go talk to somebody at a cocktail party about. And so as an investor, you get, “Oh, everybody thinks I’m so smart,” because you know everything, you see all the companies and so forth. And that’s a good feeling. And then being a CEO is often a bad feeling. And so it’s really hard to go from a good feeling to a bad feeling, I would just say.

Sam

I’m shocked by how different they are, and I’m shocked by how much they differ as a good job and a bad job.

Ben

Yeah. Yes. You know, it’s tough. It’s rough. I mean, I can’t even believe I’m running the firm. Like I know better. And he can’t believe he’s running OpenAI. He knows better.

Erik

Going back to progress today, are evals still useful in a world in which they’re getting saturated, gamed? What is the best way to gauge model capability now?

Sam

Well, we were talking about scientific discovery. I think that’ll be an eval that can go for a long time. Revenue is kind of an interesting one. But I think the static evals of benchmark scores are less interesting. And also those are crazily gamed.

Ben

That’s all they are, is games as far as I can tell.

Erik

More broadly, it seems that “the culture,” meaning Twitter, X, is less AGI-pilled than it was a year or so ago when the AI 2027 thing came out. Some people point to, you know, GPT-5, them not seeing sort of the obvious… Obviously there was a lot of progress under the hood, but it was not as obvious as what people were expecting. But should people be less AGI-pilled, or is this just Twitter vibes?

Sam

Well, a little bit of both. We talked about the Turing Test. AGI will come. It’ll go whooshing by. The world will not change as much as the impossible amount that you would think it should. It won’t actually…

Ben

It won’t actually be the singularity.

Sam

It will not. Even if it’s doing kind of crazy research, society will learn faster. But one of the retrospective observations is that people and societies are just so much more adaptable than we think. You know, it was a big update to think that AGI was gonna come. You kind of go through that, you need something new to think about, you make peace with that. It turns out it will be more continuous than we thought.

Ben

Which is good.

Sam

Which is really good.

Ben

I’m not up for the Big Bang.

25:04 AI safety

Erik

Yeah. Well, to that end, you mentioned how you’ve evolved your thinking on vertical integration. How have you evolved your thinking on AI stewardship and safety? What’s the latest thinking there?

Sam

I do still think there are gonna be some really strange or scary moments. The fact that like so far the technology has not produced a really scary giant risk doesn’t mean it never will. We were talking about, it’s kind of weird to have like billions of people talking to the same brain. Like there may be these weird societal-scale things that are already happening that aren’t scary in the big way but are just sort of different.

But I expect, like, I expect some really bad stuff to happen because of the technology, which also has happened with previous technologies, and I think…

Ben

All the way back to fire.

Sam

Yeah. And I think we’ll like develop some guardrails around it as a society.

Erik

What is sort of your latest thinking on the right mental models we should have around the right regulatory frameworks, or the ones we shouldn’t be thinking about?

Sam

I think most regulation probably has a lot of downside. The thing I would most like is, as the models get truly, like, extremely superhuman capable, I think those models and only those models are probably worth some sort of very careful safety testing as the frontier gets pushed back. I don’t want a Big Bang either, and you can see a bunch of ways that could go very seriously wrong. But I hope we’ll only focus the regulatory burden on that stuff, and not on all of the wonderful stuff that less capable models can do, which you could just have a European-style complete clampdown on, and that would be very bad.

Ben

Yeah, it seems like the thought experiment is, okay, there’s going to be a model down the line that is this super, superhuman intelligence that could, you know, do some kind of takeoff-like thing. We really do need to wait until we get there, or at least until we get to a much bigger scale, or we get close to it.

Because nothing is gonna pop outta your lab in the next week that’s gonna do that. And I think that’s where we as an industry kind of confuse the regulators. Because you really could damage America in particular with that, and China’s not gonna have that kind of restriction, and America getting behind in AI, I think, would be very dangerous for the world.

Sam

Extremely dangerous. Extremely dangerous.

Ben

Much more dangerous than not regulating something we don’t know how to do yet.

Sam

Yeah, yeah.

28:29 Copyright & fair use

Erik

Do you also want to talk about copyright?

Ben

Yeah. Well, that’s a segue. How do you see copyright unfolding? Because you’ve done some very interesting things with the opt-out. And, you know, as you see people selling rights, do you think they will be bought exclusively? Or will it be just like, I could sell it to everybody who wants to ping me? Or how do you think that’s gonna unfold?

Sam

This is my current guess, speaking of that point that society and technology co-evolve: as the technology goes in different directions, the response moves with it. We saw an example where video models got a very different response from rights holders than image gen did.

Ben

Yeah, yes.

Sam

So you’ll see this continue to move. But a forced guess from the position we’re in today: I would say that society decides training is fair use, but there’s a new model for generating content in the style of, or with the IP of, or something else. Like a human author, anybody can read a novel and get some inspiration, but you can’t reproduce the novel on your own.

Ben

Right. You can talk about Harry Potter, but you can’t re-spit it out.

Sam

Yes. Although, another thing that I think will change, in the case of Sora, we’ve heard from a lot of concerned rights holders and also a lot of…

Ben

Name and likeness…

Sam

And a lot of rights holders who are like, “My concern is you won’t put my character in enough.”

Ben

Yeah, yeah.

Sam

I want restrictions for sure. Like, if I’m, you know, whoever, and I have this character, I don’t want the character to say some crazy offensive thing, but I want people to interact.

Like, that’s how they develop the relationship, and that’s how my franchise gets more valuable. And if you’re picking his character over my character all the time, I don’t like that. So I can completely see a world where, subject to the decisions that a rights holder makes, they get more upset with us for not generating their character often enough than for generating it too much. And this was not an obvious thing until recently, that this is how it might go.

Ben

Yeah, this is such an interesting thing with kind of Hollywood where we saw this… Like one of the things that I never quite understood about the music business was how like, you know, okay, you have to pay us if you play the song in a restaurant. Or like at a game. Or this and that and the other. And they get very aggressive with that, when it’s obviously a good idea for them to play your song at a game. Because that’s the biggest advertisement in the world for like all the things that you do, your concert…

Sam

Yeah, that one felt really irrational.

Ben

I would just say it’s very possible for the industry, just because of the way those industries are organized, or at least the traditional creative industries, to do something irrational. Like in the music industry, I think it came from the structure where you have the publisher who’s just, you know, basically after everybody, whose whole job is to stop you from playing the music, which every artist would want you to play. So I do wonder how it’s gonna shake out. I agree with you that the rational idea is, I want to let you use it all you want, and I want you to use it, but don’t mess up my character.

Sam

“Here are my restrictions.”

So I think like if I had to guess, some people will say that. Some people are gonna say, “Absolutely not.” But it doesn’t have the music industry thing of just a few people with all of the leverage.

Ben

Right. Right. It’s more dispersed.

Sam

And so people will just try many different setups here and see what works.

Ben

Yeah. And maybe it’s a way for new creatives to get new characters up. And you’ll never be able to use Daffy Duck, or…

32:36 Open source

Erik

I wanna chat about open source, because there’s been some evolution of thinking there too, in that GPT-3 didn’t have open weights, but you released a very capable open model earlier this year. What’s your latest thinking? What was the evolution there?

Sam

I think open source is good. It makes me really happy that people really like gpt-oss.

Ben

And what do you think, like strategically, like what’s the danger of DeepSeek being the dominant open-source model?

Sam

I mean, who knows what people will put in these open-source models over time.

Ben

Like what the weights will actually be?

Sam

Yeah.

Ben

So you’re ceding control of the interpretation of everything to somebody who may or may not be influenced heavily by the Chinese government.

We really thank you for putting out a really good open-source model, because what we’re seeing now is that in all the universities, they’re all using the Chinese models, which feels very dangerous.

Erik

You’ve said that the things you care most about professionally are AI and energy.

Sam

I did not know they were gonna end up being the same thing. They were two independent interests that really converged.

33:56 Sam’s interest in energy

Erik

Talk more about how your interest in energy began and how you’ve chosen to play in it. And then we could talk about how they’ve converged.

Ben

Because you started your career in physics.

Sam

CS and physics. Well, I never really had a career. I studied physics. My first job was like a CS job. This is an oversimplification, but roughly speaking, I think if you look at history, the highest-impact thing for improving people’s quality of life has been cheaper and more abundant energy. And so it seems like pushing that much further is a good idea. I don’t know. People have these different lenses they look at the world through, but I see energy everywhere.

Ben

In the West, I think we’ve painted ourselves into a little bit of a corner on energy by both outlawing nuclear for a very long time.

Sam

That was an incredibly dumb decision.

Ben

And then, you know, also a lot of policy restrictions on energy, worse in Europe than in the US, but also dangerous here. And now with AI here, it feels like we’re gonna need all the energy from every possible source. How do you see that developing, policy-wise and technologically? Like, what are gonna be the big sources? And how will those curves cross?

And then what’s the right policy around, you know, drilling, fracking, all these kinds of things?

Sam

I expect in the short term most of the net new energy in the US will be natural gas, at least for baseload. In the long term, I expect, I don’t know in what ratio, but the two dominant sources will be solar plus storage and nuclear. I think some combination of those two will win the future. Like, the long-term future.

Ben

In the long term, right.

Sam

And advanced nuclear, meaning SMRs, fusion, the whole stack.

Ben

And how fast do you think that’s coming on the nuclear side? Where it’s really at scale. Because you know, obviously there’s a lot of people building it. But we have to completely legalize it and all that kind of thing.

Sam

I think it kind of depends on the price. If it is completely, crushingly, economically dominant over everything else, then I expect it to happen pretty fast. Again, if you study the history of energy, when you have these major transitions to a much cheaper source, the world moves over pretty quickly. The cost of energy is just so important. So if nuclear gets radically cheap relative to anything else we can do, I would expect there’s a lot of political pressure to get the NRC to move quickly on it, and we’ll find a way to build it fast. If it’s around the same price as other sources, I expect the anti-nuclear sentiment to overwhelm it, and it to take a really long time.

Ben

It should be cheaper.

Sam

It should be. It should be the cheapest form of energy on Earth, or anywhere.

Ben

Cheap, clean, what’s not to like? Apparently a lot.

37:06 Monetizing AI

Erik

On OpenAI, what’s the latest thinking on monetization? In terms of either experiments or things you could see yourself spending more or less time on? You know, different models that you’re excited about?

Sam

The thing that’s top of mind for me right now, just because it just launched and there’s so much usage, is what we’re gonna do for Sora. Another thing you learn once you launch one of these things is how people use them versus how you think they’re gonna use them. And people are certainly using Sora the ways we thought they were gonna use it, but they’re also using it in ways that are very different. Like, people are generating funny memes of themselves and their friends and sending them in a group chat.

And that will require a very different… Like, Sora videos are expensive to make. So for people that are doing that hundreds of times a day, it’s gonna require a very different monetization method than the kinds of things we were thinking about.

I think it’s very cool that the thesis of Sora, which is that people actually wanna create a lot of content, is being borne out. You know, the traditional naive thing is that 1% of users create content, 10% leave comments, and 100% view. Maybe a lot more people want to create content; it’s just been harder to do.

And I think that’s a very cool change. But it does mean that we gotta figure out a very different monetization model than we were thinking about, if people wanna create that much. I assume it’s some version of: you have to charge people per generation when it’s this expensive.

But that’s like a new thing we haven’t had to really think about before.

Erik

What’s your thinking on ads for the long tail?

Sam

Open to it. Like many other people, I find ads somewhat distasteful, but not a non-starter. And there’s some ads that I like. One thing I’d give Meta a lot of credit for is that Instagram ads are a net value add to me. I like Instagram ads. You know, on Google, I feel like I know what I’m looking for; the first result is probably better, and the ad is an annoyance to me. On Instagram, it’s like, I didn’t know I wanted this thing. It’s very cool. I’d never heard of it and never would’ve thought to search for it, but I want the thing.

So there are kinds of ads like that. But people have a very high-trust relationship with ChatGPT. Even if it screws up, even if it hallucinates, even if it gets it wrong, people feel like it’s trying to help them and that it’s trying to do the right thing. If we broke that trust, like you ask, “What coffee machine should I buy?” and we recommended one that was not the best thing we could recommend but the one we were getting paid for, that trust would vanish. So that kind of ad does not work. There are others I imagine could work totally fine. But it would require a lot of care to avoid the obvious traps.

Ben

And then how big of a problem, you know, just extending the Google example, is fake content that gets slurped in by the model, so that it recommends the wrong coffee maker because somebody blasted a thousand great reviews about a horrible coffee maker?

Sam

So there’s all of these things that have changed very quickly for us. This is one of those examples. People are doing these crazy things, maybe not even fake reviews, but just paying a bunch of humans, like, really trying to figure out…

Ben

Or using ChatGPT to write some good ones. “Write me a review that ChatGPT would love about my coffee maker.”

Sam

Exactly. Exactly. So this is a very sudden shift that has happened. We never used to hear about this six months ago, or certainly 12 months ago. And now there’s a real cottage industry, which feels like it’s sprouted up overnight, trying to do this.

Ben

Yeah, no, they’re very clever out there.

Sam

Yeah. So, I don’t know how we’re gonna fight it yet, but people figure this out.

Ben

So that gets into a little bit of this other thing that we’ve been worried about. And, you know, we’re trying to figure out blockchain-style potential solutions to it and so forth. But there’s this problem where the incentive to create content on the internet used to be that people would come and see my content. Like, if I write a blog, people will read it, and so forth.

With ChatGPT, if I’m just asking ChatGPT and I’m not going around the internet, who’s gonna create the content, and why? And is there an incentive theory or something that you have to kind of not break the covenant of the internet, which is: I create something, and then I’m rewarded for it with either attention or money or something?

Sam

The theory is that much more of that will happen if we make content creation easier and don’t break the fundamental way that you can get some kind of reward for doing so. So, for the dumbest example, with Sora, since we’ve been talking about that: it’s much easier to create a funny video than it’s ever been before. Maybe at some point you’ll get a rev share for doing so. For now you get internet likes, which are still very motivating to some people. But people are creating tons more than they ever created before in any other kind of video app.

Ben

Is that the end of text?

Sam

I don’t think so. Like people are also creating…

Ben

Or human-generated text?

Sam

Human-generated will turn out to be something you have to verify, like what percent. So, was it fully handcrafted, was it tool-aided…

Ben

Yeah. I see. Yeah, probably nothing not tool-aided. Interesting.

43:03 Early OpenAI retrospective

Erik

We’ve given Meta their flowers, so now I feel like I can ask you this question, which is: the great talent war of 2025 has taken place, and OpenAI remains intact. The team is as strong as ever, shipping incredible products. What can you say about what it’s been like this year, in terms of just everything that’s been going on?

Sam

I mean, every year has been exhausting. The first few years of running OpenAI were like the most fun professional years of my life by far. It was like unbelievable.

Ben

Before you released the product.

Sam

It was running the research lab with the smartest people doing this amazing, historic work, and I got to watch it, and that was very cool. And then we launched ChatGPT, and everybody was congratulating me, and I was like, “My life is about to get completely ransacked.” And of course it has. But it feels like it’s just been crazy all the way through.

It’s been almost three years now, and I think it does get a little bit crazier over time, but I’m like more used to it, so it feels about the same.

Erik

We’ve talked a lot about OpenAI, but you also have a few other companies, Retro Biosciences in longevity and energy companies like Helion and Oklo. Did you have a master plan, you know, a decade ago to sort of make some big bets across these major spaces? Or how do we think about the Sam Altman arc in this way?

Sam

No, I just wanted to use my capital to fund stuff I believed in. It felt like a good use of capital, and more fun or more interesting to me. And certainly a better return than buying a bunch of art or something.

44:56 What will AGI think of humanity?

Erik

What about the quote unquote “human algorithm” do you think AIs of the future will find most fascinating?

Sam

I would bet the whole thing. My intuition is that AI will be fascinated by us, of all the things to study and observe…

45:19 Envisioning the post-AGI world

Erik

In closing, I love this insight you had where you talked about how a mistake investors make is pattern matching off previous breakthroughs and just trying to find, “Oh, what’s the next Facebook,” or “What’s the next OpenAI?”

And that the next, you know, potential trillion-dollar company won’t look exactly like OpenAI. It will be built off of the breakthrough that OpenAI has helped bring about, which is near-free AGI at scale, in the same way that OpenAI leveraged previous breakthroughs. And so for the founders and investors listening to this and trying to ascertain the future, how do you think about a world in which OpenAI achieves its mission?

There is near-free AGI. What types of opportunities might emerge for company building or investing that you’re potentially excited about, as you put your investor hat or your company-building hat on?

Sam

I have no idea. I mean, I have like guesses, but I have learned…

Ben

You’re always wrong.

Sam

You’ve learned you’re always wrong. I’ve learned deep humility on this point. I think if you try to like armchair quarterback it, you sort of say these things that sound smart, but they’re pretty much what everybody else is saying, and it’s like really hard to get the right kind of conviction.

The only way I know how to do this is to be deeply in the trenches exploring ideas, talking to a lot of people. And I don’t have time to do that anymore, right? I only get to think about one thing now. So I would just be repeating other people’s ideas or saying the obvious things. But if you are an investor or a founder, I think this is the most important question, and you figure it out by building stuff and playing with technology and talking to people and being out in the world. I have always been enormously disappointed by the unwillingness of investors to back this kind of stuff, even though it’s always the thing that works. You all have done a lot of it, but most firms just kind of chase whatever the current thing is. And so do most founders. So I hope people will try to go do that.

Erik

We talk about how silly five-year plans can be in a world that’s constantly changing. It feels like, going back to when I asked about your master plan, your career arc has been following your curiosity: staying super close to the smartest people, super close to the technology, and identifying opportunities in an organic and incremental way from there.

Sam

Yes, but AI was always a thing I wanted to do. I studied AI. I worked in the AI lab between my freshman and sophomore years of college. It wasn’t working at all at the time. I didn’t wanna work on something that was totally not working, and it was clear to me at that time that AI was totally not working. But I’ve been an AI nerd since I was a kid.

Ben

So amazing how, you know, you got enough GPUs, got enough data and the lights came on.

Sam

It was such a hated… Like, people were… Man, when we started figuring that out, people were just like, “Absolutely not.” The field hated it so much. Investors hated it too. It’s somehow not an appealing answer to the problem.

Ben

The bitter lesson.

Erik

Well, the rest is history, and perhaps let’s wrap on that. We’re lucky to be partners along for the ride. Sam, thanks so much for coming on the podcast.

Sam

Thanks very much.

Ben

Yeah, thank you.

Resources

Follow Sam on X: https://x.com/sama

Follow OpenAI on X: https://x.com/openai

Learn more about OpenAI: https://openai.com/

Try Sora: https://sora.com/

Follow Ben on X: https://x.com/bhorowitz

Stay Updated:

If you enjoyed this episode, be sure to like, subscribe, and share with your friends!

Find a16z on X: https://x.com/a16z

Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

Listen to the a16z Podcast on Apple Podcasts:

Listen to the a16z Podcast on Spotify:

Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details, please see a16z.com/disclosures.
