Faster Science, Better Drugs

Can we make science as fast as software?

In this episode, Erik Torenberg talks with Patrick Hsu (cofounder of the Arc Institute) and a16z Bio + Health General Partner Jorge Conde about Arc’s “virtual cells” moonshot, which uses foundation models to simulate biology and guide experiments.

They discuss why research is slow, what an AlphaFold-style moment for cell biology could look like, and how AI might improve drug discovery. The conversation also covers hype versus substance in AI for biology, clinical bottlenecks, capital intensity, and how breakthroughs like GLP-1s show the path from science to major business and health impact.

Timecodes

00:00:35 Arc’s moonshot

00:01:56 Why is science slow?

00:05:10 Why AI has been better at language and images than at biology

00:10:18 Virtual cells

00:16:39 The GPT moment for virtual cells

00:19:56 Scaling from virtual cells to organisms

00:22:13 Fixing the pharma industry

00:33:19 Predicting biotech innovations

00:38:15 The state of AI drug discovery

00:42:04 The bull case for AI in bio

00:45:50 Patrick’s approach to investing in AI

00:54:02 Arc’s Virtual Cell Challenge

Transcript

This transcript has been edited lightly for readability.

00:00:35 Arc’s moonshot

Erik Torenberg

Patrick, welcome to the podcast. Thanks for joining.

Patrick Hsu

Thanks for having me on.

Erik

I've been trying to have you on for years, but finally I could get your time.

Patrick

Here I am. I'm excited to do it. It's gonna be great.

Erik

For some of the audience who aren't familiar with you and your work at Arc and beyond, how do you describe your moonshot? What is it you're trying to do?

Patrick

I want to make science faster, right? You know, we can frame this in high-level philosophical goals like accelerating scientific progress. Maybe that's not so tangible for people. I think the most important thing is: science happens in the real world. Unless it's AI research, which moves as quickly as you can iterate on GPUs, you have to actually move things around, atoms, clear liquids from tube to tube, to actually make life-changing medicines, and these are things that take place in real time. You have to actually grow cells, tissues, and animals. And I think the promise of what we're doing today with machine learning and biology is that we could actually accelerate and massively parallelize this.

And so our moonshot at Arc is really to make virtual cells and simulate human biology with foundation models. And, you know, we'd like to figure out something that feels useful for experimentalists, people who are skeptical about technology and just wanna see the data and the results, so that it's actually the default tool they go to when they want to do something with cell biology.

00:01:56 Why is science slow?

Jorge Conde

Okay, well, hold on, let's back up. Why is science so slow in the first place? Like, whose fault is that?

Patrick

Whose fault is that? Now that is a long one. We should get into it. We should get into it. It's really multifactorial, right? It's this weird Gordian knot that ultimately comes down to incentives, right? You know, people talk a lot about science funding and how science funding can be better, but it's also about how, you know, the training system works, right? How we incentivize long-term career growth, how we, you know, try to separate basic science work from commercially viable work and generally the space of problems that people are able to work on today.

I think things are increasingly multidisciplinary. It's very hard for individual research groups or individual companies to be good at more than two things, right? You might be able to do, you know, computational biology and genomics, right? Or you know, like chemical biology and molecular glues. But you know, how do you do five things at once? It's increasingly hard.

And we really built Arc as an organizational experiment to see what happens when you bring together neuroscience and immunology and machine learning and chemical biology and genomics all under one physical roof, right? If you increase the collision frequency across these five distinct domains, there would hopefully be a huge space of problems that you could work on that you wouldn't be able to otherwise.

Now, obviously in any university or any, kind of, geographical region, you have all of these individual fields represented at large, right, across these different campuses. But, you know, people are distributed, and you want everyone together.

Jorge

Okay. But if I may, I would've thought a university was an attempt to bring in multiple disciplines under one roof. You're saying it's not, it's too diffuse.

Patrick

It's across an entire campus.

Jorge

Okay. So the physical, like literally the physical distance creates inefficiency.

Patrick

That's part of it. And I think the other part is folks have their own incentive structures, right? They need to publish their own papers, they need to do their own thing and you know, make their own discovery and you're not really incentivized to work together, I think in many ways in the current academic system. And a lot of what we've done is to try to have people work on bigger flagship projects that require much more than any individual person or group or idea.

Jorge

Yeah, that's cool. So that's sort of the original hypothesis for the Arc Institute is if you can bring multiple disciplines together to increase the collision frequency, as you said, and, if one could remove some of the cross incentives that may exist in sort of traditional structures, the combination of those two things will make science faster.

Patrick

Yeah. These are absolutely part of it, right? We have two flagship projects: one trying to find Alzheimer's disease drug targets, the other to make these virtual cells. And I think it's not just the people and the infrastructure; the models themselves will hopefully literally make science faster. You could, you know, do experiments at the speed of forward passes of a neural network if these models become accurate and useful.

Jorge

Yeah. So that would be one thing that solves the length of discovery: you compress the time discovery naturally takes by just throwing technology at the problem. At the risk of oversimplifying.

Patrick

Well, we're techno-optimists here, no?

00:05:10 Why AI has been better at language and images than at biology

Erik

We are. Why has AI progressed so much faster in sort of image generation and language models than biology? And if we could wave a wand, like where are we excited to speed things up?

Patrick

To be honest, it's a lot easier, right? Maybe that's a hot take.

Jorge

You mean technology is easier than biology.

Patrick

Natural language and video modeling is easier than modeling biology. And to some degree, if you understand and learn machine learning, right, and how to train these models, you have already learned how to speak. You already know how to look at pictures. And so your ability to evaluate the generations or the predictions of these models is very native, right? We don't speak the language of biology, right? At very best, we speak it with an incredibly thick accent.

So when you're training these DNA foundation models, I don't speak DNA natively, so I only have a sense of the types of tokens that I'm feeding into the model and what's actually coming out, right? Similarly with these virtual cell models, you know, I think a lot of the goal is to figure out ways that you can actually interpret the weird, fuzzy outputs that the model is giving you.

And I think that's what slows down the iteration cycles is you have to do these lab-in-the-loop things where you have to run actual experiments to actually test with experimental ground truth. And you know, I think increasing the speed and dimensionality of that is gonna be really important.

Jorge

You talk about how we speak biology poorly, or with a very thick accent.

How much of this is that, if you're training on an image, we can see the image, and so we can see how good the output is? What about all the things in biology that we can't see or don't even know exist yet? Like, how can we create a virtual cell, and maybe we should come back to what a virtual cell model is, by the way, for the lay audience, but how can we create a virtual cell model when we're not even sure we understand all of the components that are in a cell and how they function?

Patrick

People talked a lot about this in NLP as well. There's this long academic tradition in natural language processing, right? And then it was just weird and non-intuitive and intensely controversial that you could just feed all this unstructured data into a transformer and it would just work. Now, we're not saying this will just work in all the other domains, including in biology, but I think there is this, you know, controversy around what does it mean to be an accurate biological simulator? What does it mean to be a virtual cell?

It's true. We can't measure everything, right? We can't measure, say, things like metabolites in really high throughput with spatial resolution. And there are gonna be different phases of capability, where initially they model individual cells, then they model pairs of cells, then they model cells in a tissue, and then in a broader physiologically intact animal environment.

And those are length scales and kind of layers of complexity that we'll aggregate and, you know, improve upon over time. And I think the other kind of non-intuitive thing in many ways is the scaling laws that you get in data and in modeling. I'll give you an example, right? There's a lot of discussion in molecular biology about how RNAs don't reflect protein levels and protein function, right?

And so we don't have, you know, proteomic measurement technologies that are nearly as scalable as transcriptomic measurement technologies today, certainly at the single-cell resolution, but we're getting there, and you can layer on certain nodes of protein information on top of the RNA information. But in many ways, the RNA representation is a mirror, right? It might be a lower resolution mirror for what's happening at the protein layer, but eventually what is happening in protein signaling will get reflected in a transcriptional state, right? And so for an individual cell, this may not be very accurate, but when you imagine the massive data scale that we're generating in genomics and functional genomics, you start to gather tremendous amounts of RNA data that will read out kind of like what's happening at the protein level, as some sort of mirror echo, right? And then that can, you know, be the case for metabolic information as well, and so on.
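[Editor's note: to make the "mirror" argument concrete, here is a minimal, purely illustrative simulation, not Arc's data or methods. It assumes a hypothetical latent protein-level signal and a noisy per-cell RNA readout of it; aggregating more cells sharpens the reflection, which is the scaling intuition described above.]

```python
# Toy model: RNA as a noisy, low-resolution mirror of a latent protein-level
# signal. Averaging more single-cell profiles recovers the underlying signal
# (error falls roughly as 1/sqrt(n)). All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
N_GENES = 200
protein_signal = rng.normal(size=N_GENES)        # hypothetical latent protein state

def aggregated_rna(n_cells: int, noise: float = 3.0) -> np.ndarray:
    """Average n_cells noisy single-cell RNA profiles of the same cell state."""
    cells = protein_signal + rng.normal(scale=noise, size=(n_cells, N_GENES))
    return cells.mean(axis=0)

for n in (1, 10, 100, 10_000):
    r = np.corrcoef(aggregated_rna(n), protein_signal)[0, 1]
    print(f"{n:>6} cells aggregated -> correlation with protein layer: {r:.2f}")
```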

Jorge

It's a low-pixel image, but if we can get sort of zoomed out far enough, we'll get a sense of what's going on.

Patrick

You have to bet on what you can scale today, right? We're able to, you know, scale single-cell and transcriptional information today. We're able to add on, you know, protein-level information. Over time, we'll need spatial information, spatial tokens, and we'll need temporal dynamics as well.

I kind of bucket things into three tiers. There's invention, engineering and scaling. And there are certain things today, biotechnologically that are scale-ready, and then there are things that we still need to invent, right? And that's part of why we felt like we needed a research institute to be able to tackle these types of problems.

That we weren't just going to be an engineering shop that's just trying to scale single-cell perturbation screens, right? That, you know, would be interesting, but in three years would feel very dated, I think. And so there's a lot of novel technology investment that we're making that we think will bear fruit over time.

00:10:18 Virtual cells

Erik

Can we flesh out the virtual cell concept, why that's the ambition we've landed on, what it's gonna take to get there? Or what are the bottlenecks?

Patrick

I would say the most famous success of ML in biology is AlphaFold. And this solved the protein folding problem: you know, when you take any sequence of amino acids, what does the protein look like?

And you know, it's pretty good. It's not perfect. It certainly doesn't simulate the biophysics and the molecular dynamics, but it gives you a sense of what the end state is with 90+% accuracy, right? And that's the AlphaFold moment that people talk about, where anytime you want to work with a protein, if you don't have an experimentally solved structure, you're just gonna fold it with this algorithm.

And we kind of want to get to that point with virtual cells as well. And the way that at Arc we're operationalizing this is to do a perturbation prediction, where the idea is you have some manifold of cell types and cell states. That can be a heart cell, a blood cell, a lung cell, and so on, and you know that you can kind of move cells across this manifold, right?

Sometimes they become inflamed. Sometimes they become apoptotic, sometimes they become cell cycle arrested. They become stressed. They're metabolically starved. They're hungry in some way. And so if you have this sort of representation of universal, sort of, cell space, can you figure out, what are the perturbations that you need to move cells around this manifold? And this is fundamentally what we do in making drugs, right? Whether we have small molecules, which started out as natural products from, you know, boiling leaves or antibodies when we injected proteins into cows and rabbits and sheep and took their blood to get those antibodies, where we are basically trying to get to more and more specific probes, right?

And we had experimental ways to kind of cook these up. Now we have computational ways to zero shot these binders, but ultimately what you're trying to do with these binders is to inhibit something and then by doing so, kind of click and drag it from a kind of toxic, gain-of-function, disease-causing state to a more quiescent, homeostatic, healthy one.

And the thing that is very clear in complex diseases, right, where you don't have a single cause of that disease is there's some complex set of changes, there's a combination of perturbations, if you will, that you would wanna make to be able to move things around. Now, you know, people talk about this classically as things like polypharmacology.

But, you know, I think we're moving from, “Oh, this thing happens to have a whole bunch of different targets, kind of by accident,” to, “We have the ability to manipulate these things combinatorially in a purposeful way,” right? That to go from cell state A to cell state B, there are these three changes I need to make first, then these two changes, and then these six changes over time, right? And we kind of want models to be able to suggest this. And the reason why we scoped virtual cell this way is because we felt it was just experimentally very practical. You want something that's gonna be a copilot for a wet lab biologist to decide “What am I gonna do in the lab?”
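[Editor's note: a minimal, hypothetical sketch of the "click and drag on a manifold" framing, not Arc's actual model. It assumes each perturbation can be represented as a mean shift vector in a learned cell-state embedding; the random vectors below are stand-ins for learned embeddings and measured shifts.]

```python
# Toy sketch of perturbation prediction on a cell-state manifold: rank single
# perturbations by how close they move cell state A toward cell state B.
import numpy as np

rng = np.random.default_rng(1)
DIM = 32                                    # hypothetical embedding dimension

state_a = rng.normal(size=DIM)              # e.g., a diseased cell state
state_b = rng.normal(size=DIM)              # e.g., the healthy target state

# Shift vectors, e.g., estimated from past perturbation experiments (stand-ins here).
perturbations = {f"gene_{i}": rng.normal(size=DIM) for i in range(500)}

def rank_perturbations(start, target, shifts):
    """Rank perturbations by distance of (start + shift) to the target state."""
    dist = {name: np.linalg.norm(start + v - target) for name, v in shifts.items()}
    return sorted(dist, key=dist.get)

suggestions = rank_perturbations(state_a, state_b, perturbations)
print("Top candidates to test in the lab:", suggestions[:5])
```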

We're not trying to do something that's like a theory paper that's really interesting to read, where the numbers go up on an ML benchmark, but you know, you practically can decide what are the twelve things that you're gonna do in the lab in twelve different conditions and actually just test them.

And then that's how we kind of enter the kind of the lab-in-the-loop aspect of model predictions to experimental measurements to kind of improved, or RL-ed, or whatever, model predictions again. And the goal is to be able to do in silico target ID where you can basically figure out new drug targets, figure out then the compositions, the drug compositions you would need to actually make those changes.

I think if we could do that, we could make a new like vertically integrated, AI-enabled pharma company. Which, you know, I think is obviously a very exciting idea today, but I think in many ways the kind of pitch and the framing of these companies precedes the fundamental research capability breakthroughs.

And that's what we are really invested in at Arc, is kind of just making that happen along with many other amazing colleagues in the field to just make this possible for the community.

Jorge

So if the goal is, I'm going to oversimplify it for you, to get to the AlphaFold moment, where it gives you a useful folded structure 90% of the time, to use your data point, and we take that comparison to the virtual cell model and say: 90% of the time, if I ask the model to shift the cell from cell state A to cell state B, it gives me a list of perturbations, and those perturbations in fact result experimentally in the shift from cell state A to cell state B. How far away are we from that AlphaFold moment for virtual cells?

Patrick

I find it helpful to frame these in terms of like GPT-1, 2, 3, 4, 5 capabilities. And I think most people would agree we’re somewhere between GPT-1 and 2, right? A lot of the excitement was that we could achieve GPT-1 in the first place, that you could see a path with scaling laws of some kind to make successive generations where capabilities would improve. But you know, with like our Evo kind of DNA foundation models that we developed at Arc with Brian Hie, one of the things that we've seen is that, you know, these genome generations are like quote unquote “blurry pictures” of life, right?

We don't think if you synthesized these novel genomes, they would be alive. But, you know, we don't think that's actually also impossibly far away. We'll just have to kind of follow these capabilities. We're taking a very integrated approach to attack this problem, where you need to curate public data, generate massive amounts of internal private data, build the benchmarks, train new models, and build new sorts of architectures, and kind of do these things full stack. And we'll just kind of attack this hill climb over time.

00:16:39 The GPT moment for virtual cells

Erik

What's the GPT, I'll say GPT-3, moment going to look like? And by that I mean sort of a public release that alters the public's conception of just what's possible here from a capabilities perspective, and also inspires a whole new generation of talent to rush into biology.

Patrick

Well, the good thing with biology is we have a lot of ground truth, right? There are entire textbooks, right, that describe cell signaling and cell biology and how these things work. And so, you know, even without a virtual cell model at all, if you went into ChatGPT or Claude, and you asked it some question about, you know, like receptor tyrosine kinase signaling, it would have an opinion on how that works, right?

And so I think you would want the model to be able to predict perturbations that are kind of famous canonical examples of biological discovery. So I'll give you an example. If you've loaded into the model an iPSC, kind of an induced pluripotent stem cell state or human embryonic stem cell state, and a fibroblast cell state, could it predict that the four Yamanaka factors would reprogram the fibroblast into a stem-like state, and essentially rediscover from the model something that won the Nobel Prize in 2012? That would be one really classic example.

And then you could go do the inverse. If you have a stem cell, can it discover neurogenin 2, ASCL1, MyoD? Can it find differentiation factors that will turn that into a neuron or into a muscle cell and so on? And you know, these are kind of classic examples in developmental biology, but you could also use this to try to discover or recapitulate the mechanism of action of FDA-approved drugs. And so you could say, for example, you know, if you inhibit HER2 in breast cancer cell states, you would get this type of response. Or it could predict the certain clones that, you know, will be more metastatic, or more resistant, and will lead to minimal residual disease. There are, I think, lots of biological evals that you can add on to these models over time that are really tangible textbook examples, as opposed to what the early generation of models do today, which is, you know, very quantitative things like mean absolute error over the differentially expressed genes and stuff like that. Those are ML benchmarks. And we want to increase the sophistication into something that you could explain to an old professor who has, you know, never touched a terminal in their life.
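[Editor's note: a hedged sketch of what such a "textbook eval" might look like in code. The gene symbols are the canonical Yamanaka factors; the ranking and numbers are made up. It scores a model's ranked perturbation list for fibroblast-to-pluripotency reprogramming by whether the known factors land near the top, next to a conventional ML metric for contrast.]

```python
# Sketch of a biological eval: recall@k of the Yamanaka factors in a model's
# ranked reprogramming perturbations, plus an ML-style error metric for contrast.
import numpy as np

YAMANAKA = {"POU5F1", "SOX2", "KLF4", "MYC"}     # OCT4 is encoded by POU5F1

def recall_at_k(ranked_genes, answer_set, k=20):
    """Fraction of the canonical answer set found in the model's top-k list."""
    return len(answer_set & set(ranked_genes[:k])) / len(answer_set)

def mae_on_de_genes(pred, truth, de_mask):
    """Mean absolute error restricted to differentially expressed genes."""
    return float(np.abs(pred[de_mask] - truth[de_mask]).mean())

# A made-up ranking from some hypothetical model:
ranked = ["SOX2", "NANOG", "KLF4", "POU5F1", "LIN28A", "MYC"] + ["OTHER"] * 100
print("Yamanaka recall@20:", recall_at_k(ranked, YAMANAKA))            # -> 1.0

pred, truth = np.array([1.0, 2.0, 0.5]), np.array([1.2, 1.5, 0.4])
de_mask = np.array([True, True, False])
print("MAE over DE genes:", mae_on_de_genes(pred, truth, de_mask))     # -> 0.35
```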

Jorge

By the way, you talk about textbooks as ground truth. Do you think we're going to find that a lot of the textbooks are wrong?

Patrick

I would say textbooks are compressed. So for example, when you look at these kind of classic cell signaling diagrams of A signals to B, which inhibits C, right? That's a very kind of two dimensional representation of…

Jorge

Of our understanding of a complex system.

Patrick

Right. I mean, yes, textbooks are what they are. They represent the corpus of reliable knowledge, but everyone knows that there are an incredible number of exceptions, and part of what discovery is is to find new exceptions.

00:19:56 Scaling from virtual cells to organisms

Erik

Why don't you talk about the difference between the simulation of biology and the actual understanding? And what would it take to actually be able to model the extremely complex human body?

Patrick

You know, some people don't like the phrase “virtual cells” because it sounds too media-friendly. It's not rigorous enough. But I've always found it funny that you know many people are okay with like “digital twins” and “digital avatars,” which, you know, talks about modeling biology at a way higher level of abstraction.

You know, I think virtual cells, if anything, is actually way more scoped and rigorous than modeling a digital twin or avatar. But, you know, I think these are useful words because they describe the goal and the ambition, right? That no, in the long run we don't care about predicting the, you know, kind of perturbation responses of an individual cell at all, actually. Obviously, we want to be able to predict drug toxicity. We want to be able to predict aging. We want to be able to predict why a liver cell becomes cirrhotic when you repeatedly challenge it with ethanol molecules or whatever, right? And you know, these sort of chemical or environmental perturbations should be predictable.

I think you just kind of have to layer on the complexity, right? Like, why are we so worried about modeling entire bodies over time when we can't do it for an individual cell, where we sort of, you know, accept or broadly believe that this is a fundamental unit of biological computation, if you will. And let's just kind of start there, right? Just like you kind of have to start with, you know, things like math and code and language modeling, right? And things that are just sort of easier to check. You can build to superintelligence over time.

Jorge

Yeah. I think that makes sense, right? That's a very sort of laudable, ambitious goal, if we can figure out how to model the fundamental unit of biology, the cell, then from that, we should be able to build.

Patrick

Like in early AI, we just started with like language translation. Just, you know, basic NLP tasks, right? This was long before, you know, the tremendous ambitious scope that we have today. And I think we hopefully can mirror that type of trajectory, if we're lucky.

00:22:13 Fixing the pharma industry

Erik

It seems like biotech and pharma has been a shrinking industry, certainly in its rate of growth. What's it going to take for these innovations in the science to reflect themselves in business models and in growth for the industry?

Patrick

A lot of these biotech startups would try to initially sell software to pharma companies, and then they would kind of realize, “Oh wow, we're like competing for SaaS budgets, which aren't very large.” And then, you know, now they're realizing, “Oh, we have to compete for R&D budgets.” And I think, you know, there's this narrative from the current generation of these companies that, “Oh, our biological agents will compete for R&D budgets and replace headcount,” or something like that, just like we're seeing in, you know, agents across different verticals. Whether or not that will, I think, pan out, I think depends on just whether or not these things meaningfully allow us to, you know, build drugs more effectively in the pharma context. And I think that's just sort of the most important thing in this industry.

And so I think we believe in virtual cells, not just because we think it will be a fountain of fundamental mechanistic insights for discovery, but also because in the case of success it could be industrially really useful. But, you know, we'll have to see over time, right? If we have 90% of drugs failing clinical trials, that kind of means two things, and you're not sure what percent of which, right? One is we're targeting the wrong target in the first place. The second is the composition, the drug matter that we're using doesn't do the job. It's not clear for each individual failure which one it is, or if it's both, or what proportion of each.

And you know, we'll have to kind of sort that out over time. Like you can imagine, even in the case of success, when we have 90% accurate virtual cells, you'll probably end up with suggestions like, “Okay, now you need to target this GPCR only in heart. But not in literally any other tissue.” We don't have the drug matter that can do that today.

And so that's also why, again, you probably need research to figure out novel chemical biology matter that allows you to drug pleiotropic targets in a tissue or cell type-specific way. And so, you know, I think part of why biology is slow is because there's just this Russian nesting doll of complexity, in terms of understanding, in terms of perturbation, in terms of safety, and, the crazy thing is the progress in just the short time that I've been doing this is insane, right? Like I did my, you know, PhD at the Broad Institute in the heyday of developing single-cell genomics, human genetics, CRISPR gene editing, and, so many other things. And I think the kind of early 2010s papers on single-cell sequencing would have like 20 cells or 40 cells. And at Arc in the next, I don't know, relatively short amount of time, we're gonna generate a billion perturbed single cells. I mean, how's that for a Moore’s Law?

Jorge

Yeah. That's remarkable.

Erik

Jorge, I want to hear your answers to a couple of these questions too, as the lead of our bio practice, both on the GPT-3 moment, what that could look like. And also, like I'm curious if you think it's GLP-1s or sort of building off that, or if it's gonna be something different. And also, what's it gonna take for the science to kind of reflect itself in the business, for the industry to grow?

Jorge

Yeah. So I'll take the second one first if I could. So I think, you know, in terms of where the industry is right now, I think one of the big challenges we have is, as Patrick describes very nicely, like, you know, discovery is hard, and it takes time. And, you know, the fail modes are exactly as you described. Oftentimes when drugs fail, which they do 90% of the time in clinical trials, it's because we're going after the wrong thing, or we made the wrong thing to go after the right thing, right? Like those are the two fail modes and that happens all too often. And so I think a lot of the stuff that Patrick is describing is going to basically improve our hit rate or our batting average on figuring out what to go after and then making the right thing to go after said thing. The challenge we have, I think, in the industry is that the bottlenecks still are the bottlenecks. And the biggest bottleneck we have, which is, you know, a necessary one, is we have to prove that whatever we make, that we have the right thing to go after the right thing, so to speak, and that when we have it, that it's going to be as, you know, de-risked as possible before you put it into humans.

Patrick

And we have to be good at making them in the first place.

Jorge

And we gotta make them too. Yeah, exactly. And so that bottleneck is a necessarily important one. That bottleneck should exist. I'm not suggesting we've gotta remove it, but are there ways to reduce the cost and time associated with getting through the bottleneck of human clinical trials? And you know, it's interesting because we talk about, you know, all of the various stakeholders when you're making a drug. There are the companies, there's of course the science that supported the company that's trying to commercialize a product, and there are the regulatory agencies. You know, and everyone is trying to ensure, again, that what’s first and foremost is the ability to discover and commercialize drugs that are safe and effective for humans.

That middle part of actually getting through that bottleneck is hard to speed up in a very obvious way. Like, you can increase the rate at which you enroll clinical trials. You can use better technology to change the way we design these clinical trials so maybe they can be faster or shorter, etc.

But some of them just have a natural timeline they have to go through. Like if you wanna demonstrate that a cancer drug promotes survival, guess what, it’s going to take some time to demonstrate a survival benefit. Or if you know you want to do a longevity drug, that by definition is a lifetime of a trial in terms of length.

So there's a lot of these bottlenecks that are really hard to get through. So what helps the industry? I think there are a couple of things that help the industry. One is capital intensity will hopefully at some point go down over time as technology gets better. Capital intensity is something that our industry faces. In some ways, it looks a little bit like AI now, right, in terms of the cost of training these models. But the capital intensity is very, very high. That has not come down. So we gotta get success rates up to bring capital intensity down. The second thing is, where can we compress time? So good models can help us compress early discovery time.

We still haven't seen—and I think it's coming, but it hasn't happened yet—we haven't seen artificial intelligence or other technologies massively compress the amount of time it takes us to do the clinical development, the clinical trials, the enrollment of patients, all those things. We're seeing some interesting things coming. We haven't seen sort of the payoff there yet. And the third thing is if we can make better drugs going after better things, the effect size should be higher. So therefore the answer should be obvious sooner. If we can get those three things right, reduce capital intensity, compress timelines, and effectively increase effect size in some intractable diseases, that is what I think fixes the industry. And from where we sit at the early stage in terms of being early stage investors, the reason why that helps us is if the capital intensity goes down, and the value creation goes up, it becomes easier to invest in these companies in the early days because you get rewarded for coming in early. The problem we have right now is that most companies aren't—you're not seeing rewards happening when there's value inflection.

So you come in early, you bear the brunt of the capital intensity, and even if a company is successful, that success isn't reflected in the valuation. So we're not seeing the step ups that you see in other parts of the industry, and that's just really, really hard from an investing standpoint. So I think we need to see those various factors addressed for this space to really get, you know, fixed, to use your word.
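[Editor's note: to put a number on the effect-size point above, here is the standard two-arm trial sample-size approximation, a textbook formula, not data from any trial. The effect sizes used are Cohen's conventional small/medium/large benchmarks; required patients per arm shrink with the square of the standardized effect size, which is why bigger effects make the answer obvious sooner.]

```python
# Standard two-arm approximation: n per arm = 2 * (z_{1-a/2} + z_{1-b})^2 / d^2,
# where d is the standardized effect size. Illustrative only.
from scipy.stats import norm

def patients_per_arm(d: float, alpha: float = 0.05, power: float = 0.80) -> float:
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # power requirement
    return 2 * (z_alpha + z_beta) ** 2 / d ** 2

for d in (0.2, 0.5, 0.8):               # Cohen's small / medium / large benchmarks
    print(f"effect size {d}: ~{patients_per_arm(d):.0f} patients per arm")
    # -> roughly 392, 63, and 25 patients per arm, respectively
```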

Patrick

Yeah, that was great. I have a lot to add onto this.

Jorge

Please. Add away.

Patrick

A few simple observations. The first is the amount of market cap added to Lilly and Novo based on the development of GLP-1s is over a trillion dollars, or, you know, I mean, Novo stock has decreased a lot, so let's say a trillion dollars, which is more than the combined market cap of all biotech companies started over the last 40 years. And I think that, you know, one of the kind of interesting corollaries of this is that, you know, when we have a 10% clinical trial success rate for preclinical drug matter, you tend to circle the wagons a bit and try to manage your risk, right? And so the way that you do this is you try to go after really well established disease mechanisms, where if I develop new drugs that go after well understood biology, it should work the way that I hope it will in the trial, which is really, really expensive and costs a lot more in many ways than the preclinical research.

The problem with this is you go after very well validated disease mechanisms, but with really small patient populations, right? So then the expected value of this actually is relatively low. One of the kind of things that we've seen with GLP-1s is the, just the kind of value that you can create when you go after really large patient populations.

And I think that has culturally really net increased the ambition of the industry, both from the investor and from the drug developer side. And I think, you know, that's something that we should keep our foot on the gas for.

Jorge

Yeah. And look, I would argue the trend on that is positive.

So you're absolutely right. Like the demonstration of the value that has been created with the increasing use of GLP-1s and the value transfer that's gone to companies like Lilly and Novo, I would argue is like very merited, right? Because they've cracked an endemic social problem, in terms of managing diabetes and eventually helping manage obesity.

And so I think that's remarkable. And there's a lot of value that goes to that because they tackled, they cracked a very, very challenging problem for society beyond just science. So that's great. And I agree with you, the juice needs to be worth the squeeze. You're right. A lot of biotech has been about going after the low hanging fruit, because it's low risk and we gotta eat today. So you go get it, you know, and you push off the big ambitious indication, the large population or the really tough-to-crack disease. But you know, I do think we're seeing more and more of that.

And by the way, like we can get into some of these genetic medicines, but some of these genetic medicines are going after some of the hardest problems, the things that you quite literally couldn't address but for editing DNA. And, you know, I think that's incredibly, you know, remarkable and laudable and frankly inspiring. But the fundamental elements of the industry have to work so the capital formation is there to support those kinds of things. And right now it's hard, right? Because of the issues we talked about before.

00:33:19 Predicting biotech innovations

Erik

15 years from now, we're back in this room, we've barely escaped being part of the permanent underclass, and we're reflecting on sort of the GPT moment or maybe the legacy of GLP-1s, sort of beyond where they are now. Jorge, I'm curious to get your take on, what do you think is gonna be the technological breakthrough that we're gonna point back to and say, "Oh, this is really what set it all off"? Or do you think it's gonna be sort of, you know, multifactorial, a combination?

Jorge

Yeah, look, I think it's going to go back to sort of where we started this conversation. GLP-1s as a drug are, you know, what, four decades in the making or something like that.

You know, these are not overnight successes. But what I do think we're going to see more of, and our hope, is the combination of getting better at understanding what to target and getting better at designing medicines to hit those targets, by the way, in a whole array of new creative ways. So we have small molecules, the natural products that we got from boiling leaves, as you said earlier. We're getting really good at designing smarter and better small molecules that do new things, that function in ways that they didn't before. We've gotten quite good at designing biologics or proteins, with a lot of help from things like AlphaFold, which helps us understand how proteins fold.

We're gonna get a lot better at designing some of the more complex modalities, like the gene therapies of the world or the gene editors of the world. And when you can do that and combine that with our ability to hopefully use things like virtual cell models to really understand what to go after, like we're gonna have drugs… I would hope, and I would expect, that the industry will continue to bring forward drugs that have very large effect size for very difficult diseases that hopefully affect a lot of patients. If that's true, then we'll start to see some of these really, really difficult diseases that affect all of society get tackled, hopefully, you know, one by one by one by one.

And so we have obesity, we have metabolic disorder. We're dealing with cardiometabolic disease. We're starting to see interesting, promising things happening in neurodegenerative diseases. You know, if we can tackle cancer, or at least, you know, the several cancers that have now begun to be treated more like a chronic condition than the death sentence they were in the past. The more we see of that, I think that value to society will accrete over time. And I think this should be an industry that is extraordinarily valued by society and, candidly, by the markets. We have to deliver.

Patrick

If we play this out, and let's say these AI models work, and you can make a trillion binders in silico, that will, you know, be exquisite drug matter, right?

We still need to make these things physically and test them in animals and hopefully predictive models and then actually in people. And I think, you know, that will increasingly be the bottleneck in many ways. And, you know, my friend Dan Wang recently released a book called Breakneck, which talks about, you know, kind of like the US and China and the difference between the two countries and their philosophy, the way they approach markets and…

Erik

We're a country of lawyers, they’re a country of engineers, at least their political class.

Patrick

Exactly. That's right. China is an engineering state, right? The Politburo is, you know, folks who have engineering degrees; you need to build bridges and roads and buildings, and these are the ways that we solve our problems. Whereas, you know, of the first 13 American presidents, 10 practiced law. From 1980 to 2020, all Democratic presidential candidates, both VP and president, went to law school. And so you kind of see the echoes of that in the FDA and the regulatory regime and, you know, all the bottlenecks that people talk about developing drugs stateside. And increasingly you see folks thinking about how we can run phase I trials overseas, right, and build data packages that we can, you know, bring back domestically for phase II efficacy trials. I think that's interesting, directionally, but it's not enough. And you know, I think we need to kind of figure out these two bottlenecks, the making and the testing, even if we can solve the designing part.

Jorge

Oh, I agree. Yeah, that's the bottleneck. You know, we joke about it: what you have to do is get a molecule that can go, you know, first in mice, and then in mutts, and then in monkeys, and then in man.

You know, that takes a long time, and it's just so hard to compress that. And so when you do, you should make the journey worth it, right? So when you fail on the other end of that, like, that's obviously horrible. And so finding ways to make sure that when you walk that path, that it'll be a successful journey as often as possible, is what this industry desperately needs.

00:38:15 The state of AI drug discovery

Erik

AlphaFold solved the protein folding problem, but why didn't it solve drug discovery? Or more broadly, what would it take to get to AI drug discovery? What is sort of the bottleneck, on the tech side at least?

Patrick

On the tech side?

Jorge

Maybe another way to ask the question, because I always ask the founders a version of this question, like the AI ones that are like, “Oh, we're gonna do AI for drug discovery.” So my question that I always like to ask founders is: give me examples of where you think AI is hyped, potentially overly hyped, where there's real hope, like the sort of, “What do we expect,” “What's next,” and where we already see real heft. So like if I asked you like in AI, you know, where is there hype, where is there hope, and where are we seeing heft today?

Patrick

I would say there's hype in toxicity prediction models.

Jorge

Okay. So that's the idea that we will say, I'm going to show you a molecule and the model is going to tell me if it's going to be toxic or not.

Patrick

That's right. There's heft in anything to do with proteins. Obviously protein binding, but increasingly in protein design. I think there's real heft there. And then, you know, where there's hype is in multimodal biological models, whatever that means. And I think, you know, pick your favorite layers. It could be, you know, molecular layers. It could be spatial layers. It could be, you know…I mean, actually I would say there's also heft in the pathology AI prediction models. Like, you know, automating the work of pathologists and radiologists. That's interesting.

Jorge

Yeah. I think that's a very powerful use case for sure.

Patrick

And there's a lot of stuff where you don't have to train, you know, weird biology foundation models and you can write, you know, regulatory filings and reports and things like that. That's impactful and important.

Jorge

So now, going back to Erik's question: why hasn't AI turned out drugs yet? I think that was your question, right?

Patrick

You know, AI for drugs is one of these weird things where everyone who works in the industry is trying to claim that their drug is, like, the first AI-designed molecule. I feel like, you know, increasingly, in just a few years, this will just be a native part of the stack. Just like we use, you know, the internet and we use phones, we're gonna have AI in all parts of the stack. And so it's just going to become a native part of everything that we do. And so, you know, "Why hasn't it worked yet?" comes down to this long multifactorial process that we've been talking about today.

There's designing, there's the making, there's the testing, there's the approvals side of it. And you know, I do think safety and efficacy as the kind of two pillars in the industry are the two things that we need to get right. We need to be able to figure out faster ways that we can predict whether or not a molecule will work and if it's going to be safe or not.

I mean, there are like ways that AI can operationalize this. If you designed a small molecule, you could now computationally dock it to every protein in the proteome and see if it's likely to bind to off-target molecules. You can use this to tune binding selectivity and affinity. That might be ways to predict, you know, safety and efficacy. And, you know, how well will that work? Well, that's a feedback loop that we'll have to actually test in the lab. And that's part of what's slow is the testing, you know, takes real hours, days, months, right, years. And that's really why we've picked at Arc the virtual cell models as our initial wedge because we think it can integrate a lot of these different pieces.
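[Editor's note: a conceptual sketch of the proteome-wide off-target screen just described. `dock_score` is a hypothetical stand-in for a real docking or binder-affinity model; it returns random numbers purely to show the shape of the loop and the selectivity calculation, not to model any real chemistry.]

```python
# Sketch: dock one candidate molecule against every protein in the proteome,
# flag predicted off-target binders, and compute a selectivity margin.
import numpy as np

rng = np.random.default_rng(2)
PROTEOME = [f"protein_{i}" for i in range(20_000)]   # ~size of the human proteome

def dock_score(molecule: str, protein: str) -> float:
    """Hypothetical affinity model; higher = tighter predicted binding."""
    return float(rng.normal())                        # stand-in for a real predictor

def off_target_profile(molecule: str, target: str, threshold: float = 3.5):
    """Dock against every protein and flag predicted off-target binders."""
    scores = {p: dock_score(molecule, p) for p in PROTEOME}
    hits = [p for p, s in scores.items() if s > threshold and p != target]
    selectivity = scores[target] - max(s for p, s in scores.items() if p != target)
    return hits, selectivity

hits, selectivity = off_target_profile("candidate_1", "protein_42")
print(f"{len(hits)} predicted off-target binders; selectivity margin: {selectivity:.2f}")
```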

00:42:04 The bull case for AI in bio

Erik

In Dario Amodei's essay, "Machines of Loving Grace," he predicts, among other things, the prevention of many infectious diseases and the doubling of lifespans, perhaps as soon as the next decade. What's your reaction to the essay, his bullishness, and some of his predictions?

Patrick

I think the core intuition that Dario had was the idea that important scientific discoveries are independent, or they're largely independent, and if they are, you know, statistically independent, then it would stand to reason that we could multi-parallelize. And so if we had models that were sufficiently predictive and useful, you could have not just a hundred of them, but millions, billions of these discovery agents or processes running at a time, which should compress the timeline to new discoveries, and turn it into a computation problem. I think that is a very futuristic framing for something that is actually very tangible today.

And if we can have virtual cell models that work, for example, they can start to do these kinds of things that we've been talking about. We can have, you know, molecular design models, we can have docking models. We can then ask, you know, when you bind to this thing in this cell versus all the other off-target proteins, will the cell be corrected in the right way? These layers of abstraction and complexity start to get to things that feel very tangible through drug discovery. If you could actually traverse these steps reliably and in sequence, you could start to see how you can get the compression. And so I think in the long run, this should be possible.
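[Editor's note: a toy sketch of the independence intuition above. If discovery attempts are statistically independent with a small per-attempt success probability, the expected time to the next discovery shrinks roughly in proportion to how many processes run in parallel. All numbers here are hypothetical.]

```python
# Expected days to the first discovery with m independent parallel processes.
P_SUCCESS = 1e-4          # hypothetical probability any single attempt succeeds
ATTEMPTS_PER_DAY = 100    # hypothetical throughput of one discovery process

def expected_days_to_discovery(m_parallel: int) -> float:
    """Mean of a geometric distribution over days until the first success."""
    daily_p = 1 - (1 - P_SUCCESS) ** (ATTEMPTS_PER_DAY * m_parallel)
    return 1 / daily_p

for m in (1, 100, 10_000):
    print(f"{m:>6} parallel processes -> ~{expected_days_to_discovery(m):,.1f} days")
```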

Jorge

One of the core suppositions in building a good virtual cell model is that we are feeding it all the relevant data.

Patrick

The right data, yeah.

Jorge

The right data. And so, you know, we'll feed it gene expression data or DNA data or, you know, any number of factors, protein-protein interactions, all the things you described. What if we're missing a core element? Like, what if we just haven't discovered the quark or whatever? Like, we just don't know what we don't know, and therefore what we're feeding the model is fundamentally or importantly incomplete.

Patrick

I think that's almost certainly true, right? Like it seems almost obvious that we're not measuring many of the most important things in biology, right? And you can of course find many important exceptions for any of these measurement technologies. Like in biology, we ultimately have two ways to study it in high throughput. It's imaging and sequencing, right? But there are so many other types of things that you would care about that those things aren't necessarily going to do at scale.

And that's really why I think the stuff that we're talking about, of the RNA layer as a mirror for other layers of biology, is one that we spent a lot of time thinking about. And there's a difference between a mechanistic model and a meteorological simulation type of model. So, for example, if you want to predict the weather, right, you can build AI models that will predict whether or not it will rain next Tuesday. It won't explain physically or meteorologically or whatever why and how that happens. But as long as it knows if it's gonna rain next Tuesday, you're probably happy, right? And I would say similarly with a virtual cell model, it may not tell me literally why, just like AlphaFold doesn't tell me literally why and how the protein folded this way. But it just told me the end state, and it was reasonably accurate. I think that would already be very important.

00:45:50 Patrick’s approach to investing in AI

Erik

Shifting gears a little bit: we've been talking about science and biotech, but in addition, you're an elite AI investor more broadly, so I want to talk about where your investment focus is right now, just as it relates to AI more broadly. Where are you excited? Where are you spending time? What are you looking forward to?

Patrick

Yeah, my goal is to really try to figure out ways that we can improve the human experience in our lifetime. If I think about the future that we're gonna leave to our children, there are a few things that if we get them right in our lifetime, will fundamentally change the world and, you know, how we live in it. I think synthetic biology is obviously one. You know, think GLP-1s, right? Things that improve sleep, things that can, you know, improve longevity, right?

These are all things that are kind of, you know, easy to get excited about. I think brain-computer interfaces are another area where we're gonna see really important breakthroughs over the decades to come. And then I think the third is in robotics, both industrial and consumer robotics, that allow us to basically, like, scale physical labor, right, in interesting ways. And, you know, you can kind of see how each of these three things, even in the sort of medium cases of success, really kind of change the world. And so I'm very interested in helping make these kinds of things possible.

In the kind of techno-optimist sort of vision of the world, there's a few different types of scarcity. It's very easy when you do research to come up with important ideas. The hard thing is to tackle them in the right timeframe. It's like, you know, writing futuristic sci-fi things is not that hard. Being able to actually execute on it in the next five years or eight years, much, much harder. And I would say, you know, academic discovery is littered with plenty of ideas that are interesting and important, but, you know, kind of long before their time. And in many ways the story of technology development is, you know, trying to use new technologies to solve old problems, right? Like most of our tools are, you know, for productivity in many ways, whether that's the industrial revolution or the computing revolution, or the current AI revolution. We're trying to kind of do the same stuff.

You know, I think there's a relatively small set of very powerful ideas. New technologies give us new opportunities to attack them, and there's a set of people and teams that are gonna be positioned to be able to do that. They need to have technical innovation and then an intuition about product and business, in a way that's, you know, kind of like the RPG dice roll of the skills that you get in these three domains. People start at different base levels. And, you know, you might have an incredibly technical founder who doesn't know how to think commercially, or someone who's just natively a very commercial thinker who, you know, doesn't have very strong product sense even though they could sell the crap out of it.

And so I think these sort of three broad categories of capabilities you need to kind of bring together, in a way that you can allocate capital to at the right times, in order to make these ideas possible in a really differentiated way. Like, this thing literally wouldn't happen if we didn't get these people together and fund it at the right time in the right way. And that's really what motivates me. And these are the kinds of things that I've been excited about backing, you know, longevity companies like NewLimit, BCI companies like Nudge, robotics companies like The Bot Company. These are some of the examples of things that I think must happen in the world, and therefore should happen, and, you know, how do we actually find the right people at the right time to actually kind of go on the Fellowship of the Ring hunt?

Erik

Yeah. If it's not too difficult, I wanna ask Jorge's question adapted to these additional spaces, robotics, BCIs, and longevity, if appropriate. And the three questions, I believe, were: what's overhyped, where do you see opportunity or a path, and what's got heft already?

Patrick

I think the cool thing about agents, generally, is that they do real work. Compared to, like, the SaaS companies that came before, agents produce real productivity. And I think, you know, they have a lot of errors today, and I would say the computer-use agents will probably trail the coding agents by maybe a year.

But it's coming. And we'll follow the trajectory as these go from doing, you know, minutes of work without error to hours to days. And I think, you know, you're gonna get a completely different product shape as we march through that across legal, BPO, you know, medicine, healthcare, whatever, right?

And we'll kind of follow that as an industry and that's going to be really exciting. And I think that's where we're going to see real heft because most of the economy is services spend. It's not software spend. And, you know, the reason why we're all excited about this stuff is that it can attack the services economy. And I would say like, you know, where is there hype? There's a tremendous amount, right? That's no doubt. The hype is in the model capabilities. And you know, we're working with an architecture that, you know, dates back to 2017. And if you look at the history of deep learning, it's like kind of every eight years, there's something really different. It feels like in 2025 we're really overdue for some net new architecture. And I think there are lots of really interesting research ideas that are bubbling up that could do that thing. And in many ways there's a set of really interesting academic ideas, especially in the golden age of machine learning research from, I don't know, like 2009 to 2015, right? There's so many interesting ideas and little arXiv papers that have like 30 citations or less. And as the marginal cost of compute goes down year on year, I think you're gonna be able to take all of these ideas and actually scale them up, right? You don't see the scaling laws when you're training them at a hundred million or 650 million parameters like back then.

But if you can scale them up to 1B, 7B, 35B, 70B, you start to see whether or not these ideas will pop. And I think that's very exciting because, you know, there's just going to be a lot of opportunity for new superintelligence labs to do things beyond what the kind of established foundation model companies are doing today, as those, you know, in addition to being research teams, are in many ways becoming applied AI companies, right? They need to build product shape for, you know, all kinds of different enterprises and do RL for businesses and make money, right?
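[Editor's note: a minimal sketch of the "scale it up and see if it pops" loop described above. The losses and parameter counts are made-up numbers, not results from any real run; the fit uses the usual saturating power-law form for loss versus model size.]

```python
# Fit L(N) = a * N^(-alpha) + c across a sweep of parameter counts to see
# whether an idea keeps improving with scale. Illustrative numbers only.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, alpha, c):
    return a * n ** (-alpha) + c

# Hypothetical losses from small-scale runs (100M -> 70B parameters).
n_params = np.array([1e8, 6.5e8, 1e9, 7e9, 3.5e10, 7e10])
losses = np.array([3.10, 2.72, 2.61, 2.25, 2.02, 1.95])   # made-up numbers

(a, alpha, c), _ = curve_fit(power_law, n_params, losses, p0=[10.0, 0.1, 1.5])
print(f"fit: L(N) = {a:.2f} * N^(-{alpha:.3f}) + {c:.2f}")
print(f"extrapolated loss at 300B params: {power_law(3e11, a, alpha, c):.2f}")
```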

Or build coding agents and make API revenue and that's important and I think, you know, a timely race to survive today. But I'm just very bullish on the research of say, like a Sakana AI, which was founded by one of the authors of “Attention Is All You Need,” Llion Jones.

And they're doing incredibly interesting stuff on model merging and how you can have sort of, like, evolutionary selection of, you know, different models in a mixture of experts. And I think the sort of opportunities here in the long run, to move beyond just, like, RL gyms, for example, and to kind of figure out new ways to learn and find reward signal, are going to be really exciting.

00:54:02 Arc’s Virtual Cell Challenge

Erik

I think that’s a great place to wrap. Gearing towards closing, anything upcoming for Arc that you'd like us to know about? Anything you want to tease? For people who want to learn more, what should they know about?

Patrick

So AlphaFold in many ways came out of a protein folding competition called CASP [Critical Assessment of Structure Prediction]. And, you know, we created our own Virtual Cell Challenge, at virtualcellchallenge.org, where we have, you know, hundred-thousand-dollar prizes, sponsored by NVIDIA and 10x Genomics and Ultima and others.

And it's an open competition that anyone can enter, where you can train perturbation prediction models, and we can openly and transparently assess these model capabilities, both today and in subsequent years, and follow them to get to that ChatGPT moment, right? And so I'm extremely excited about this. We'd like more people to, you know, train models and apply, both bio and ML experts and engineers from any other domain.

I want this thing to exist in the world. You know, hopefully we're an important part of making that happen, but I'd just be happy that someone does it.

Erik

Yeah. That's an inspiring note to wrap on. Patrick, Jorge, thanks so much for the conversation.

Patrick

Thanks so much guys. Appreciate it.

Jorge

Thanks for having me.

Stay Updated

If you enjoyed the show, please share, follow, and leave us a review on your favorite podcast platform.

Find a16z on X: https://x.com/a16z

Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

Listen to the a16z Podcast on Spotify and Apple Podcasts:

Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
