Russ Fradin & Alex Rampell
The $700 Billion AI productivity problem no one is talking about
You can watch this conversation on YouTube here.
Introduction & The Ad Tech Parallel
Alex Rampell: I’m excited to be here with my friend Russ Fradin.
Russ Fradin: Yeah, good to see you.
Alex: I’ve known you for a long time, and I actually still remember meeting you for the first time. It was through Josh McFarland. Josh was at Google, and he said there’s this guy Russ Fradin who started this company Adify and sold it to Cox for all this money. Back then, 300 million dollars was a lot of money.
Russ: I know. It is amazing.
Alex: Now it’s a B round, but back then that was a huge acquisition. And they were saying, “Oh, Russ, amazing person that pulled this off.” I think we met in Florida on a Silicon Valley Bank trip, and everything comes full circle in the end. AI is probably the hottest thing in the history of the world. But you also worked in what was the hottest thing in the history of the world in Web 1.0. Now there’s this big question, and it reminds me of ad tech. I think it’s a nice little segue, because with ad tech, you’re trying to figure out whether the advertising works. A lot of ad tech is, here’s an advertisement, and there’s this attribution problem. The sale happened. Who is responsible for that sale? Was it the banner ad on Yahoo? Was it the last click that happened on Google? Was it the coupon site that stuffed a cookie on your machine? So part of ad tech is buying ads, but part of it is also, did it work? There’s all sorts of stuff around making AI work, which is technically very challenging. But then there’s the question of did it actually yield a benefit, which is probably the biggest question. There are a lot of myths on this on both sides, but I’d love to hear about the origins of Larridin and how you think about even some similarities between the two.
Russ: There are a lot of parallels really to what happened in the nineties with advertising and the growth of the internet and what we’re seeing with AI. Forget the capital markets perspective. It is funny to think about what is defined as big from an exit these days versus five years ago, ten years ago, twenty years ago, and that’s its own topic. But just when I moved out here, I moved out to Silicon Valley in 1996 and I was the first guy at the first online ad network. In the early days it was just, there are websites, we should put ads on them. Great. How do we do that at scale? Great. What are the metrics we should capture? Then you saw the growth of things like comScore or Nielsen as they moved into television to figure out how do I actually plan this? How do I spend this? How do I give tools? All of the money lived in TV or in radio, and there were these tools, Nielsen, Arbitron, IMS Health on the pharmaceutical side. There were all these tools to help people understand what they were getting when they advertised on television. You had to build that entire stack for the internet.
You had companies like DoubleClick or Flycast where I was, or companies like Omniture building a different part of the stack, companies like comScore building a different part of the stack. Obviously Google and Facebook are two of the most amazing companies ever built, but if it wasn’t for all of that infrastructure, their revenue just wouldn’t have grown as quickly. I really do think we’ll see the same thing in AI now.
The technology is unbelievable, and my core thesis when I was thinking about starting Larridin, after having been the first guy at the first online ad network and having been maybe the first or one of the first two executives at comScore way back, twenty-five years ago, my partner Jim and I sat down and we said, look, every time there’s a tremendous shift in budget, and especially when it happens at a great pace, what happened from TV to digital advertising, what’s happened in a lot of categories from client server to cloud, anytime that happens, people need to rebuild all of the infrastructure.
There’s a great opportunity to build all of these tools around measurement, around governance, not with the goal of stopping anything, frankly, with the goal of accelerating it. Because if I am a large company, yes, I’m going to experiment a ton with AI today. It’s the most exciting thing that’s happened in the last twenty years from a technology standpoint. It’s amazing. It’s wonderful. But also there are very boring but important questions. I have thirty-five thousand people in my workforce. They can’t all get retrained all at once with perfect knowledge and perfect security. How does it affect my D&O insurance? Was the project ultimately valuable?
We really wanted to start a company about how you would build the measurement and governance set of tools, not to be a gatekeeper, but to empower more of this spending. I think as we grow, we will be the best friend to all of the AI companies.
Software Eating Labor: The Opportunity
Alex: Maybe we can get into how you’re doing this, but just to level set a little bit, and I love this framing that you gave me. I’ve stolen it, and when I steal a phrase, it’s the most sincere form of flattery. I just released a little video about how software is eating labor. Software eats the world, this was a thesis that our firm was founded on, but it’s eating labor, and that doesn’t actually mean that jobs are going to go away. Largely what it means is that people are going to be ten times more productive, where I can’t hire anybody to do this job, but I can hire AI to do it.
So you have companies where their software budget is very small but their labor budget is enormous. Step one of the mega opportunity that excites us as a firm is that people say they’re going to start hiring software. But now that means your software budget is enormous. Right now, if you have a 10 billion dollar labor budget and a one dollar software budget, you’re not going to try to optimize your one dollar software budget. You’re really going to say, “Do I need to hire more people? Can I make people more productive?” All these things are going through people’s minds right now. This is yielding a lot of the mega growth curves of the AI software companies. But now this chart is going to be a little bit more balanced. Of that 10 billion dollars of labor, maybe that goes to eight, and now you spend 2 billion or 1 billion on software.
So the net spending for the company is actually lower, the company is more profitable, productivity gains galore. But then is this productive? I always want to know if the humans are productive, but then is the software yielding more productivity? How do I measure that? So everybody’s excited about this gold rush, I’m going to use these tools, but do they work, how well do they work, and what’s the baseline? I’ve stolen your framing, if Chase spends 18 billion on software or whatever and now they double that, they need to know if they’re getting their money’s worth. They need to figure out if this is actually efficient spend.
Russ: A thing you will hear said frequently by the people running the largest AI companies in the world, the people running the largest firms investing in AI in the world, is something along the lines of, today, global IT spend is 1 trillion dollars, and we think because of AI and agents that could go to 10 trillion.
Let’s ignore whether that’s true or false. It’s certainly the bull case for Nvidia, for OpenAI, for all of the other things that we spend all of our time doing. So when you think about it, if I remember correctly, JP Morgan Chase’s global IT spend is on the order of 18 or 19 billion dollars, and they spend a couple hundred billion dollars a year on people. If you really think about that, is their IT spend going to go from 18 billion to 180 billion? It seems unlikely in the next couple months, but it’s certainly going to go up. And if it’s going to go up, what does the CFO need to understand? At the same time, because of the pace, a way I frame this that I think everyone knows, but it’s important to say out loud, is yes, there have been tons of shifts. We’ve shifted society a million times. We’ve shifted from farms to cities. We all know all of these examples. But we’ve never had a time where we’ve expected the entire global workforce of knowledge workers to be retrained immediately on a new set of tools that didn’t exist six months ago. There is an element where everyone needs to figure this out as we go along.
So, what did we start with as a company? Our first set of tools is just, what do you have in your company, and are people flat-out using it? You’ve spent all of this money, are people using it? What you find is that eighty-something percent of our customers find far more tools being used by their employees than they know about and have licensed. That doesn’t mean it was bad, by the way. Some of those tools are dangerous and they should worry about that. Some of those tools might be very popular and they need to bring them into the fold and understand what’s happening. But from an IT standpoint, you normally don’t allow software to just be used across your organization with access to your organization’s data while having no idea what’s happening.
We’re letting that happen in AI all the time. And I don’t really say that to our customers as a fear sell, it’s to be expected. Things are moving quickly; you have to know what’s going on. So we start with the baseline of just flat-out what’s happening. The second set of things we try to solve is, how do we get people using this stuff more in a productive way? On the AI side, on the agent side, how do we get people using this in their workflow? If I am a marketer working at General Mills, how is General Mills going to help me use these tools?
What I’ve generally found with employees is that if you really want to drive employee usage of tools, you have to make them feel safe so they won’t look dumb, and you have to make them understand that they can use this safely without getting fired. It’s one thing if you’re twenty-two years old and you’ve been using these tools effectively your entire life since high school. But if you’re a forty-two-year-old person who’s had a twenty-something-year career and you’re working in your job every day, and by the way, you also have things you do at home, business travel, all of these things, also, you have to become an AI expert. You really would rather not look dumb, and you’d rather not accidentally upload the wrong data and get yourself fired. This is actually a bigger issue in some countries where there are a bunch of EU regulations around AI that do matter. If I’m an employee at a company, I don’t want to look dumb.
So if I’m the CFO, we bought all these tools, what did we actually buy? That’s number one. Number two, how do we get people actually using these tools? Because the usage on these tools in the enterprise is less than people would think today, which makes sense, by the way. I’m going to get to the productivity thing in a second, but anyone listening to this, if you’ve ever been a part of any software rollout at any enterprise, ever, a very boring but very important question is, how do we drive actual usage? Sure, everybody uses email. People use Workday because if you don’t use Workday, you’re going to get fired, you’re not going to get your paycheck. But most enterprise software, your intranet software from SharePoint, things like that, are used by a relatively small set of the population that you wish were using it. If the goal is to get people more productive using AI tools, you want to drive actual employee engagement. So we built a suite of tools around that.
Then you have to get into productivity, which is, did this get people actually more productive? Is my organization actually more productive? Today, I know where I want to go with Larridin. What I think about today is that what we’re doing on the productivity side is not as far as I’d like it to go, but it’s certainly better than anything that exists in the market.
What we’re doing today is marrying the behavioral data that no one else has, which is, is Alex a heavy user of ChatGPT or not? Just flat-out. We’re not doing it at the individual level, but we’ll use that example for the podcast because we have to worry about the employee privacy concerns that companies have for their own employees. At the end of the day, I want to understand, did my users in the legal department that were using this expensive legal tool I bought, are they more productive than my users in the legal department that are not? Because what I’ve definitely done is I’ve bought this software and driven up my opex, but are they more productive? Are my marketers that are using Claude or ChatGPT actually more productive?
Alex: And how do you measure that?
Russ: So today we do it the only way productivity research has ever existed so far, which is we take the normal productivity survey market research that people have done for fifty years. Not ideal, but it is the gold standard. It’s McKinsey. It’s Towers Watson. It’s Accenture. And we lay on top of it proprietary data that other folks don’t have, which is actual usage. The way I think of it is, the worst way to measure productivity is to send a survey to my employees and say, “Do you feel more productive today from using ChatGPT?” First of all, there’s a definition issue. Second of all, people are going to answer the way you hope they’ll answer. But third, you have no idea if they’re actually using the tools. So a better way to do that, I learned this years ago at comScore. One of the many things I did at comScore is I ran our survey market research group, and one of the reasons the comScore surveys were great is we had the behavioral data married with the actual survey responses. We’re doing the same thing here.
Where I ultimately would like to get to is full passive measurement on productivity. The truth with that is for enterprises, that’s going to require a level of additional data sharing that we’re not getting yet from customers. We will eventually get there.
Measuring AI Productivity in the Enterprise
Alex: To put a finer point on this, I’m a lawyer, and I work at some big company. Productivity, to a certain extent, if I only have to work four hours a day versus eight hours a day, that’s great for me. I feel like it’s a win because I often think about the principal-agent problem. Everybody is an agent, and then there’s the ethereal being of the corporation, which is the principal. Yeah, I guess if I own stock in my corporation, I want it to be more profitable, but really, I want to work as little as possible and get paid as much as possible. That’s every individual agent’s job. Then you have these tools. So theoretically, everybody’s going to adopt these things if they get to be lazier.
Russ: Yep.
Alex: Everybody wants to be lazier.
Russ: Sure.
Alex: They want to be lazier and richer. I feel these are the universal human conditions.
Russ: There’s some small set that want promotions, but I agree.
Alex: But that’s richer, right?
Russ: That’s true.
Alex: So if you can get the promotion by doing less work, I’m sure people would opt for that. But think about it: everybody will use these tools. I was telling you this sad story because our kids go to the same school. My younger kid got busted cheating with ChatGPT. Clearly a productivity gain for him, because it allowed him to be lazier and richer with his video game time, until we confiscated his phone.
Russ: But also a set of rules that you can get in trouble for.
Alex: Take that example. You can imagine the individual agent, the human being, the lawyer in this example is benefiting, but then does the company benefit? To a certain extent, I’m paying you the same amount of money. I want you to work for eight hours a day. So actually, my expectation should be that if you’re, I don’t know what the lawyer does, but if you’re drafting legal drafts and you can now do it in four hours versus eight, and spend four hours playing golf, you’re thrilled. You got a productivity gain; the company didn’t actually benefit.
What you want is for both parties to benefit, which is always tough because sometimes it’s very hard to sell products to people that eliminate their jobs. That’s probably the hardest part. But maybe taking this example and riffing on it, now in four hours I can do what used to take me eight. The company’s saying, “Oh wow, you’re operating your baseline, but actually you should be able to do twice as much with this tool.” So how do you define the baseline? How do you address that problem? Am I framing it the right way?
Russ: I think you’re certainly framing it a right way for certain sizes of companies. We all know for Silicon Valley, because of the competition and the equity form of competition, what you’ll have is, if I can get done in four hours what I could have done in eight, I’m just going to work four more hours, and then another four. That’s very different. There’s some subset of workers at all size companies, probably a larger percent in Silicon Valley, but a smaller percent at GE. There are people at GE who want to one day become the CEO of GE, and those people will work as much as they possibly can. So there’s some subset of workers there.
For the rest, there’s an interesting question about how management is going to evolve overall. I think behind all of this, the first question we’re trying to solve is, do people use these? From a corporation standpoint, for our measures of productivity, which is we’re defining it with each of our customers, for our measures of productivity, as we ping folks, is there a difference in productivity between the heavy users and the lighter users? What we want to measure with that, we’re not doing this today, what we want to measure with that is then some concept of raw tonnage of work.
There’s this lingua franca when we talk about employees of FTE. We all know that you work differently than I work and various people work, and we all know that. Yet, if I’m the CFO of JP Morgan, I have a fundamental horse sense for what 1000 FTE do versus 500 FTE versus 2,000 FTE, and AI is going to break all of that for sure.
Our main goal today is just to build the baseline for our customers, which is, at the end of the day, are the people using these tools fundamentally more productive than the folks that aren’t? Layer on top of that tonnage of amount of time worked. It’s never perfect. People are on vacation. You have to measure this as groups. Any given person was out sick one day or was on a flight one day or was at a training one day; it’s impossible to measure. From a system standpoint it seems they weren’t working, when they actually were; they were doing a training. So think of this as aggregate data. None of it is ever useful at the individual level, at the Russ Fradin level. To get existential, was I productive yesterday? It is unknowable. I can’t know if I was productive yesterday.
Alex: I think you were.
Russ: I was all for it. But what we’re trying to do at the systems level for companies is understand, is there some correlation between specific use of these tools on an advanced side, light side, heavy side, heavy user of the tool, lighter user of the tool. Were the users more productive in their job? Were the employees more productive in their job? Then measure on top of that amount of time those segments of workers were actually working, because the goal if I’m a CFO today is not to understand did Ben do a good job? And did Tina do a good job? The goal is to understand, I have definitely been asked to spend fifty percent more on opex. Did I drive something? By the way, there are interesting questions. We know this around staffing size and will companies get more done because people will actually work eight hours?
I suspect it is true that this is one of the things managers do. I suspect it is true over time that if it becomes clear that all of your employees are now working four hours a day instead of eight, you will probably decide to have fewer employees, and the remaining employees will work six hours a day. So I’m not sure I really buy that in the next couple of years you will see people in large companies actually just working half as much. A sole proprietor is what it is. If I were a sole proprietor lawyer, my only measure of productivity today is to myself anyway. It’s how hard do I want to work and how much money do I want to make?
Alex: Well there the principal is the agent.
Russ: Right, so that’s fine.
Alex: This is why it’s so important, and this is why, candidly, I love what you do; obviously, that’s why you’re here. There is no baseline. It’s, “Did this work?” Well, first you have to define the outputs. You have the inputs, which are largely just time and money, and then you have the outputs. Part of what’s complicated is actually coming up with an output. Do you know Goodhart’s Law?
Russ: Go ahead.
Alex: Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure. So if I say I’m going to judge you based on how many emails are sent every day, that’s a measure. But once it becomes a target, I want you to send more emails, the measurement gets corrupted, because now people decide to do more things to hit this target, and it’s no longer an objective measure. Part of it is, if I’m trying to figure out, okay, there’s a product called Harvey. A lot of people love Harvey, and it seems to make people a lot more productive. But compared to what?
Russ: To me, the only way you answer that, and we will talk about Harvey; nothing against Harvey, I’m sure Harvey’s amazing. To me, the only way to really understand this, and that’s why I think the traditional way companies are doing this just doesn’t work at all, which is, “Hey, let’s survey the people that use Harvey and ask them if they were productive.” By the way, they will all say yes because no one ever answers they weren’t, number one. Number two, my boss paid for the product. I’m going to say it was a good product unless we all universally hate it, which I assume is not true of Harvey because everyone seems to like Harvey, so that’s wonderful.
I think all you can actually do, and it’s why I think the traditional way of measuring this is broken, and why we started Larridin, is understand, without asking people, how much usage of Harvey these people are actually doing. Say we have six people. Whatever they’d say on a survey, two have never logged in. We’ve all seen the joke: your project is due in an hour, you said you were caught up, and then, “Oh shit, I have to ask permission for this Google doc.” So two of the six, and I’m making these numbers up, actually signed up for Harvey the day they were told and then never went back to it. They’re very happy with the way they work; they worked that way all day, every day. Two of the six log in and use it a little bit, and two of the six use it all the time.
The only way I can even begin to understand if that software is valuable is by knowing that data passively without asking those folks a question, then asking everyone the same questions about productivity, and measuring it with the amount of work, actual output. If I take those three things together, then I can begin to form an understanding of whether Harvey was useful.
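The three-signal approach described here, passive usage, a common survey, and an output measure, can be pictured as a toy analysis. Everything below (field names, numbers, the segmentation thresholds) is hypothetical and purely illustrative of the correlation being described, not Larridin’s actual implementation.

```python
from statistics import mean

# Hypothetical per-employee records: passive usage (sessions/week),
# a survey productivity score (1-5), and a rough output measure.
# All numbers are made up for illustration.
records = [
    {"sessions": 0,  "survey": 3.1, "output": 40},
    {"sessions": 0,  "survey": 3.4, "output": 42},
    {"sessions": 2,  "survey": 3.6, "output": 45},
    {"sessions": 3,  "survey": 3.5, "output": 47},
    {"sessions": 12, "survey": 4.2, "output": 58},
    {"sessions": 15, "survey": 4.4, "output": 61},
]

def segment(r):
    """Bucket employees by observed usage, not self-reported usage."""
    if r["sessions"] == 0:
        return "non-user"
    return "heavy" if r["sessions"] >= 10 else "light"

def group_stats(records):
    """Mean survey score and mean output per usage segment."""
    groups = {}
    for r in records:
        groups.setdefault(segment(r), []).append(r)
    return {
        seg: {
            "survey": round(mean(r["survey"] for r in rs), 2),
            "output": round(mean(r["output"] for r in rs), 1),
        }
        for seg, rs in groups.items()
    }

stats = group_stats(records)
```

The point of the sketch is the shape of the analysis: usage segments come from behavioral data, while the survey and output numbers are the same questions asked of everyone, so differences between segments (rather than absolute scores) carry the signal.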
You and I had a discussion with someone where they were talking about one of the ways they incent their engineers at their company is they have a leaderboard of the amount of money each engineer spends on Claude Code. The founder was talking about how he went to one of his best engineers and said, “I don’t understand what’s happening. You’re one of our best engineers. Why aren’t you spending any money with Cursor? I really don’t get what’s going on.” That was an example of, for these companies where they’re very developer-heavy, you probably don’t need us. If you’re a very developer-heavy company, probably measuring the amount of money spent on Cursor plus your normal management understanding of “Is this person actually working?” If they come in for two hours a day, you may be happy with that, you may not. That’s going to be company-specific, lifestyle-specific, and culture-specific. But you’re in the office, I see you’re there, you’re not spending any money on Cursor, what’s up? We have these metrics.
The issue, though, is we see this explosion of hundreds of AI tools, and companies have hundreds of roles. That’s why we want to try and replace the McKinsey Corporate Health Index, the Towers Watson survey, or the Accenture service with some real useful data around AI. But I think that Cursor example really crystallized in my mind what you’d want to be able to do for a whole company: how much did this person work? So I have that quantitative judgment. Qualitatively as a manager, and we’re not replacing this, did they do a good job? Then fundamentally, did they use the tools? When you take those three things together, that’s the only way you’re going to have measurement.
When you think about my micro world, if you really think JP Morgan is going to go from spending 18 billion in IT to 30 billion or 40 billion, the CFO’s not just going to say “no problem.” Today our customer is the CIO; I think over time our customer becomes a partnership with the CIO and the CFO.
The numbers are just big. It’s like cloud spend. The numbers are just so big. People are going to pay attention. It’s going way beyond experimental.
Alex: And obviously the companies themselves, if you ask any company that is trying to sell you anything and you ask, “Does your product work?” They will probably ninety-nine times out of ninety-nine say, “Of course it does. It’s the best.” You need to have an independent arbiter, and that’s where you guys come in.
But double-clicking on this point from before, it’s almost like reinforcement learning at a company-wide level. What is the outcome that I’m looking for? Sometimes it’s clear. This is where the measurement and target thing is also relevant, because it’s like, “I want you to write more lines of code.” Whoa. There’s a measurement of how many lines of code written, but if it becomes the target, then you’re just writing gobbledygook code. For sales, it’s very easy, I want you to sell more stuff. But there’s a lot of latency between when you go talk to a customer and when you collect money. So you might have targets in between, you might have measurements in between. If you’re a lawyer, draft more contracts. So how do you try to define the goals? Because some of them are just background information that’s going through, emails that are being sent, Slacks that were sent, Google Docs that were edited. There are these very clear measurements, but those aren’t necessarily outputs.
Russ: I’ll tell you an interesting story to reinforce your point on why this is hard, and then I’ll talk about how we’re trying to solve it. I was talking to the head of sales at a very hot, few-billion-dollar, venture-funded Silicon Valley company, and he said, to your point, with quota you can measure output. He was talking about how little it turned out he actually knew. His boss came in one day and told him, as a cost-cutting measure, he wanted to shut down the Seattle office and they were just going to get rid of everyone in Seattle. Some were salespeople, some were not salespeople; they were just getting rid of Seattle. But all those salespeople were hitting quota, we know this. He said, “I don’t care. We’re letting those people go. It’s a headcount reduction.”
He said, “I have to be honest. The next quarter, it turned out we just picked up the quota just fine. It turned out, even though we thought we had our quota set and we thought everyone was productive, it turned out everybody else just picked up twenty-five percent more. It all worked fine.” Those people got paid more. From a human cost, it was unfortunate for those people that got let go. But from a company standpoint, it turned out we felt we were productive, and actually it turned out we could be much more productive.
I don’t say that to say that we have a magical solution at Larridin. I say that as we are on a journey with everyone else, and we’re trying to lead the pack of how we will get there. Honestly, same thing with advertising. So first, your point on measurement. This is why I said earlier, if you think about anytime true third-party measurement exists, there’s this interesting dynamic. We saw this at comScore, but everyone has seen this anytime they tried to build a third-party measurement company. Omniture saw this in the early days. Google at some point fought it and then actually bought Urchin and built Google Analytics, because it turned out it’s actually good when your customers can track value if what you do is actually valuable.
My general perspective is, I think today a lot of the AI companies probably look askance at us, but I think over time, certainly the AI tools that actually provide value are going to love us. The way you will ultimately unlock real enterprise budget is because people believe these tools are actually valuable.
What we do today, and this is a journey; the company’s about a year old, is we work with all of our customers. We say, “Here are the baseline productivity questions that are gold standard that people have asked for seventy years. There’s pros and cons to them, but you have to start somewhere. This is where we start. And let’s define a set of metrics for each of your departments.”
One of the things we’ve found that actually seems to matter, not as a metric that companies share with their employees, because then you have the Goodhart’s Law problem, but as an actual reality on the ground, is fundamental responsiveness. There is an element of, I spend some amount of money on my legal department, and I am happy with the amount of productivity they do today. So there is an element of, unless I’m trying to fire lawyers, which I’m not, you can argue, how would I measure the value of software? I guess my lawyers might be happier, but I don’t have a churn problem there, so frankly, why should I do this?
What we found is just almost an interdepartmental SLA. It turns out if I roll out these tools and I’m not firing employees, because one way to look at this is, could I fire half my lawyers? It turns out companies don’t really like firing people. Companies do fire people if they have to, but I’ve actually never met a CFO that got excited about firing thirty percent of the workforce outside of call centers. That’s a different issue we could talk about, companies treat their call center employees differently from the rest of their employees. But outside of call centers, I’ve never met a CFO who if you went to and said, “You can fire half your FP&A people,” he doesn’t want to fire Tina. He knows Tina. He’s met Tina’s husband and children. He doesn’t want to fire Tina. He’d like Tina to be happier and more productive, and actually, he’d like her to do a great job and never quit. Companies don’t really like churn.
One of the metrics we found that people seem quite excited about is just, did this raise or lower the interdepartmental responsiveness? A measure would be, am I now comfortable sending more things to legal? If I’m going to keep my legal department the same size, I’m not going to start suing more people, we’re talking about companies here, not law firms, where there’s a different measure of productivity. They’re cost centers, not profit centers. So one thing to do is, did, over time, because my lawyers are now more productive, are other departments asking them more questions? Are they getting their responses faster? When I am in product and I’m asking for input from engineers, are they responding more quickly? That is a good way for me to see behaviorally we’ve become more productive. That’s not lines of code.
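The interdepartmental responsiveness idea can be pictured as a simple latency metric: how long a department takes, on median, to answer requests from other departments. The sketch below is hypothetical (the data, field layout, and department names are invented for illustration) and is not how Larridin computes anything.

```python
from datetime import datetime
from statistics import median

# Hypothetical cross-department requests: (department asked, time the
# question was sent, time a substantive reply arrived).
requests = [
    ("legal",       "2024-03-01T09:00", "2024-03-01T15:00"),
    ("legal",       "2024-03-02T10:00", "2024-03-03T10:00"),
    ("legal",       "2024-03-04T11:00", "2024-03-04T13:00"),
    ("engineering", "2024-03-01T09:00", "2024-03-01T10:30"),
    ("engineering", "2024-03-02T14:00", "2024-03-02T17:00"),
]

def median_response_hours(requests, dept):
    """Median hours between a request to `dept` and its reply."""
    fmt = "%Y-%m-%dT%H:%M"
    latencies = [
        (datetime.strptime(replied, fmt)
         - datetime.strptime(asked, fmt)).total_seconds() / 3600
        for d, asked, replied in requests
        if d == dept
    ]
    return median(latencies)

legal_latency = median_response_hours(requests, "legal")
eng_latency = median_response_hours(requests, "engineering")
```

Tracking this number over time per department, rather than exposing it as a target to employees, is what sidesteps the Goodhart’s Law problem raised earlier: it is observed behavior, not a quota.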
Now, by the way, I agree, if you expose the metric and say, “Hey, you better be responsive,” people can lie. They can send Slack messages back and forth. But what I’d really like to understand is, as a map, which of my departments use these tools more, and do they become more responsive to my other departments? Because there’s an element of, when you’re at a big company, people know this, one of the reasons small companies do so well in innovation is there’s just a giant coordination problem for all of these companies, and we know this. In Silicon Valley, it’s fun to make fun of these companies, but actually, every entrepreneur’s secret dream is to become so large that they have a giant bureaucratic company. Google did not plan to have a giant bureaucracy thirty years ago. They just became so successful that they now do have a giant bureaucracy.
Survey Findings: $700B in AI Spend, 70% Wasted
Alex: That's a good segue into the state of enterprise AI. So you went out with this whole list, you talked to 350 people?
Russ: Yeah. 350 heads of IT at major companies.
Alex: And across the whole gamut, right? It wasn't just Silicon Valley companies?
Russ: No. In all honesty, my whole career, basically, I spent a couple years helping my friend at Carbon, and I spent a year trying to fix wine.com, but other than that, my whole career has been selling software to large companies, mostly large companies or older companies. Yes, there’s the occasional Silicon Valley company that grows very quickly, but if you are in the Fortune 500, you are going to be twenty-plus years old ninety-nine percent of the time. If you’re going to sell to someone with more than a thousand employees, they’re almost by definition an older company.
Alex: So maybe give us the highlights of what you learned.
Russ: We saw a bunch of different things, and people have seen this before. You'll see people turn this into clickbaity fearmongering; I don't really think of it that way. So first of all, we know this from Gartner: there's 700 billion dollars being spent on enterprise AI. It's growing very quickly, and it's going to keep growing quickly. One of the things we found is that something like seventy percent of the leaders we talked to said, "We are sure we are wasting money here. It's being spent so quickly, and, by the way, shame on us, we had no system to measure this in the first place."
I'll get back to the report in a second, but I was talking to a customer today. Why did we sign them as a customer? They're a very profitable business owned by a PE firm, and their bosses, their PE owners, gave them five things they had to do this year, and one of the five was adopt AI across the organization. He said, "Every board meeting I go into, for my other four goals, I have some report of how we are doing against them. And on AI, all I have is the amount of stuff we bought. It's not—"
Alex: So yes, we adopted AI. We have a large family of AI. We adopted all these AI children.
Russ: Yeah, we bought all of it; it's all great. But it turns out we actually want it to work. What we found is these leaders, maybe they're right that seventy percent of their projects are failing. Regardless of whether they're right, it's a giant problem that they feel that way, because they have no system to figure it out in the first place.
No one believes seventy-five percent of their ad spend is failing. It’s not because their ad planners are smarter than their AI buyers. It’s because there are twenty years of systems in place to help me understand when I buy this ad campaign, when I spend this money, when I do this app install, whatever it is, did it actually drive value for me? We just don’t really have that in AI outside of some very specific verticals.
Really, we found three big things. One, you saw the size of the AI spend. Two, they believe seventy-something percent of AI projects are wasted. And three, eighty to eighty-five percent of the companies we talked to said they really believe they only have the next eighteen months to either become a leader or fall behind.
I think one of the reasons you've seen this giant unlock in budget is that there's tremendous anxiety at these enterprises: "We are going to lose if we don't adopt this stuff." So we're adopting it quickly. We have no particular idea if it's succeeding. And our employees, by the way, a forgotten group in all of this AI, aren't really using it.
At my last company, we built a very large HR technology company. We sold to heads of HR and touched all the employees in the company. As we've talked to a lot of our old customers, who aren't really our customers today but are influencers, what they will all say at all of these large companies is, "Our employees are really worried." There's a base level of worry about AI and the economy and all that stuff, but it's not even that they're worried they're going to lose their job; it's that they're getting told to use a new system all day, every day. Generally, if you work in a large company, there are one or two new systems initiatives a year. Now there are twenty new tools. They don't know what they're allowed to do, and they have no training. How do I get people using these tools?
You have this weird, almost perfect storm, it’s why we’re excited about Larridin. You have this perfect storm of tremendous growth in budget, tremendous anxiety that none of it is working, and tremendous anxiety from their employees about what they’re even allowed to do.
What we’re trying to do, I don’t think we solve all of that, that would be an absurd thing to say, but I think we really help with all of that. What is your plan to measure this in the first place? Did anyone use it? Did they become more productive when they did? How do you give them the tools to use it more?
Alex: That last point is super interesting as well because there’s the “did it work?”, “how well did it work?”, “what are the measurements?”, and “make sure that the measurements don’t become targets,” all the stuff that we just talked about. Then, to use the metaphor of my son who cheated on his math homework, there are people who are just like, “Wow,” they’re the go-getters in the company.
This is actually why I am convinced that AI is underhyped. We have our little group chat where we have another friend who says, “Oh, all this stuff is overhyped and it’s going to zero.”
Russ: Totally wrong. Every time I use the AI, it’s amazing.
Alex: You have the nineteen-year-old kid, or my thirteen-year-old son who is saying, “Wow. Normally homework would take me two hours. Now it takes me one second.”
Russ: It’s amazing.
Alex: And obviously that’s bad, that’s why we confiscated his iPhone. But there are these productivity unlocks where it’s probably not going to happen top-down. It’s somebody in the company. Sometimes, not to oversimplify human behavior, it’s, “I want to be lazy and I want to be rich.” These are the two things that are motivating people, and “I found this tool that allows me to be lazier and richer that actually helps the company.” So not the cheating. It’s, “I know that my boss was thinking this would take eight hours. I’ve figured out a way to do it in five seconds.”
Russ: And it’s really good, by the way. It’s amazing!
Alex: It’s really good, and the worst thing that can happen, this is the inverse of everything that we just talked about, the worst thing that can happen is that guy keeps it a secret, because he might be afraid. It’s like, “Am I allowed to use this?” But what you should do, this is how AI will go from underhyped to correctly hyped and correctly diffused, there’s somebody at every big company who has figured out, “I could do something in one minute that used to take eight hours.” We need to make this person a hero, memorialize this and push it out through the entire company. So how do you do that?
Building Tools for Engagement and Governance
Russ: This was my point on what we're doing on the AI engagement side. That's a great question. This is one of those areas where everyone's interests are aligned. The employee that's working very hard loves recognition, and he'd like his coworkers to come up to speed. The employee that's scared wants support and wants training. And companies actually want their employees to be more productive. I know it's a fun thing for a subset of people to tweet about, but I've spent thirty years selling things to CEOs, and I have yet to find the CEO who wakes up in the morning and wants to run a smaller company. He wants more employees, more profit, more revenue. Contrary to popular belief, they want more employees; they like running big companies, they do. You can find old interviews of Larry Page talking about his plan for Google to have a million employees one day, and how he was spending a lot of time thinking about self-driving cars to move cars around the parking lot, because this was before remote work, and literally, where were a million employees going to park? I read that fifteen years ago and it stuck in the back of my mind. I have never met a CEO who wants to run a smaller company.
By the way, a totally unrelated point on the capital markets: if you ever meet a CEO of a conglomerate, they never want to break up the conglomerate, because they like running bigger companies. It's more fun. I've had my companies grow, and they're more fun when they're bigger. It is. It's super cool.
From an employee standpoint, what we built with this Nexus product is effectively a product where we said—I'll use an anecdote. I was in the UK in July and I went on a bunch of sales calls. I was talking to someone at a bank, a very large, very regulated European bank. Those are among the most regulated folks in the world, and the slowest to adopt new technology, for good reasons, honestly. They were telling me a story about how they had a twenty-eight-year-old guy, I don't remember what level that makes you at an investment bank, so let's say a director, who was using ChatGPT really, really well on the investment banking side of the business. They had him create a thirty-slide deck, and they did a global call for everyone in the investment bank for this guy to spend an hour walking people through how to use ChatGPT.
I'm sure that was very cool for him. But that's absurd. That's an absurd way to hope people adopt world-changing technology. Another absurd thing to do is to go out and buy some LMS course that HR is going to buy, where the secret to a lot of LMS courses is that, other than things you must do or you will lose your job, like sexual harassment training or HIPAA training in certain organizations, no one does them. They just don't go. So how do I actually get people using these tools? This was my point from earlier: you want to help them, first, not look dumb, and second, know they won't get fired.
What we effectively did is built these wrappers that exist around the models. We don’t tell people to use Claude or to use Gemini or to use ChatGPT. Great. We’ve never presumed to tell people to use that, and they wouldn’t listen to us anyway.
What we did, though, is we said, "What's the core of this? The core of this is prompting well." So we built an effectively auto-updating, constantly evolving prompt library based on your role. So if I'm in FP&A at an org, the people that are using it well are effectively the authors here, the publishers. And for the folks in FP&A that are maybe later adopters, now I can go in, see these tools, and see what actually works well for the org.
This, by the way, also has a very nice upside to the organization of almost being like a prompt CRM, so that when the twenty-seven-year-old who’s amazing and ambitious and did all of this stuff leaves and goes somewhere else, that is a shame that she has left our company. But much like with email, much like with CRM for context, to the extent you think AI is going to matter and prompting is going to matter, I want to understand as an organization what works well. The only way to do that is to capture that, to understand that, to catalog that, to build leaderboards, to build tracking.
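As a rough illustration of the "prompt CRM" idea, here is a toy sketch of a role-based prompt library where heavy users publish prompts and later adopters browse what has worked, ranked by reuse. The class, method names, roles, and prompts are all invented for the example; this is not Larridin's actual API.

```python
from collections import defaultdict

class PromptLibrary:
    """Toy role-based prompt library; all names here are illustrative."""

    def __init__(self):
        self.prompts = defaultdict(list)  # role -> list of (author, prompt_text)
        self.uses = defaultdict(int)      # prompt_text -> times reused

    def publish(self, role, author, text):
        # Heavy users are effectively the authors/publishers for their role.
        self.prompts[role].append((author, text))

    def use(self, text):
        # Record a reuse, so the ranking reflects what actually gets adopted.
        self.uses[text] += 1

    def browse(self, role):
        # Later adopters see their role's prompts, most-reused first.
        return sorted(self.prompts[role], key=lambda p: self.uses[p[1]], reverse=True)

lib = PromptLibrary()
lib.publish("fp&a", "ambitious_analyst", "Summarize variance vs. budget by cost center")
lib.publish("fp&a", "ambitious_analyst", "Draft the monthly close commentary")
lib.use("Draft the monthly close commentary")
top_author, top_prompt = lib.browse("fp&a")[0]
# top_prompt is the close-commentary prompt, because it has been reused.
```

Because the prompts live in the library rather than in one person's head, they survive when the ambitious author leaves, which is the CRM analogy.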
So if you think about it, if we know what the prompts were and we have our productivity measure on the back end, the goal is to have the head of FP&A say, "Great. By rolling out this tool, I was able to take those two users that never logged into the tool and get them logging in, and when they do, they're not going to feel dumb." There you go. Here's what's going to make my day more productive.
Then the other thing we built, because again, people are also worried about getting fired. They’re worried about getting fired because of the economy, because of AI, because of whatever. It’s a new tool. By the way, when you’re talking about European banks, there’s a lot of regulation. It’s a legitimate concern that if our employees do the wrong thing, we will get fined. Forget whether you fire them; these companies don’t want to get fined.
The other thing we did is we basically trained our own customized Llama model to block people from asking questions that are illegal or the company doesn’t want you to. We’re not talking about hackers here; true bad actors in the company have plenty of security solutions. What we’re really talking about is the X percent of the people who—I’m in people ops and HR at a large company, I’m supposed to do a workforce analysis. Am I allowed to go into ChatGPT and load in our full employee database with race and gender? I don’t know. I would like to not get fired. Maybe I’m allowed to, maybe I’m not.
I think it’s incumbent on the company to say to their employees, “Here is a safe space. Nothing you can do here is going to get you fired.” Oh, Alex, “You’re not allowed to upload that. That has social security data. Don’t share that. You’re not allowed to ask that prompt because in Europe, we’re not allowed to use AI to write employee reviews.” I don’t know if that’s a good law or bad law, I didn’t write the law, but there are companies that look at the EU AI regulations and say to themselves, “Our read of the regulation is we believe it’s illegal and we will get fined if our employees use AI tools to do employee reviews.” So great, if I am a European-based company, I want my employees using AI. I have to block them from using it for those use cases.
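Russ describes a customized Llama model doing this classification. As a simplified, rule-based stand-in, a guard like the following sketches the shape of the interface; the policies, patterns, and messages below are invented examples, not real company rules:

```python
import re

# Simplified rule-based stand-in for the policy guard (the real product,
# per the conversation, uses a customized Llama model, not regexes).
# These policies and their wording are invented for illustration.
POLICIES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
     "Blocked: that looks like a social security number; don't share it."),
    (re.compile(r"employee review", re.IGNORECASE),
     "Blocked: company policy (EU AI rules) forbids using AI for employee reviews."),
]

def check_prompt(prompt):
    """Return (allowed, reason); always explain why, so no one feels unsafe."""
    for pattern, reason in POLICIES:
        if pattern.search(prompt):
            return False, reason
    return True, "OK: nothing in this prompt violates company policy."

ok, why = check_prompt("Write an employee review for Tina.")
# ok is False; why explains the employee-review policy.
```

The key design choice is returning a reason alongside the verdict: the employee learns why a prompt was blocked instead of just being stopped, which is what keeps the "safe space" promise credible.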
What we’ve tried to build is this almost harness to say, you can be more productive, you’re not going to look dumb, you’re going to be more productive, and you’re not going to make any mistakes that get you fired. What we found is that actually drives more AI usage, surprising literally no one.
From a company standpoint, what do you want? A, I want the usage, and B, I want to build up that IP of what really works for my company. In a few weeks, we'll launch the same thing around agents. It's fun to talk about MCP, but we all know that for the small percentage of people who use AI well, it's a total unlock. Same thing on the coding side: Cursor has taken mediocre engineers and made them good, but it's taken amazing engineers and made them gods. Our goal should be, how do we help people get much more productive with all of this? How do we help them use Cursor more effectively, Harvey more effectively? We started with the LLMs themselves.
The Future of Work and AI Job Displacement
Alex: Maybe we could talk, this is a little bit philosophical, about the future of work. To a certain extent, if you’re the measurement, the measurement inevitably will become a little bit more of a target. I always like to remind people that I think it was ninety-seven or ninety-eight percent of Americans when the Constitution was ratified were farmers, and they all lost their jobs due to these pesky things like the tractor and fertilizer.
Russ: And it all turned out okay.
Alex: I think the average life expectancy was thirty-five, and most children died in childbirth or shortly thereafter. Things have changed, this is what technology brings you.
Nobody knows the answer to this, but given that you're in charge of a company that's measuring AI productivity and human productivity and AI and humans working together, what's your timetable for how fast things change? Are we going to see net new jobs created? Behind every one of these shifts, all sorts of jobs appear that didn't exist before. So maybe that's part two of the question, because the job that we have right now, filming a podcast, that wasn't a job. There are so many jobs that nobody could even have thought of. So where do you think things are going, and what types of future jobs do you see in and around this new stuff?
Russ: I don't buy for a second that there's going to be large-scale job loss because of AI, frankly because of what we've seen through all of history, which is just flat-out capitalism. If my two choices are that I can maintain my base level of productivity but fire a bunch of my employees and be more profitable, that is a fine idea in the short term. It's probably a good idea for a PE firm to go around and buy a bunch of marginally profitable companies, fire half their employees, and make them more profitable, but that's what PE firms have done for a long time with non-competitive companies anyway. Yet employment has still increased.
You can argue we've had a function in the economy for the last forty or fifty years whose goal is to take underperforming companies and fire a bunch of employees. Let's say that's what PE firms have done, and that's what AI could theoretically do. Yet employment has increased.
Look, it's philosophical, and I don't have any special expertise because I am building a measurement company. But I don't buy it, because your competitor across the street is not going to fire all those employees. He's just going to do more with those employees, and he's going to kill your business. This is the Jeff Bezos "your margin is my opportunity" line. To the extent that AI is going to drive up your margin, that will be all of your competitors' opportunity to be less profitable and compete with you. So, setting aside some very niche monopolistic "I can fire everybody" case: will we have one-man or one-woman firms that do a billion in revenue? Probably. But today we already have very profitable one-man, one-woman operations. Not many people work at the Joe Rogan Podcast. I don't think that many people work for Ben Thompson Incorporated, and yet I imagine those are quite profitable businesses, best I can tell.
That’s amazing, and there’ll be a ton of opportunity to be a more successful solo entrepreneur. I absolutely believe there’ll be even more entrepreneurs. But at a very high level, I just don’t believe the Fortune 500 will employ fewer people in thirty years than they do today, because the ones that try and cut all the people will no longer be in the Fortune 500. Just flatly, because we live in a competitive world, we haven’t seen any proof yet that the economy is zero-sum. You can argue that, but we haven’t seen any proof yet. GDP keeps increasing. It increases slower in some places and faster in other places, but it’s generally grown. Employment has generally grown. I just don’t know why you’d believe that this time is different because of the competitive point of view.
The tech is different. The tech is amazing. But there is an interesting theoretical question, more of an Ivy League grad school discussion: wouldn't it be more fun as a society, wouldn't we all be happier, if everyone agreed we'd work half as much and be just as productive as we are today? I don't know. Maybe. But that's not human nature, and I'm not even sure it's true. I tend to believe the Tyler Cowen point that all that really matters is growth.
My general perspective is, you as a VC would just never get excited if one of your companies came in here and said, “Hey, we got to 100 million in revenue, and you know what? Because AI tools are so good, we’re going to fire ninety percent of our employees and we’re going to make 90 million in profit.” You would not be excited with that entrepreneur because you know that Sequoia is going to fund a direct competitor to that company who’s going to keep hiring, who’s going to be happy with ten percent margins, and who is going to destroy your company. We all know this.
It’s one of these things. There are a lot of fun headlines about AI, and then you have to have the counterpoint of “oh, it’s going to take the jobs” and “oh, kids these days.” We all know this. We’ve all seen this. You can find articles about when the TV came out, it was the end of reading. When newspapers came out, it was the end of conversation.
I do think it is scary that new tools are coming out and impacting knowledge workers across the entire globe. So what? I think there'll be opportunities to be podcasters. There probably will be more plumbers. There will be a lot more employment around building data centers. There's going to be a whole set of engineers. Maybe we will need a lot more astronauts. Elon says we're going to Mars. Someone is going to have to scrub the toilets in the space station, and someone is going to have to pilot the ship to the space station.
Alex: Well that’s self-driving. Self-driving spaceships.
Russ: By the way, there is a chance I will turn out to be wrong, and in that case, I don’t know. Maybe I’ll spend more time on vacation.
Alex: It’s interesting. I talked to this economist, Ed Glaeser, I think he’s at Harvard, and he was saying—I asked him this question about what’s going to happen with jobs and how do you compare this to everything else. He said what’s really interesting is that this is arguably the first time that the job losses might be borne by white-collar, super-educated people.
Russ: That’s why everybody gets scared.
Alex: He actually had a different framing on it. So, yes, agreed. But almost tautologically, hyper-educated people are hyper-educated, so they should be able to rejigger themselves and do something else. Versus in all of these previous revolutions, where you had somebody who really had no skills, just showed up at work, and got paid. There are a lot of jobs that look like that when there are tremendous labor shortages.
So if you were doing anything in 1849 in California, boom. It’s just, “Oh, you’re a human? You have a pulse? We need someone with a pick. Go do this. Or, you see that line over there? Yeah. Straighten it out.”
But what's different is that, yes, it's scary for some people, and maybe robots will work better in the future, but everything right now is bit manipulation going after or augmenting white-collar, hyper-educated people, by virtue of the fact that they're hyper-educated. This is not what happened to Detroit. That actually wasn't about automation; that was about the Japanese building better cars, among a lot of other reasons. But what do you do with somebody who had a very high-paying job but actually didn't have that many skills? Now they've lost that job, and because they don't have any skills, they can't find another job. Whereas if you are highly skilled, you will find something else to do.
There is an element of, look, there are certainly a set of people who were pretty highly educated. They were in good classes, they got into a good school, whatever that meant. They got a good job. They worked pretty hard in their twenties, a little less hard in their thirties, and a little less hard in their forties. But they're paid pretty well, and those people probably are a little uncomfortable today. There are some professions that just require continuing education. If you're an electrician or a plumber, or a doctor or a lawyer, some of these professions require constant upkeep and constant education. That's not true in a lot of professions. There are a lot of jobs where you get to forty or fifty and you can keep doing a good job without really having to learn much new. That's probably quite uncomfortable.
I acknowledge it is quite uncomfortable for those people. But to your point, they’re educated, they have skills. We have much more of a knowledge economy. So I don’t necessarily have the issue of, “I literally have this house in Detroit. The jobs are now in Knoxville. Forget Japan. The jobs are now in Knoxville. I don’t want to move to Knoxville.” We all know the data on mobility and housing costs and all that. But at the end of the day, to your point, yes, there’s a set of people who’ve probably been slowly working less and pushing themselves less, and now they have to push themselves more, and that just is.
My joke all the time, I won’t say the company I use, but part of my sales pitch for Larridin when I’m talking about this, I say, look, when we talk to employees, your average employee is a forty-two-year-old associate brand manager. If you ask them what they want out of AI, they do want it to go away. Their number one wish would be that it would just go away. “I’d like that, because I liked yesterday.” But we’d make a lot of money if we had the power to make AI go away, super big blackmail business. But we don’t have that power. So all we can do is give you the tools to use these things better, help you be more productive, and help you as a manager understand if your team is using these tools better.
My macro point is that I just don't really believe there will be wide-scale mass unemployment. Might an individual have to push themselves more? Yeah, for sure. Some of them will be sad about that. The same thing is true in the entertainment industry. Jobs have moved and jobs have shifted. People don't watch movies the way they used to. TV seasons used to be twenty-two episodes, and now they're eight, because consumer preferences have changed. That is probably lousy if you were a guy who played a part on Law and Order. That probably is uncomfortable. I'm not being callous; there are many ways it will impact my life negatively too. But I just don't buy that it adds up to mass unemployment.
The Product Marketing Problem
Alex: A lot of this actually predates AI. There's this great article or interview with the CEO of Waste Management, before ChatGPT came out, where he was saying, "I get resumes every day from somebody who has an MBA and wants to work in our office." And they're negotiating against themselves; the price keeps going down. A hundred applications for every opening. Meanwhile, "I need to hire truck drivers, and not self-driving, somebody who actually is collecting the trash, that's what Waste Management does, for $150,000 a year, and I can't find them." It's interesting how things have flipped.
But I guess the other thing I would say is, I would almost argue that a lot of AI's problem right now, in terms of diffusing into the workplace, is a product marketing problem. It's, "OK, AI can do anything." But I'm not looking for anything. If you say, "Hey, I can do anything," I'm like, "Oh, I don't need you." "No, I can do this one thing very..." "Oh, you do that?" I think once you have more of these articulations of what can be done, that's what diffuses. The things that have really gone hyper-growth aren't "I have AI, it does everything." They're "I will help you code better." "Oh, I want that."
Russ: "I will help you code better. I have this chatbot for..." Yeah. It's funny when you get old. A long, long time ago, I was the first guy at comScore, and comScore's sales pitch in the early days—comScore, for those that don't know, basically had all the data for everything that was happening on the internet. The founders were true geniuses, and they basically knew everything that was happening everywhere on the internet. Our sales pitch in the early days would be, "We know everything." I mean, obviously not, but it would basically be, "We know everything. What would you like to know?" And it turned out that wasn't a really good sales pitch. You would sometimes accidentally run into someone who would go, "Oh my God, I need to know this. Could you do this?" And we'd say, "Yes, we could." And there you go. But it turned out we only had a couple of sellers that could figure that out in real time.
Then it turned out, if we said, "We can tell you the market share for Visa versus MasterCard versus others in Japan," it turns out Visa really wants to know that. But I can also tell you the share for your pharmaceutical drug versus others in research online. Turns out they also want to know that. I'm going to get it wrong, but Ford had some giant issue with Firestone tires failing. It turns out they really do want to know, did search for Ford get worse because of Firestone?
So people do want to know these specific things. As I said, I think that’s exactly the right way to think about it, it’s a product marketing problem. It’s why we’re so focused on “what’s happening?”, “are they more productive?”, and “how do you get them to use it more?” We can actually do a lot of things, but you can’t sell things that way. That’s more general entrepreneurial advice. It turns out building something amazing that people don’t know how to use mostly doesn’t work, unless it does, which is ChatGPT. So one in a million times, it does work out. Facebook, ChatGPT on the consumer side.
Alex: There, it’s just magic.
Russ: Yeah, it’s amazing.
Alex: If you show somebody a magic trick, or you get somebody addicted, you can guess which one I’m referring to for which. If you ever watch Seinfeld, there’s this great episode where Jerry buys his father a Wizard, which was an early Palm Pilot kind of thing, an early smart computer in the 1990s. Never went on to great things, but it did everything. Jerry’s trying to explain it to his dad, and his dad’s like, “Well, I don’t get it. What does it do?” And Jerry says, “Well, look here. It has a tip calculator.” His dad goes, “Oh my God. A tip calculator.” And then he explains it to all of his friends, “Look at this. My son, he’s a comedian. He’s doing great. He got me a tip calculator.” And Jerry’s like, “No. It does other things.” It often ends up being frustrating for the company that does the other things because they aspire to have this more broad, horizontal platform, but what we need is more of these tip calculator things.
Is there anything that we haven’t talked about that you want to get in, a little soliloquy?
Closing Thoughts
Russ: No. I'll leave you with two thoughts: one related to Larridin, and one related to the professor you brought up earlier in the conversation. My general perspective is that anytime you see some giant shift in budget, you're going to need a set of very important but very boring tools: what's actually happening, are people more productive, how do I get them to use it more. And there's a ton of business there.
Then I'll leave you with the unrelated thought, about your Harvard professor. As you said, our kids go to the same school. My oldest kid is in twelfth grade and just got into college. He got into his top choice. It's a very highly rated school, and I'm very happy for him. He came home and said, "Dad, look at this ranking." He showed me the new US News & World Report rankings. His name's Henry. I said, "Henry, I'm very proud of you. There are going to be a lot of different rankings over a lot of different years, and there's only one thing you have to know for sure. Whatever the ranking says, wherever it puts your school, everyone always knows number one is Harvard. It doesn't matter. Don't get excited." I did not go to Harvard.
This newsletter is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. Furthermore, this content is not investment advice, nor is it intended for use by any investors or prospective investors in any a16z funds. This newsletter may link to other websites or contain other information obtained from third-party sources; a16z has not independently verified nor makes any representations about the current or enduring accuracy of such information. If this content includes third-party advertisements, a16z has not reviewed such advertisements and does not endorse any advertising content or related companies contained therein. Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z; visit https://a16z.com/investment-list/ for a full list of investments. Other important information can be found at a16z.com/disclosures.