Institutional AI vs Individual AI
Where did the productivity go?
AI just made every individual 10x more productive.
No company became 10x more valuable as a result.
Where did the productivity go?
This isn’t the first time this has happened.
In the 1890s, electricity promised enormous productivity gains.
Textile mills in New England, built to harness the rotational power of steam engines, quickly installed faster electric motors in their place.
But for thirty years, electrified mills saw almost no increase in output. The technology was far superior. But the organization was not.
It wasn’t until the 1920s, when factories completely redesigned the mills once again, with assembly lines, individual motors within every piece of equipment, and workers and machines executing drastically different jobs, that electrification produced meaningful returns.

These returns came not from the technology itself, and not from making individual workers or machines faster at spinning thread. It was when we finally redesigned the institution and the technology together that the upside materialized.
This is the most expensive lesson in the history of technology, and we’re learning it again, right now.
In 2026, AI is driving a 10x increase in the productivity of the individuals who know how to leverage it. But that’s not enough. We’ve swapped the motor; we have not yet redesigned the factory.
The reason is a simple fact: productive individuals do not make productive firms.
The vast majority of AI products evoke the feeling of productivity, but they haven’t moved the needle on value. Most publicized AI use is individuals self-indulgently “productivity-maxxing” on Twitter or in company Slack channels, with zero real impact.
The “services as software” motif that’s been repeated for a year now points in the right direction, but offers no blueprint. And it misses the bigger picture. The real shift isn’t from tools to services, it’s building the technology and the institution together (whether legacy or new). A truly productive future requires an entirely new class of product. The assembly line of tomorrow.
Productive organizations require “Institutional Intelligence.”
This essay will dive into the seven big factors that differentiate “Institutional AI” from “Individual AI.” The entire field of B2B AI companies for the next ten years will be built upon these differences:
The Seven Pillars of Institutional Intelligence
1. Coordination
Individual AI creates chaos.
Institutional AI creates coordination.
Let’s begin with a thought experiment. Imagine you doubled your organization’s headcount tomorrow with clones of only your best employees.
Each of these employees has minor differences, predilections, quirks, and perspectives (especially true if they’re your best employees). If they’re not sufficiently managed, if they’re not sufficiently communicating, if their swim lanes, OKRs, roles, and responsibilities are not well defined ... you’ve created chaos.
Measured individually, the organization may be more productive, but thousands of agents (or humans) rowing in opposing directions creates a standstill at best and destroys organizational harmony at worst.
This isn’t hypothetical. It’s happening right now in every organization that’s adopted AI without a coordination layer. Every employee has their own ChatGPT habits, their own prompting styles, their own outputs that don’t talk to anyone else’s outputs. An org chart might exist, but the actual flow of AI-generated work says something else entirely.

Coordination is an absolute imperative, for humans and agents alike.
Institutional intelligence will evolve into an entire “Agentic Management” industry focusing on agent roles and responsibilities, agent-to-agent and agent-to-human communication, and measuring agentic value (consumption based pricing alone doesn’t cut it).
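What a coordination layer might look like in practice can be sketched in miniature. The sketch below is purely illustrative (the class and agent names are hypothetical): it registers agents with defined roles and explicit communication paths, and keeps an auditable log of every hand-off, so "rowing in opposing directions" becomes a detectable error rather than silent chaos.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    role: str                                       # defined swim lane, e.g. "research"
    can_message: set = field(default_factory=set)   # explicit communication paths

class CoordinationLayer:
    """Toy registry enforcing roles and communication paths between agents."""
    def __init__(self):
        self.agents = {}
        self.log = []        # auditable record of every hand-off

    def register(self, agent):
        self.agents[agent.name] = agent

    def send(self, sender, receiver, payload):
        # Undefined paths are rejected instead of silently creating crosstalk
        if receiver not in self.agents[sender].can_message:
            raise PermissionError(f"{sender} has no defined path to {receiver}")
        self.log.append((sender, receiver, payload))
        return payload

layer = CoordinationLayer()
layer.register(Agent("analyst", "research", can_message={"reviewer"}))
layer.register(Agent("reviewer", "compliance"))
layer.send("analyst", "reviewer", "draft memo v1")   # allowed, and logged
```

The point is not the code but the constraint: agent-to-agent communication is declared up front and every exchange is recorded, the agentic analogue of an org chart plus a paper trail.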
2. Signal
Individual AI creates noise.
Institutional AI finds signal.
Humans today are able to create, or rather generate, anything they can imagine: AI-essays, presentations, spreadsheets, photos, videos, songs, websites, and software. What a gift.
The issue is that almost everything generated by AI is complete slop. The proliferation of this AI slop has become so bad that some organizations are over-rotating and banning AI outputs altogether. This resonates personally... I run an AI company but ask our executive team not to use AI for any final written product. I can’t stand the slop.
Imagine what the world of PE is quickly becoming. Last year, 10 deals may have come across your desk in a quarter. Next quarter, you’ll receive 50, each one AI-polished to perfection, and you’ll have the same number of hours to find the one real deal.
Generating anything is no longer the problem. The problem, for any serious organization today, is generating and selecting the right thing. Finding the one good artifact, the one good deal, the signal in the noise, matters more and more in an AI-driven world. The key economic driver for the next decade will be uncovering the signal in the mountain of exponentially increasing slop.

Institutional-grade intelligence must find the signal, it must structure the noise to cut through slop, and it must be defined, deterministic, and auditable in the work it does.
Whereas individual AI might emphasize the “always on” productivity of a Clawdbot exploring unpredictable ways to tend to one’s 24/7 needs (i.e., a nondeterministic agent), institutional AI will rely on the load-bearing predictability of deterministic agents. Agents that run predictable checkpoints, steps, and processes will scale, will uncover signal, and through that signal will drive revenue for an organization.
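A minimal sketch of what "deterministic with checkpoints" could mean, with hypothetical step names: the agent runs a fixed, ordered list of steps and records the intermediate state after each one, so every run is repeatable and auditable end to end.

```python
# Illustrative only: step functions and pipeline are hypothetical stand-ins
# for whatever a real institutional workflow would encode.

def extract(doc):     return doc.strip().lower()
def classify(text):   return "deal" if "acquisition" in text else "other"
def summarize(label): return f"routed as: {label}"

PIPELINE = [("extract", extract), ("classify", classify), ("summarize", summarize)]

def run_agent(doc):
    state, checkpoints = doc, []
    for name, step in PIPELINE:            # same steps, same order, every run
        state = step(state)
        checkpoints.append((name, state))  # auditable trail of each checkpoint
    return state, checkpoints

result, trail = run_agent("Acquisition memo: Project Falcon")
# result is "routed as: deal"; trail records every intermediate state
```

Contrast this with an open-ended agent loop: here the process itself is the contract, and the checkpoint trail is what makes the work defined, deterministic, and auditable.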

3. Bias
Individual AI feeds bias.
Institutional AI creates objectivity.
Concern around sociopolitical bias dominated AI discourse for years. The foundation model labs eventually sidestepped the issue with enough RLHF to effectively turn all models into sycophants. Today, ChatGPT, Claude, etc. are so (overly) aligned that they’ll agree with you on any topic within the Overton window (and sometimes slightly beyond, looking at you @Grok). The discourse on sociopolitical bias has died down. A new problem has taken its place.
But this level of agreement—of over-alignment—on everything has become comically bad. It’s become a meme in its own right ... Claude’s reflexive “you’re absolutely right!” regardless of whether you are, in fact, absolutely right.
This sounds harmless. It is not.
The loudest AI advocates inside many organizations may soon be the historically worst-performing employees. Think about why.
Organizations’ worst employees, who receive little to no positive reinforcement every day, will soon have ASI agreeing with them. They will whisper to themselves, “the smartest intelligence that has ever existed agrees with me. My manager is wrong.”
This is intoxicating. It’s also organizationally toxic.

This highlights something important: these individual productivity tools reinforce the user, when the most important thing to reinforce is the truth.
Organizations have evolved over thousands of years to build systems that counteract exactly this problem:
Investment committee meetings
Third-party diligence
Boards of Directors
The executive, legislative, and judicial branches of the US government
Representative democracy, and democracy as a whole

Organizations rarely fail because people lack confidence. They fail because no one is willing, or able, to say no.
Institutional AI must play that role. It will not be RLHF’ed into flattering users or echoing their beliefs; it will be trained to challenge their biases. It will reinforce behavior when productive, and draw a hard line in realigning non-productive tendencies.
Thus, the most important agents inside organizations will not be “yes-men” but disciplined “no-men” that interrogate reasoning, surface risks, and enforce standards. Some of the most consequential future applications of AI will be built around institutional constraints: AI board members, AI auditors, AI third-party testing, AI compliance, and many more…
4. Edge
Individual AI optimizes for usage.
Institutional AI optimizes for edge.
The goalposts in AI evolve on a weekly and sometimes daily cadence. Foundation model companies, competing for every person and every organization, are rapidly iterating on capabilities.
But in the classic innovator’s dilemma, depth beats breadth for specific applications every time:
It’s @Midjourney’s job to be slightly ahead on designed imagery.
It’s @Elevenlabsio’s job to be slightly ahead on voice models.
And it’s @DecagonAI’s job to be always ahead on full-stack customer service experience...
And while the foundation models will get close, the true edge matters to experts in their field. Many of the best designers use @Midjourney, many of the best voice AI companies will use @Elevenlabsio, and so on ... because even as the foundation models improve, the unyielding focus of purpose-built applications on their specific edge is what defines the edge itself.
As long as purpose-built solutions evolve too, the capabilities that matter for economic outcomes, for businesses, will always be with purpose-built products.
This plays out to a tee in finance – the hottest area for LLM development right now. As soon as a capability is widespread, it definitionally isn’t going to help you beat the market. But if frontier technology can yield an ephemeral 1 percent niche advantage? That 1 percent can be levered into billion-dollar outcomes.

Our users have always exceeded the frontier. Context windows in LLMs have grown from 4K to 1M tokens in four years. Some of our users process 30B tokens in a single job. We have line of sight to 100B-token jobs this year. Every time foundation model capabilities improve, we’ve already pushed further.

Usage for broad populations is important and worthwhile as a goal in itself, especially in onboarding employees to AI. But the future will not be people using ChatGPT/Claude or a domain-specific solution. It will be ChatGPT/Claude and a domain-specific solution.
Institutional intelligence must leverage domain-specific, perhaps even task specific, agents.
We ask ourselves a question that sounds absurd but isn’t:
“What are the agents an AGI would choose to use as a shortcut? Even superintelligence would want purpose-built tools for specific domains.”
The goalposts will always change in AI, and the organizations that leverage the true edge of capability are the organizations that will win. Everyone else is paying for a very expensive commodity.
5. Outcomes
Individual AI saves time.
Institutional AI scales revenue.
@MaVolpi once told me something that reframed how I think about selling AI to the enterprise: “If you ask any CEO whether their first priority is cutting costs or scaling revenue, almost all would say revenue.”
Yet almost every AI product on the market today delivers cost-cutting, promising to save time, do more with less, or replace headcount.
Institutional AI must deliver upside. And upside is a lot harder to commoditize than saved time.
Take the example of agentic software development. Coding IDEs are some of the best individual AI productivity tools ever built, and they’re already facing massive headwinds from Claude Code, another individual AI tool. Cognition is playing an entirely different game. Their most steadily growing business builds tech to sell transformations, not tools. I’d bet on that lasting power.
Pure software “is rapidly becoming uninvestable.” Pure services don’t scale. The solution layer, marrying technology to outcomes, is where lasting value accumulates.
Or take M&A. Individual AI helps an analyst build a model faster. Institutional AI identifies the one counterparty worth pursuing out of a hundred, and expands that universe to a thousand. One saves time; the other generates revenue.

Moving “upstream” is the natural gravity of the market right now. Foundation models are moving to the app layer. App-layer companies are moving to the solution layer.
Institutional intelligence is the solution layer. And the solution layer, where the outcomes live, will accumulate lasting value and capture the biggest upside.
6. Enablement
Individual AI gives you a tool.
Institutional AI shows you how to use it.
Humans, for all our ingenuity, are reluctant to change.
Believe it or not, there are still successful businesses in NYC that don’t accept credit cards. They’re losing money, they know they’re losing money, and they’re still unflinching in that inertia. Similarly, for the indefinite future, employees somewhere, in some organizations, will refuse to use AI.
Making the transition from a human-only organization to an AI-first hybrid organization is going to be the lasting and defining challenge of the next decade. And in many cases, the most senior, and most important, levels of the organization will be the slowest to adopt.

There is a reason that Palantir is the only “software” company still trading at extraordinary multiples amidst a trillion-dollar selloff in technology stocks over the last two months. Palantir is one of the first true “process engineering” companies. Whether you call it “process engineering” or “writing Claude skills files,” institutional AI of the future will have an entire industry devoted to encoding firm processes in agents and actualizing the change management required to put them in action.

I’d wager that process engineering will become arguably the most important “technology” of the near term.
And in process engineering, business and industry expertise, not software expertise, matters most. Domain-specific solutions demand expertise from the professionals doing the forward-deployed engineering, the deployment, and the change management.
A top-3 bulge bracket bank that chose Hebbia for wall-to-wall deployment put it best: they were turned off from working with a big model lab when they “had to explain what a CIM was to the team.” Claude or GPT surely knew the domain, but the lab’s team architecting the rollout did not...
That made all the difference.
7. Unprompted
Individual AI responds to human prompts.
Institutional AI acts unprompted.
There’s much chatter about agent-to-agent communications, and whether the businesses, software products, and institutions of the future even need humans at all.
The better question, however, is whether AI agents of the future will need prompting at all.
Prompting an AGI is like hooking an electric motor into a power loom. It’s fundamentally, irrevocably constrained by the weakest link in the organizational supply chain—us. Humans hardly know the right questions to ask, let alone when to ask them.
The most valuable work AI can do is the work nobody thinks to ask for. AI should find the risk that nobody flagged, the counterparty nobody thought of, and the sales pipeline that nobody knew was there.
This will blow open the manifold of AI use cases.
An unprompted system continuously watches incoming data across the entire portfolio. It detects that one company’s working capital cycle has quietly deteriorated for three consecutive months, cross-references that against covenant thresholds in the credit agreement, and flags the operating partner before anyone at the fund has opened the PDF.
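That detection logic is simple enough to sketch. The function below is a hedged illustration, not a real monitoring system: the metric, the lookback window, and the 10% proximity margin are all assumptions chosen for the example. It flags a company whose working-capital ratio has declined for three consecutive months and now sits near its covenant floor.

```python
def should_flag(monthly_values, covenant_floor, margin=0.10):
    """Flag on three straight month-over-month declines (4 data points)
    that leave the latest value within `margin` of the covenant floor.
    monthly_values: metric history, most recent month last."""
    recent = monthly_values[-4:]       # 4 points = 3 month-over-month moves
    declining = len(recent) == 4 and all(b < a for a, b in zip(recent, recent[1:]))
    near_floor = monthly_values[-1] <= covenant_floor * (1 + margin)
    return declining and near_floor

# Hypothetical working-capital ratio by month; covenant requires >= 1.2
history = [1.45, 1.41, 1.38, 1.31, 1.27]
assert should_flag(history, covenant_floor=1.2)   # alert the operating partner
```

The interesting part is what’s absent: no prompt. The check runs on every data refresh, and a human only hears about it when the condition fires.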
When you remove the need for humans to prompt AI, new interfaces and new ways of working emerge. We @Hebbia have some strong opinions here. To be continued.
Conclusion
None of this negates the need for chatbots, agents, and individual AI as a whole.
Individual AI will be the vector by which the majority of the world’s businesses first experience the transformative magic of AI. Driving usage and generalizable ease of use is the key first step in the change management required to build an AI-first economy.
But there is an obvious, urgent, and gaping need for institutional intelligence at the same time.
Every organization in the future will have a chatbot from a big lab. And every organization will have institutional AI purpose-built for domain-specific problems—institutional AI that individual AI will leverage as the key tool in its own tool belt.
The “better together” story for institutional AI and individual AI is inevitable.
But remember the lesson of the 1890s textile mills. The factories that electrified first lost to those who redesigned the floor.
We have our electricity. It’s time to redesign our factories.
Thanks to @aleximm and @WillManidis for proofreading, and to Will for his “Tool Shaped Objects” essay which helped inspire this piece.
This newsletter is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. Furthermore, this content is not investment advice, nor is it intended for use by any investors or prospective investors in any a16z funds. This newsletter may link to other websites or contain other information obtained from third-party sources - a16z has not independently verified nor makes any representations about the current or enduring accuracy of such information. If this content includes third-party advertisements, a16z has not reviewed such advertisements and does not endorse any advertising content or related companies contained therein. Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z; visit https://a16z.com/investment-list/ for a full list of investments. Other important information can be found at a16z.com/disclosures. You’re receiving this newsletter since you opted in earlier; if you would like to opt out of future newsletters you may unsubscribe immediately.