Going against you is a recent survey of 5,000 knowledge workers showing that AI adoption is pathetically low outside of its use as a fancy Google substitute. My wife works for a large asset manager. I asked her, “What tasks are you doing with AI?” Her answer was “None, aside from taking notes in meetings.”

Lawyers can never use AI because you lose attorney-client privilege the minute you feed a document into any public AI system. Microsoft Word has been correcting spelling and grammar for more than a decade. Ask Ken Griffin how many employees he has been able to shed because of AI; the answer is likely close to zero. People who attempt to vibe code hit snags all the time, and they often leave huge security vulnerabilities. We were not coding line by line to begin with; we had templates and libraries of code. The people building LLMs will never be able to shed the thousands of off-the-books consultants who teach these models legal and scientific concepts, because those fields evolve over time.

If AI adoption were huge, token prices would not be so low and OpenAI would not be losing unspeakable sums. I use AI almost every day of the week, and the cost per year is less than what I spend on Uber Eats in five days. If adoption were huge, I would be forced to pay $20,000 a year, not $120.
You can easily build financial models using AI, but how accurate are the numbers? Where did they come from? Were they adjusted for one-time charges? Currently you can pull data from a Bloomberg terminal into Excel, which repeats most of the same party trick of building a model. But you can’t shed that expensive Bloomberg terminal, or you would be flying blind in markets. AI models have a limited number of news sources because news is protected by copyright. Feed the WSJ into your LLM and Dow Jones will sue you.
These services often can’t sell into city or state governments because they are not bonded and don’t carry the right liability insurance.
Really strong piece, Kimberly. I think the hidden second variable here is organizational absorptive capacity.
The winners you highlight (coding, support, search) are not just places where the models are strong. They’re also places where the work is bounded, outputs are verifiable, ROI is visible, and humans can stay in the loop without killing the economics.
That’s why technically promising use cases can still stall inside large enterprises. The task may fit AI, but the organization may not yet be able to absorb it into workflow, trust, measurement, and decision rights.
So the next question may be less “Where can AI work?” and more “Which organizations are actually built to absorb where it works?”
There’s always a phase where a new technology feels overhyped because people are looking for sweeping change while the real shift is happening in narrow corridors. The data here reads less like a revolution and more like a quiet accumulation of edge. Coding, support, search. Highly structured environments where feedback is tight and value can be measured quickly.
That’s how every durable trend starts. Not with mass adoption, but with pockets of undeniable efficiency that compound faster than expected. The mistake is expecting AI to replace entire roles overnight. What it’s actually doing is compressing time inside specific workflows. One engineer doing the work of three. One support system handling volume that used to require a floor of people.
Markets tend to price these shifts before they become obvious in headcount. By the time companies start reporting layoffs tied directly to AI, the real opportunity will have already passed. The signal is here in the margins, where output quietly expands while the surface still looks unchanged.
The sector breakdown is telling. Tech, legal, and healthcare leading makes sense since those workflows are more codified. The untold story is consumer and luxury. Adoption is slower, but when AI moves a merchandising or pricing decision that affects margin at scale, the ROI conversation changes fast. Those use cases are just starting to get real internal buy-in.
The pattern I keep seeing: enterprise AI actually works where the workflow is narrow, the data is clean, and success criteria are measurable. The failures cluster around broad “transformation” mandates with no clear benchmark. This data from a16z maps exactly to that.
Until the issue of enterprise context is addressed, many deployments will remain in the pilot stage. Whether you are building a solution as SaaS or developing it internally, it's essential to have an "enterprise context brain" that connects enterprise AI workflows to outcomes that build trust with customers. I wrote an article about this topic: https://rmeerasahib.substack.com/p/agents-are-not-the-bottleneck-enterprise
This is phenomenal research. It was surprising that financial analysis adoption is so low. In our experience, nothing quite beats GPT Pro at pulling together outrageously complex Excel sheets.