Great research - thanks for sharing. This one is certainly circulating on my VC dark social channels so it's definitely getting picked up by my network. 🤓👏
The ROI gap you've surfaced is the most important finding in this report and it points to a layer you're not measuring.
"Enterprises are still learning how to deploy AI effectively" and "don't know what 'good' looks like until they try it."
That's not a vendor selection problem. That's a human readiness problem.
You're tracking spend ($4.5M → $7M → $11.6M), model adoption (78% OpenAI, 44% Anthropic), and use case distribution. All valuable.
But none of that explains why ROI "doesn't match X discourse."
What's missing:
→ Do people know what's permitted? (Leadership clarity)
→ Is it safe to experiment and fail? (Psychological safety)
→ Does AI fit into how work actually gets done? (Workflow integration)
→ Are we measuring usage or impact? (Measurement design)
The 65% who "prefer incumbent solutions" aren't choosing Microsoft for capability. They're choosing it for trust, integration, and procurement simplicity. Human factors, not technical ones.
The enterprises capturing real ROI aren't the ones with the best models. They're the ones who diagnosed the human constraint first.
That's the layer this survey should add next year.
Can you please clarify what "using in production" means? Is this using the APIs from these vendors inside production applications? And do we have any idea what types of applications these are?
Sharing the name of the third-party independent expert network vendor that conducted the survey would help with credibility.
Really useful breakdown of enterprise adoption dynamics.
It would also be interesting to see how AI infrastructure maturity and measurable business impact are evolving alongside model share.
Adoption is clearly accelerating. Seems the next phase may be about operating model redesign and sustained value capture.
True. But Anthropic’s trust advantage works differently.
They win hearts and minds across workloads not by optimizing each use case, but by being the intentional product choice when decisions matter.
Sometimes the meta-game beats the micro-optimizations.
Very insightful, thank you!