20 Comments
Razan Khatib

I think this framing makes a lot of sense for existing customers of systems of record.

If a company already runs on Salesforce, SAP, Workday, etc., then the data, permissions, workflow history, compliance trails, and context inside those systems create a real moat.

But I wonder if that moat is more defensive than offensive.

For a company starting fresh in the agentic era, the question may not be “which system of record should we adopt?” but “what context, permissions, and execution layer do our agents need to operate safely?”

So these systems may remain hard to leave, but become harder to choose as the default starting point.

Gagik Yeghiazarian

Software is not losing its head. It is collapsing under the weight of its own complexity.

AI did not create this problem. It exposed it.

The future is probably not systems that endlessly generate more opaque code faster and faster, hoping verification somehow catches up.

The future is systems that are structurally understandable from the start: auditable, deterministic, evolvable.

Still learning.

But knowing.

Dale Thomas

Enterprise software is not losing its head. It is finally growing hands. The ERPs and CRMs underpin complex B2B processes, handling billions in annual transactions. They are not going anywhere. Sunk cost is real, integration debt is real, and ripping them out is a non-starter for any operator with a P&L.

The real question is why platforms like Salesforce Commerce Cloud, the Demandware lineage Salesforce paid $2.8B for in 2016, never reached complex B2B the way they reached retail. The answer is intelligence. Materials, specs, regulatory data, lot pricing, and rep workflows are too messy for a generic storefront; human intelligence was required. The conversion engine that worked for shoes and skincare cannot reason about a resin grade or a tolerance spec.

That is exactly what we've solved at Plastics.com. An operating system for plastics sales built on ontologies, workflows, and actions tuned for complex B2B. We sit on top of the systems of record (CRM/ERP) that already work, and we own the conversion event that the storefronts could never reach. Same shape as Demandware, pointed at a category that has waited two decades for its turn.

AltG: The Intelligence Buyout

Sharp framing. The factors you list — proprietary data, owning the action layer, real-world execution, multi-party network effects, technically underserved verticals — describe operating businesses already listed at 10–15x earnings as accurately as they describe the next generation of AI-native SaaS.

A diagnostics chain owns the action layer. An insurance distribution network sits in the multi-party flow. A specialty industrial services firm runs the undocumented SOPs agents need to operate inside.

The interesting question isn't whether new agentic SaaS will be defensible. It's whether the application layer needs to be built at all — when so much of what you describe is already listed, profitable, and trading at fixed-asset multiples.

Peter Tippett

Our thesis from two years ago summed this up, but for small business: AI agents will need to actually see the whole business, not just one silo, so they deliver answers in the right context. Otherwise they will lie with confidence.

That is also why we are focused on small business. They have more freedom to change, and AI lets them leapfrog ahead even though many of their systems are so old they can't be removed. Our approach: they keep those systems, but we add a whole layer of value with smart APIs, pulling data into a time-series knowledge graph that operates at agent speed and with consistency.

This gives the agents deep context to work with, and they build UI as needed, i.e., on-demand, disposable UI.

Later we will replace the old systems, but just as a slow takeover, not a painful rip-it-out model.

Robert Koller, CAIA

@Gagik Yeghiazarian named the property. @Razan Khatib named the question for agentic-native enterprises. What they are both circling is a deterministic substrate compiled from the source documents a business already runs on: contracts, policies, regulations, procedures. The substrate sits below the SoR @Seema Amble describes and answers Razan's question directly: not which SoR, but which compiled rule layer the agents call before they act.

Already running in regulated capital markets: €600M+ processed, 99.98% accuracy across 100,068 validations, zero compliance violations. Rules compiled from source, callable by any agent, traceable to the clause that authorized the action. Now extending into defense, where sovereign substrates are the GTM differentiation.

The agentic SoR Amble is pointing at is the consequence. The substrate compiled from prescriptive documents is the cause. Gagik's "structurally understandable from the start" is the right phrase for it.

The Synthesis

Compilation as moat works until the compiler itself becomes the commodity, which is the recurring pattern. Managed Agents took Fastly down eighteen percent in three sessions because every new primitive commoditizes whatever sits one layer beneath it. The rule substrate might be the cause this cycle and the casualty in the next.

Mitchell Kosowski

I think the "trust architecture" point deserves more weight than it gets here. When agents replace UIs, the permissioning and audit layer is the moat and, equally important, the new attack surface.

Compromise an agent's credentials and you've silently phished an entire org's system of record. The SoRs that survive won't just be headless; they'll treat agent identity and policy as a first-class product, not a config tab.

Ken Heil

Software isn’t losing its head. Humans are losing the ability to distinguish between cognition and interface.

Most “AI products” aren’t intelligence breakthroughs — they’re UX wrappers around probabilistic systems. The real shift is that software no longer needs deterministic structure to feel useful. That changes how people perceive authority, truth, and expertise.

Dave

This piece breaks down the question, “If we can see the winners emerging in the AI race, who are the losers?” Applying the broad brush that all SaaS will lose, or that all SaaS will keep its moats with agentic AI, doesn’t follow the patterns of past market eruptions: client/server, cloud, Web 2.0, mobile.

The Synthesis

Right, each prior eruption sorted winners by a different chokepoint. The agent shift looks similar: when the agent itself goes free (OpenClaw hit 250K GitHub stars in four months), value migrates to whoever owns the proprietary data and workflow context. That filter splits SaaS into two piles fast. More in https://thesynthesisai.substack.com/p/the-free-agent.

Basit Tanveer

It's not just about becoming headless; software needs to adapt for machines. Currently, most software is made for humans.

Constrained Intelligence

A good example I’ve recently seen of an AI agent living on top of Salesforce is aiOla. It turns unstructured voice notes full of jargon (recorded while driving between clients, for example) into Salesforce data entries. In reverse, it can pull contextual data from Salesforce and, by connecting with their calendar, help prep salespeople for their next meeting.

Kingsley Uyi Idehen

Insightful post.

Additional thoughts:

"..betting that in an agentic world, its value lies in the data layer"

"..is there a new set of criteria?"

Defensibility in an era where AI agents, skills, and data spaces (databases, knowledge bases, filesystems, and APIs) are loosely coupled comes down to fine-grained, attribute-based access control.

This approach, which is HTTP-native (courtesy of the reawakened interest in the 402 status code), leverages a knowledge graph constructed from machine-computable attributes of the human operator, AI agent, and data spaces. Together, these define the conditions for access to the following:

1. Agent

2. Agent skill

3. Tools used by an agent skill

4. Data spaces accessed by tools

Items 1–4 are Gillette business model–compliant in the sense that they preserve the powerful loose coupling of razor handle and razor blades—now extended to a new scale through agents, their associated skills, tools, and target data spaces.
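As a rough illustration, the four-level chain above (agent, skill, tool, data space) can be sketched as a single attribute-based policy check. Everything here — the `Attributes` container, the specific rules, the field names — is an illustrative assumption, not any real ABAC library or API:

```python
# Minimal sketch of a four-level attribute-based access-control chain.
# All field names and rules are hypothetical, chosen for illustration.

from dataclasses import dataclass, field

@dataclass
class Attributes:
    """Machine-computable attributes of a principal or resource."""
    values: dict = field(default_factory=dict)

    def get(self, key, default=None):
        return self.values.get(key, default)

def check_access(operator, agent, skill, tool, data_space):
    """Grant access only if conditions hold at every level of the chain."""
    checks = [
        # 1. Agent: the operator must be allowed to run this agent.
        agent.get("owner") == operator.get("id"),
        # 2. Agent skill: the skill must be enabled for this agent.
        skill.get("agent") == agent.get("id") and skill.get("enabled", False),
        # 3. Tool used by the skill: the tool must be on the skill's allowlist.
        tool.get("id") in skill.get("allowed_tools", []),
        # 4. Data space accessed by the tool: clearance must cover sensitivity.
        operator.get("clearance", 0) >= data_space.get("sensitivity", 0),
    ]
    return all(checks)
```

The point of the sketch is the loose coupling: each level carries its own attributes, so swapping in a different tool or data space only changes the attributes presented, not the policy engine.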

Related

[1] https://kidehen.substack.com/p/using-attribute-based-access-control

Alec Pritzos

The compliance-critical column in that grid is the one that survives going headless. ERPs and payroll keep their stickiness because the moat is auditor and regulator workflows, not UI habit, and that doesn't change when agents are reading the schema directly. CRMs are where agentic disintermediation actually bites, because the durable layer was always the manager-ritual stack on top of the UI, not the underlying database. That points the next round of system-of-record competition at compliance-grade agent infrastructure, not at headless CRM repositioning.

Rahul Saxena

Systems of Record were built around humans operating through UIs. Their architecture is for ACID transactions, auditability, and workflow enforcement through the interface. Human coordination flowed top-down into data-entry workflows, while data flowed bottom-up from systems of record into systems of intelligence.

Agentic enterprises change the shape of that model. Coordination flows from higher-order objectives down through operational tiers, while data and signals move across a shared backplane. The enterprise becomes a system of interacting closed-loop control systems continuously sensing, deciding, acting, and learning.

As autonomous action moves outside the UI, the role of the interface shifts from procedural data entry toward signal, supervision, and intervention.

That changes where defensibility lives. The moat is no longer the screen or even the workflows. It is in the enterprise intelligence layers: workflow context, execution telemetry, exception handling, coordination logic, and the compounding data generated through autonomous execution.

Traditional DSS architectures (plan-schedule-dispatch) depend on expert human operators. Agentic systems use the lessons of DSS to move further toward autonomous operations supervised and tuned by humans.

The next enterprise platform is not another system of record. It is a system for coherent autonomous operations.

That is the direction we are building at RevInsight: https://revinsight.com/blog/systems-of-tiered-control-loops/.

Scenarica

The headless thesis has a valuation consequence that this piece implies but doesn't quite state. SaaS companies are priced on per-seat models. Salesforce charges $300/seat/month because a human sits in the UI and develops the muscle memory that makes switching painful. When an agent replaces that human, the seat disappears, the muscle memory becomes irrelevant, and the pricing model has to shift from seats to API calls or data volume.

API access is structurally lower-margin than seats because APIs are commoditised and interchangeable in a way that human habits aren't. The agent doesn't care whether it reads from Salesforce or HubSpot or a well-designed Postgres schema. It routes to whichever system returns the best data fastest. The switching cost that justified per-seat SaaS premiums for twenty years evaporates the moment the user is a machine rather than a person.

The defensibility moving downward into data models and compliance, and upward into networks and proprietary data generation, is the right framework. But the middle layer where most SaaS revenue currently sits, the UI and the seat, is exactly what's getting hollowed out. The question for every SaaS investor reading this piece is how much of their portfolio's valuation depends on humans continuing to sit in interfaces that agents are about to make optional.

ContextECF-The Context Fabric

Thanks, Seema. Loved the article. Very timely discussion. Just published our POV on the topic.

I think our apps framework is going to change drastically.