I wonder if this is how the excellent German engineers who made all those pretty shower heads and ingeniously laid out rail networks for Dachau thought about their day jobs. Efficient, aligned with stated objectives… Good grief, are none of you people interested in who uses Palantir’s tech, and for what? I suspect you know — one of A16Z’s “new media team” or whatever told me privately once that “yeah, they’re basically straight up evil”. I guess the engineering is sufficiently shiny and the pile of money is sufficiently large. Good luck explaining it to your kids, though.
Same thought. Palantir is part of a political and cultural push. Nothing fun or smart about this.
“We’re like Nazis, but for X!”
Yes. Great post. For me, having spent a lot of my career building platforms, it was just head-nodding all the way; first time in a while I've read an entire post end to end.
One comment: building a platform requires some founder/high-level person to be really quite technical and able to communicate to non-technical folks why there needs to be resourcing on something that looks like it doesn't add overall product value in the short term; otherwise building those primitives is never going to be staffed appropriately...
This is a useful corrective to the overused “Palantir-for-X” framing and rightly emphasizes that Palantir’s differentiation is not high-touch delivery per se, but a platform-first architecture where forward-deployed engineering functions as a temporary adoption accelerator rather than a permanent revenue engine. The key risk the post highlights, and that investors should underwrite explicitly, is mistaking bespoke, engineer-heavy deployments for scalable software economics when the underlying product lacks reusable primitives, unified data models, and workflow abstractions. From a financial lens, the telltale signals are straightforward: improving subscription-to-services mix, declining engineering cost per dollar of ARR, and expansion-driven net retention that confirms platform leverage. Absent these dynamics, “Palantirization” is less a strategy and more a euphemism for a services-led business with capped margins and limited long-term defensibility.
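For concreteness, here is a minimal sketch (all figures hypothetical) of how one might track two of those signals, subscription-to-services mix and engineering cost per dollar of ARR, quarter over quarter:

```python
# Hypothetical quarterly figures ($M) to illustrate the signals above;
# real inputs would come from the P&L and ARR reporting.
quarters = [
    # (quarter, subscription_rev, services_rev, eng_cost, arr)
    ("Q1", 4.0, 6.0, 5.0, 20.0),
    ("Q2", 5.5, 6.0, 5.5, 28.0),
    ("Q3", 7.5, 5.5, 6.0, 40.0),
    ("Q4", 10.0, 5.0, 6.5, 55.0),
]

for q, sub, svc, eng, arr in quarters:
    mix = sub / (sub + svc)   # subscription share of total revenue
    ratio = eng / arr         # engineering cost per dollar of ARR
    print(f"{q}: subscription mix {mix:.0%}, eng cost per $1 ARR ${ratio:.2f}")
```

Platform leverage shows up as the mix rising while the cost ratio falls; flat or worsening ratios point to the services-led trap described above.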
Thanks for the inspo, Mark!
Wrote a few words extending the thinking a bit in a couple of areas, based on our on-the-ground experience: https://enterprisecontextmanagement.substack.com/p/the-saas-selloff-is-a-verdict-on
21 years of unprofitability, bankrolled entirely by Peter Thiel for the first 3 years; a long-term horizon essentially propelled by an SV incumbent. Here's a Gemini chat I had that gave me good context: https://share.google/aimode/ns2p5smpUCoapdDSJ
https://open.substack.com/pub/jverheyden/p/ted-kaczynski-charles-reich-and-the?r=5f3kwg&utm_medium=ios&shareImageVariant=overlay
A lot of “Palantir for X” feels like brute‑forcing AI adoption with human FDEs; the AI‑native services opportunity, I think, is to replace that with opinionated agentic primitives—small, reusable AI systems that embed into workflows, learn across customers, and compound like software while still meeting enterprises where their data and processes actually live.
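To make "agentic primitive" slightly more concrete, here is one possible shape such a unit could take. This is entirely illustrative; the class and method names are invented, and the "learning" here is just a feedback buffer standing in for whatever cross-customer aggregation a real system would do:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgenticPrimitive:
    """A small, reusable AI step with an opinionated contract.

    Illustrative sketch only: 'run' wraps whatever model call does the
    work, and 'record_feedback' accumulates corrections that could feed
    prompt/model updates across deployments (with proper data boundaries).
    """
    name: str
    run: Callable[[dict], dict]  # workflow payload in, enriched payload out
    feedback: list = field(default_factory=list)

    def __call__(self, payload: dict) -> dict:
        # Embeds into an existing workflow as a plain callable step.
        return self.run(payload)

    def record_feedback(self, correction: dict) -> None:
        self.feedback.append(correction)

# Usage: a hypothetical invoice-triage primitive dropped into a workflow.
triage = AgenticPrimitive("invoice_triage",
                          run=lambda p: {**p, "routed_to": "ap_team"})
print(triage({"invoice_id": 42}))
```

The design point is the narrow contract: a primitive like this can be redeployed across customers and composed into pipelines, rather than rebuilt per engagement by a human FDE.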
This piece offers invaluable perspective for tech founders. As someone involved in the software and consulting industry within healthcare, I found your analysis both enlightening and relatable. It effectively underscores the challenges I’ve been grappling with, providing a fresh perspective and potential solutions. Thank you for sharing your experience!
Well-written piece on the topic. A lot of early-stage teams are falling into this trap at the moment and would benefit from reading this.
Understand that many problems are created by the ideas and actions of people who aren’t highly intelligent. So, like a parent observing a child who is having a problem, the solution is quite apparent. Sheer human stupidity (or mediocrity, and often greed or selfishness) is the source of most problems. Brainiacs, to put it simply, are good at spotting weak sauce.
Complex problems can be the creation of other brainiacs — we see this in law, politics, military, business, etc. But even then it’s not hard, because it’s not logical for smart people to create problems for the sake of problems. Behind it is some form of motivation. So revealing the motivation often reveals the solution to the problem.
One of the hallmarks of a high IQ is a sort of quantum-wiring in the brain that senses patterns and relationships between seemingly unconnected ideas, concepts, systems, etc. That’s a handy problem-solving talent. The really smart ones can turn that technique off and perceive . . . hyper-objectively? Being able to see the elements of a problem as they are without delusion, cognitive biases, etc. can strip away the impediments to a solution. That’s the “enlightenment” monks and others try to achieve. A few brainiacs are just born that way. Or maybe all of us are until we’re taught the rules.
This is where the “childlike wonder of the genius” stereotype comes from.
Metaphorically, the more intelligent people are, the more “multi-dimensional” they are able to think. Here’s an example: Your problem is represented as a square. If you’re only able to think two-dimensionally, you aren’t aware that the square is actually a cube. The three-dimensional thinker can see the cube, rotate it and find the answer on a side that is obscured from the two-dimensional thinker. A four-dimensional thinker can see all the cube’s outer sides at the same time, and the sides on the inside of the cube, and the negative space the cube creates, which reveals the sides of other cubes.
Whew! That was a long sentence. Here's another way: brainiacs can work with concepts the way you see an M.C. Escher drawing. Sort of.
So. To finally answer your question . . . we can only think (or problem-solve) up to the dimension level we are able to comprehend. A two-dimensional thinker may suspect, or even understand, that the problem represented by a square is actually a cube, but he'll have to try very hard to conceive of it; the brainiac sees all the sides naturally, turns the cube inside out, and then shows you the side with the solution.
Very interesting
Great write-up on what may not generalize about Palantir's approach, especially on leveraging platform primitives (which I think many AI tools are missing today).
To the point about FDEs being the product of a decade of training individuals in a specific skill set, my hypothesis is that the tech industry as a whole will incentivize developing these skill sets. Palantir might have been first in operationalizing the technical + customer-centric dual threat, but I'm pretty convinced that the blurring of PDE functions will make this a core competency for more PM/eng/design roles too.
The near/medium-term problem of deploying enterprise-grade AI workflows will only get worse, and our best solution is throwing more people at it, not agents.
Maybe the best way to think about this is that the future of PDE will converge onto FDE skill sets for many enterprise AI tools. You need reality engineers.
At Palantir, there’s the idea of “decomp,” which is the art of breaking a problem down into its most critical subparts. To take this one step further, I’d encourage companies/startups to think less of “to FDE or not FDE,” and instead “decomp” what kind of network of roles and responsibilities makes the most sense for their product and company. We live in a world where it is easier than ever to blur the boundaries of traditional tech roles. You can be a customer-obsessed MLE, or a product-oriented infra engineer or some other crazy combination. You can demand this out of the people you hire, or go through the discovery process to figure out what this should look like.
While the success is partly the work of FDEs themselves, it's also a product of how the entire company operates. It's a testament to the company's humility to take a step back, tune out the noise, and say "hey, the traditional SWE/PM/manager model doesn't work for us, we want to invent a role that is unique to the work we do." That's really where the value lies.
Stefan Krawczyk and Tushar Kholia:
The resourcing tension Stefan flags can become the financial trap Tushar describes. The right technical voice helps navigate it smartly, but not without a benefit horizon.
Founders can't hire their way through the project-to-platform transition forever. They must establish primitives that automate meaningfully and deliver operational leverage. Otherwise engineering cost per ARR never improves.
As an operator, I'd add one measurable signal: declining variance in deployment timelines across customers, which is often a good proxy for engineering cost.
If every engagement is still a snowflake at customer 20, the primitives aren't actually reusable plumbing yet, regardless of what the deck says.
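A minimal sketch of how one might measure that signal (durations hypothetical): compare the spread of deployment times in early versus recent customer cohorts and watch whether it tightens.

```python
from statistics import mean, pstdev

# Hypothetical deployment durations in weeks, ordered by customer number.
deploy_weeks = [26, 18, 30, 22, 14, 16, 9, 11, 8, 10, 7, 9]

# Reusable primitives should show up as both a lower mean and a tighter
# spread in the recent cohort; a stubbornly wide spread means snowflakes.
early, recent = deploy_weeks[:6], deploy_weeks[6:]
for label, cohort in (("early", early), ("recent", recent)):
    print(f"{label}: mean {mean(cohort):.1f} wks, stdev {pstdev(cohort):.1f}")
```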
Good Lord, a16z, well outlined. @Tereza, have we forgotten machine learning all of a sudden, or perhaps you're hurt by the obvious? I do find this post educationally hilarious, a touch of Palantir-style marketing on steroids. Love it. 😂💕 Before we criticise, let's rather consider how to harness its effectiveness to serve a greater purpose.
Whatever we might be seeing most in the news, this same strategy is likely a big part of it.