Robotics Needs Fewer Roboticists*
*per capita. A case for deploying intelligent manipulators today.
Robotics is keeping out the people it needs most. Over the last few years, the technology has advanced faster than most people expected, and yet real-world deployments remain stubbornly rare. That gap is not just a research problem. It is a people problem. The field has spent decades optimizing for a narrow profile of contributors, and now that deployment is finally within reach, it doesn’t have the builders required to get there.
If the goal is to see intelligent robots actually deployed and augmenting human labor at scale, then robotics needs fewer roboticists per capita. Not fewer people working on robots, but more operators, more people who care about customers and reliability, and more outsiders. It needs to look less like a research subfield and more like an industry.
While robotics is a broad field, this essay is primarily about robotic manipulators that learn from data rather than follow pre-programmed routines. This is where the gap between research promises and real-world deployment is currently widest.
The Moment Is Now
Most roboticists would agree real-world deployments remain the ultimate goal. The potential of robots is unlocked once they are doing useful work, repeatedly, in the messiness of the real world.
In practice, however, many robotics startups inherit the culture of research labs rather than of businesses. In robotics, novelty earns prestige. Reliability and customer obsession earn revenue. The two are rarely optimized together.
Intelligent robotic manipulators have been “almost there” for decades, and each generation felt one breakthrough away from robust generalization and real-world usefulness. Optimizing purely for research is rational when nothing is working. Hardware was expensive and bespoke, teleoperation was clunky and didn’t scale well, and robust and flexible learning-based techniques didn’t exist. If the core stack is brittle, deployments are premature.
What’s different now isn’t ambition. Robotics has always aimed for real-world deployment, but now the tools are crossing the thresholds required to make deployments viable. Pre-trained models provide strong priors for manipulation, and a relatively narrow post-training recipe built around behavior cloning and DAgger has emerged as a path to high success rates on targeted tasks. Vision-Language-Action (VLA) models and Video-Action Models (VAMs) are beginning to generalize beyond narrow demonstrations, showing early signs of adaptability across objects, environments, and task variations. At the same time, a new tier of low-cost hardware has emerged. Capable research-grade robotic arms are now available for a fraction of what platforms like the Franka or UR5e cost, moving robotic systems closer to economic viability.
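The post-training recipe mentioned above is worth making concrete. The toy sketch below is entirely hypothetical (a 1D setpoint task, a nearest-neighbor "policy", a clipped expert); none of it comes from a real robotics stack. The point is the DAgger loop itself: start from behavior cloning on expert demonstrations, then repeatedly roll out the *learner*, have the expert relabel the states the learner actually visits, aggregate the data, and retrain.

```python
import random

# Toy 1D setpoint task: state is a position, the expert moves toward a target.
TARGET = 5.0

def expert_action(state):
    """The expert (e.g. a teleoperator) can label any visited state."""
    return max(-1.0, min(1.0, TARGET - state))

def step(state, action):
    """Noisy dynamics: the action mostly works, plus a small disturbance."""
    return state + action + random.uniform(-0.05, 0.05)

def rollout(policy, start=0.0, horizon=20):
    """Roll out a policy and record the states it actually visits."""
    state, visited = start, []
    for _ in range(horizon):
        visited.append(state)
        state = step(state, policy(state))
    return visited

def fit(dataset):
    """'Train' a policy: here, 1-nearest-neighbor over (state, action) pairs."""
    def policy(state):
        nearest_state, action = min(dataset, key=lambda sa: abs(sa[0] - state))
        return action
    return policy

random.seed(0)

# Round 0: plain behavior cloning on expert demonstrations.
demo_states = rollout(expert_action)
data = [(s, expert_action(s)) for s in demo_states]
policy = fit(data)

# DAgger rounds: run the learner, have the expert relabel the states the
# learner visits (including its mistakes), aggregate, and refit.
for _ in range(3):
    visited = rollout(policy)
    data += [(s, expert_action(s)) for s in visited]
    policy = fit(data)

final_state = rollout(policy)[-1]
print(f"final state: {final_state:.2f} (target {TARGET})")
```

The reason this loop matters in practice is the relabeling step: pure behavior cloning only sees states the expert visits, so small errors compound once the learner drifts off the demonstration distribution, while DAgger collects corrections exactly where the learner actually goes.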
Robots still fail in deployment. Today’s available models seldom meet cycle time requirements, they have limited memory, and our understanding of post-training and robustness is in its adolescence. But we’re no longer blocked by “nothing works.” The blocker is building systems that prioritize reliability, ease of integration, iteration speed, and unit economics. That is a different kind of problem, and it requires a different kind of builder.
The Missing Application Layer
The intelligence stack in robotics is improving rapidly, but there is a missing layer between current capabilities and economic impact. In software, foundation models unlocked value because companies built application layers on top of them: products integrated into workflows, sold to customers, and iterated on in production. Robotics needs an equivalent layer.
Even if autonomy worked perfectly for every use case, deploying a robot would still require custom hardware, robust telemetry and monitoring, purpose-built sensors, custom end effectors, remote teleoperation infrastructure, fleet management tooling, safety systems, and escalation pathways. Waiting for autonomy to be perfect before building these systems is backward. These systems are what make autonomy economically meaningful in the first place.
Labor is the product, not full autonomy. If a robot can deliver reliable labor, even with teleoperated fallbacks, it creates value immediately. Yes, full autonomy reduces the teleoperator time required to deliver that labor, driving down operating costs over time. But that is a margin improvement, not a prerequisite for deployment.
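The margin-versus-prerequisite distinction can be made concrete with a back-of-envelope model. Every number below is a hypothetical assumption for illustration, not a figure from any real deployment: a robot's amortized cost per task, a teleoperator hourly rate, and a fallback rate that shrinks as autonomy improves.

```python
# Back-of-envelope unit economics for "labor is the product" with a
# teleoperated fallback. All numbers are hypothetical assumptions.

def cost_per_task(autonomy_rate, robot_cost_per_task=0.40,
                  teleop_minutes_per_fallback=2.0, teleop_rate_per_min=0.50):
    """Expected cost to deliver one unit of labor.

    autonomy_rate: fraction of tasks completed with no human intervention.
    Tasks the robot cannot finish fall back to a teleoperator.
    """
    fallback_cost = ((1.0 - autonomy_rate)
                     * teleop_minutes_per_fallback
                     * teleop_rate_per_min)
    return robot_cost_per_task + fallback_cost

price = 2.00  # hypothetical price a customer pays per task

for rate in (0.50, 0.80, 0.95):
    cost = cost_per_task(rate)
    margin = (price - cost) / price
    print(f"autonomy {rate:.0%}: cost ${cost:.2f}, gross margin {margin:.0%}")
```

Under these assumptions the system is already profitable at 50% autonomy; pushing autonomy from 50% to 95% roughly halves the cost per task. That is the essay's claim in miniature: autonomy improvements widen margins, but they are not what makes the first deployment possible.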
Reframing the problem this way changes how companies are built. Teams can deploy today, learn customer workflows, develop custom hardware and software stacks, and build operational expertise while riding the future wave of autonomy improvements. The moat for a deployment-layer company comes from relationships, domain knowledge, and integration depth within a specific vertical, not just model performance.
The application layer gap is not just a technical gap. It is a people gap. Building it requires operators who understand customer workflows, engineers who care about reliability over novelty, and product builders who can translate messy real-world constraints into systems that actually ship. These are not the people robotics has historically emphasized. They are exactly the people the field’s current culture screens out.
The Talent Bottleneck
The field of robotics has a gatekeeping problem that the current moment can no longer afford.
One of the things that surprised me most while building my last company was how quickly bachelor's-level interns were able to contribute. The work was hard, but they excelled because they were living inside the latest papers in a way that most industry engineers simply aren’t. University labs and clubs have a kind of information velocity that’s easy to underestimate from the outside. New techniques propagate fast, everyone is tinkering, and the culture rewards trying things before they’re proven. The interns who came out of those environments arrived with deep intuitions about methods that were only months old. They got up to speed and started contributing faster than anyone expected.
That experience shaped how I think about the cultural problem in robotics. Interest in the field has surged over the last couple of years. Where do these people go? Hiring has grown, but it’s been narrowly concentrated. A handful of large humanoid companies like Tesla, Figure, and 1X are absorbing much of the available talent. Outside of those gravitational wells, hiring has largely remained conservative and credential-heavy. I’ve met countless operators, software engineers, product builders, and ML engineers from adjacent fields who want to work on robotics but struggle to find a way in.
Graduate degrees have become soft requirements to touch a robot. Experienced roboticists are treated as scarce specialists to be insulated rather than force multipliers who can mentor and scale teams. The result is slower execution and fewer entry points for new contributors, at precisely the moment when the field needs more builders. Communities like Hugging Face have lowered the barrier to entry, but hiring practices haven’t fully caught up.
The irony is that narrow specialization is increasingly a liability. Many of the most important techniques are only months old. Most of the practical knowledge lives in hands-on experimentation rather than textbooks. Deployment rewards applied judgment: knowing how to curate data, debug complex failure modes, and iterate toward reliability. That kind of judgment is not exclusive to people with graduate degrees.
Building in robotics is extremely hard. It involves hardware quirks and software failure modes you don’t encounter in pure software. But difficulty alone doesn’t justify insulation. SpaceX routinely gives early-career engineers responsibility for systems that land rockets. Every fast-moving technical field grows by absorbing outsiders who bring new tools, abstractions, and instincts with them. If robotics remains resistant to cross-pollination, it guarantees its own bottleneck. We should invest more in early-career talent and candidates from adjacent fields. Prioritize slope over intercept.
Deployment is the Forcing Function
Deployments cannot be treated as the final step that happens after “real innovation” is complete. They need to become a forcing function that shapes the next cycle of research. Real-world use exposes failure modes that research environments rarely capture: constraints around reliability, maintainability, operator workflow, integration with existing tooling, and cost. Those pressures should flow back into every layer of the stack: hardware design, data infrastructure, training and evaluation pipelines, control software, and the models themselves. Deployment is not the final stage of innovation. It is the mechanism that guides where innovation should go next, and it demands a different kind of talent to drive it.
Waiting for universal embodiments and universal intelligence is a convenient way to never deploy anything. The goal isn’t to start with the perfect robot. It’s to start with a robot that can deliver value today, even if narrowly. The minimum viable deployment may be constrained, heavily teleoperated, or limited to a small slice of a workflow, but it grounds progress in reality. Every deployed system forces clarity around reliability, cost, operator experience, maintenance, and customer needs: problems that cannot be solved in isolation. The scaffolding required for early deployments (safety systems, monitoring, logging, remote intervention, onboarding, and operations infrastructure) is not wasted effort. It becomes the foundation that future autonomy scales on.
Deployment also expands the robustness manifold: the set of tasks and conditions in which robotic systems can create value. Research proves capability on narrow tasks, but deployment exposes where entire systems actually break: latency bottlenecks in software, weaknesses in data pipelines, hardware limitations, and workflow mismatches with real operators. Each deployment generates feedback that shapes how models are trained, how software is written, and how hardware should be designed.
For builders, this creates a compounding strategic opportunity. The companies closest to deployment are often the ones best positioned to ride the next wave of model capabilities. The playbook already exists in software. Harvey, the legal AI platform now valued at $8 billion, started out in 2022 on GPT-3, a model that hallucinated, needed manual review on every output, and couldn’t handle complex legal reasoning. Rather than waiting, they spent that window embedding themselves inside law firms: training models on each firm’s private work product, hiring ex-BigLaw attorneys to sell and build, and signing Allen & Overy before GPT-4 even existed. The model was mediocre. The relationships, domain knowledge, and firm-specific data were not. When better models arrived, Harvey absorbed them on top of a distribution moat that no competitor starting fresh could replicate. Their co-founder, Gabe Pereyra, captured it well: “Don’t build for the current capabilities of models today—build for where the models are going to be. Tackle more complex versions of problems so that when better versions come out, they aren’t solved as a side effect.” Robotics companies solving operationalization, customer integration, and deployment infrastructure today are the ones that will compound on every model improvement that comes next.
Researchers who have carried the field this far deserve enormous credit, and they should absolutely keep pushing the frontier. The community needs to grow by addition, not by replacement. We need to welcome a new wave of builders whose primary job is deployment, iteration, and integration to solve the operationalization problem. Deploying robots into real businesses is itself an exciting, first-class engineering problem; it is distinct from research, and it demands its own dedicated class of builders.
A New Age of Robotics
Robotics needs fewer roboticists. Not overall, but per capita.
It needs operators, designers, and infrastructure engineers; people who sell, support, and scale systems; and young talent willing to own deployments end-to-end. It needs to be more open as a community and more ruthless about grounding itself in the real world. Creating value is the benchmark that matters in the long run.
Past waves of robotics pushed the frontier forward. This next one has the chance to turn that progress into real, durable value, but only if the field is willing to let more people in and treat deployment as the forcing function it actually is.
Robots don’t change the world until they leave the lab. It’s time to build.
If you’re building in this space, or seriously thinking about building at the application layer, we’d love to hear from you. The next wave of progress will come from people who are willing to iterate in the real world.
Thank you to Kyle Vedder, Advait Patel, Utkarsh Singhal, Vishnu Mano, Shiza Charania, Nicholas Wade, Connor Soohoo, and Tracy Livengood for reviewing drafts and for discussion.










