9 Comments
SDF

I have so many problems with the justifications in this article that I'm not sure I want to keep receiving this Substack. I suppose it gives me insight into how people rationalize destroying our liberties, in addition to treating humans as cannon fodder and disregarding the sovereignty of other nations. The only point I viewed as valid (if it's true) is the language in the prior contracts. Any business person will tell you that contracts involving technology need the flexibility to be updated as the technology evolves.

Chris

Are you suggesting Ukrainians shouldn't be provided weapons to defend themselves from an unprovoked war put upon them by a totalitarian government?

SDF

No. That is not remotely what I stated.

surfbgull

Dear Under-Secretary Michael,

If a government of free people "has to have a monopoly on violence to protect the Country," then why did our free people insist on the 2nd Amendment? Must free people also grant their government, in the name of "protecting the Country," broad, partisan, highly classified discretion in the adoption of nascent technologies to fortify that monopoly, even over the warnings and objections of the inventors of those technologies? Seems like a Faustian bargain.

Jim Hillhouse

Talk about a one-sided discussion. I think at this point it would be necessary to interview Dario Amodei to get his view on some of the things Michael states that Anthropic has called lies. Not misstatements, lies.

We should remember Michael’s history as Chief Business Officer at Uber, including his involvement there in the “God View” tool that violated customers’ privacy by tracking them without permission, and his proposal of a $1 million campaign to investigate the personal lives of journalists critical of the company. All of this was verified in the Holder Report.

Also worth mentioning is his very close relationship with Sam Altman.

It is only fair that Amodei get the chance to offer his perspective on the AI dispute with DoD…or DoW, whatever.

PatX
Mar 4 (edited)

This person seems to forget that bureaucracy is a product of democracy. Its goal is to protect individuals from rash decisions.

Sure, it would be easier to discard all the rules when they annoy you. But we live in an adult world. Your actions have consequences, especially at this level. In bureaucracy, just like at the international level, rules and laws exist for a reason. They are not mere friction to be gamed or ignored.

So instead of spending your energy on working around them (or worse, breaking them) try to understand why they exist. Engage with the underlying principles and constraints, rather than treating them as obstacles to be bulldozed.

If AI vendors refuse to let their technology be used for war, then the question should be: Why do they refuse? What values, norms, or legal expectations are they signaling? That is the real conversation to have—not how to bypass contracts or pressure companies into behaving against their better judgment.

8Lee

TL;DR: We're going to keep investing in lots of companies; better keep up.

Christopher Wolff

The vendor-lock problem Emil describes isn't limited to AI models. The same single-threading exists in the communications layer underneath those models.

Every tactical mesh network the military deploys today runs ALOHA-derived flooding protocols — the same collision-prone architecture from 1971. At 50+ nodes, collisions scale quadratically and the mesh chokes. That's not a vendor problem. That's a physics problem baked into the protocol layer.
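The quadratic scaling is easy to see in a back-of-envelope model. The sketch below uses a slotted-ALOHA-style contention model as an illustration (the per-slot transmit probability and function names are hypothetical, not taken from any fielded waveform): the number of node pairs that can collide grows as n(n-1)/2, and per-slot success probability collapses as the node count climbs past the optimum.

```python
# Back-of-envelope model of contention-based (ALOHA-style) channel access.
# With n nodes each transmitting in a given slot with probability p, the slot
# carries a frame successfully only if exactly one node transmits:
#   P(success) = n * p * (1 - p)**(n - 1)
# Pairwise collision opportunities grow as n*(n-1)/2, i.e. quadratically.

def aloha_success_prob(n_nodes: int, p_tx: float) -> float:
    """Probability that a given slot carries exactly one transmission."""
    return n_nodes * p_tx * (1 - p_tx) ** (n_nodes - 1)

def collision_pairs(n_nodes: int) -> int:
    """Number of node pairs that can collide (quadratic in n)."""
    return n_nodes * (n_nodes - 1) // 2

if __name__ == "__main__":
    p = 0.05  # hypothetical per-slot transmit probability
    for n in (10, 25, 50, 100):
        print(f"n={n:3d}  collision pairs={collision_pairs(n):5d}  "
              f"P(success)={aloha_success_prob(n, p):.3f}")
```

With these illustrative numbers, going from 10 to 100 nodes multiplies the collision opportunities by more than 100x while per-slot throughput falls by an order of magnitude, which is the "mesh chokes" behavior described above.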

GPS-synchronized TDMA solves it by assigning deterministic time slots to every node. Zero collisions, zero flooding, works in GPS-denied environments. $200 in COTS silicon does what a $15K mil-spec radio does for mesh comms — without the procurement timeline Emil is trying to dismantle.
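The deterministic-slot idea can be sketched in a few lines. This is a minimal illustration under assumed parameters (the 10 ms slot length and 64-slot frame are hypothetical, not any real radio's timing plan): every node derives the current slot index from a shared time reference, and only the slot's owner transmits, so two nodes can never collide.

```python
# Minimal sketch of GPS-disciplined TDMA scheduling (illustrative only):
# all nodes compute the same slot index from a shared time reference,
# and each slot has exactly one owner, so collisions cannot occur.

SLOT_US = 10_000   # hypothetical slot length: 10 ms
N_SLOTS = 64       # hypothetical frame size: 64 slots per frame

def current_slot(time_us: int) -> int:
    """Slot index within the frame, derived from shared (GPS) time."""
    return (time_us // SLOT_US) % N_SLOTS

def may_transmit(node_id: int, time_us: int) -> bool:
    """A node transmits only in its own deterministic slot."""
    return current_slot(time_us) == node_id % N_SLOTS

if __name__ == "__main__":
    t = 1_234_560_000  # any shared timestamp in microseconds
    owners = [n for n in range(N_SLOTS) if may_transmit(n, t)]
    print("slot:", current_slot(t), "owners:", owners)  # exactly one owner
```

The GPS-denied claim rests on the same structure: as long as nodes hold a common time base (e.g. from a disciplined local oscillator after losing GPS), the slot computation stays consistent across the mesh.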

The "simple requirements, fixed price, fast development cycles" model he's describing is exactly how this gets deployed. Clear demand signal, risk-sharing with industry, and let the venture-backed companies compete on execution rather than paperwork.

The AI governance question matters. But the radio layer those AI models depend on is the bottleneck nobody in this conversation is addressing.

Chris

ML AI will NEVER be reliable enough for weapons applications without the assistance of brute-force traditional AI. ML AI is incapable of dealing with the fog of war. Many years of AI in weapon systems have proven this to be true. The "new" ideas in ML AI of pseudo-random root-finding combined with gradient methods are very old ideas that have already proven to be failures in weapon systems.