Anthropic’s Claude Enlists With Pentagon, Raising the Stakes for Aligned A.I. in National Security

Updated March 14, 2026, 6:00am EDT · NEW YORK CITY



The deployment of advanced AI in American military decision-making raises thorny questions about technological safeguards, national power, and the limits of corporate ethics.

In an age when New Yorkers can order Korean tacos or off-brand toothbrushes with a thumb-flick, the spectre of weaponised artificial intelligence seems the stuff of a science-fiction subway poster. Yet in 2025, Anthropic, a brainy start-up founded by apostates from OpenAI, quietly took on a contract to supply its large language model, Claude, for sensitive U.S. defence work. Military-intelligence practitioners are now betting that New York’s surfeit of tech talent could as easily build the tools of war as the next e-commerce app.

The news sounds arcane—a software company updating its client list—until one considers that Claude is the first AI certified to operate on American classified systems. Unlike ubiquitous consumer chatbots, Claude’s remit is not to write haikus or fix code bugs. Instead, imagine analysts at Fort Hamilton or the NSA’s midtown New York outpost working through intercepted signal data with Claude, synthesising terabytes in moments and surfacing patterns for swift, consequential human judgment. Palantir, the defence contractor beloved by the Pentagon and eyed warily by civil libertarians, already offers Claude as a star player in its suite of military AI tools.

For New York, a hub of finance and technology and the home of both Wall Street wonks and policy mandarins, the impact is twofold. On the one hand, the city’s elite universities and surging start-up scene make it a logical recruitment pool for top-tier AI engineers. On the other, the risk of becoming the intellectual engine for tools that might shape, or upend, the global order is hardly abstract. The lines between academic, civilian, and defence innovation—in Midtown and beyond—are growing dangerously blurry.

There is money to be made—no surprise for a city built by ingenuity and capital. The government’s increasing reliance on AI to process, prioritise and even suggest military targets brings with it procurement contracts on a scale most tech start-ups only dream of. Still, among the social and academic circles where Dario Amodei, Anthropic’s anxious and high-minded CEO, once moved, the pivot from silicon idealism to military utility has meant awkward dinner parties and acute ethical hand-wringing. Amodei secured a deal that, at least on paper, prohibits using Claude for autonomous weaponry or domestic mass surveillance. In the world of government contracting, such stipulations are as rare as hen’s teeth.

These formal bonds between Anthropic and the Pentagon have real teeth only so long as both parties agree the rules matter—no small feat in an era where state power has a history of stretching or simply rewriting legal boundaries. Amodei’s gambit is clear: better to enlist Claude early, under safeguards, than to see a future where the technology is forcibly nationalised or, worse, run wild under laxer hands (or rival superpowers). Such anxieties pervade New York’s wider venture and academic class, many of whom take it as gospel that American primacy ought to include AI dominance—preferably with some vestige of restraint.

Manhattan’s Faustian bargain with machine learning

Second-order risks abound for New York and its denizens. The city’s political culture, steeped as it is in progressive rhetoric, nonetheless has an outsized stake in maintaining its status as the R&D centre of both cutting-edge commerce and national power. As AI becomes the engine—the neural network, one might say—of intelligence analysis and even battlefield tactics, the calculus of war risks tipping further away from elected civilian oversight towards what amounts to code-governed acceleration. Those writing and selling this code, of course, count many New Yorkers among their ranks.

Nationally, America’s embrace of AI-enabled military intelligence is mirrored uneasily by rivals, most notably China, which is both a real and a rhetorical spur for contract-hungry executives and Pentagon hawks alike. While France and Britain toy with AI-driven logistics, and Israel leads in battlefield autonomy, the American approach—centralised, lawyered, and with the nominal brake of a “human in the loop”—has become an industry precedent. It is a hedge as much against error as against the spectre of adversarial copycats.

What few in the corridors of NYC’s tech-and-policy power admit is the fundamental volatility of these safeguards. A contract can prohibit Claude from engaging in surveillance, but legal regimes and funding priorities lurch with every election (or security panic). Secretaries of war come and go; so do algorithms, updates, and even corporate leadership. As every Manhattan lawyer knows, what is true today may be null tomorrow—a risk when so much power is enshrined in digital ink.

The “soul doc” that Anthropic drafts for Claude—a constitutional analogue purporting to bind its outputs to some higher law—may placate earnest ethicists and assuage urban anxiety. It is, however, only partially enforceable. In the end, powerful technology in state hands tends to migrate towards efficiency, not virtue. When the stakes include national survival, military advantage, or the avoidance of being outflanked by Beijing, city-bred scruples can look puny.

Yet not all is gloomy. If New Yorkers are at the heart of this uneasy realignment between AI and the state, they are also uniquely placed to shape it—through politics, advocacy, and old-fashioned whistleblowing (at least until the AI figures out who the whistleblowers are). The city’s blend of big egos and bigger principles might serve as a brake on the most reckless uses, even as it profits from the AI boom. Cynics will see only dollar signs and state power. Realists will see contests both commercial and moral, waged over zip codes as much as codebases.

The lesson, as ever, is less to decry technology’s advance than to insist on its tempering by rules and real accountability. For now, both the city and its engineers are walking a knife’s edge, weighing public benefit against immense private and national power. Whether Claude—or its inevitable successors—will be an instrument of wisdom or woe remains unclear. For Manhattan and the rest of America, that ambiguity is the oldest story of all. ■

Based on reporting from News, Politics, Opinion, Commentary, and Analysis; additional analysis and context by Borough Brief.