Categories: defense & war, privacy, too much government

AI Within Limits

Paul Jacob on the fight over Claude.

I’ve never consulted “Claude.” It’s an artificial intelligence (AI), and these things give me the creeps. But I must soldier on.

Anthropic, the maker of Claude, is in a special position: it’s currently the only frontier AI model cleared for use on classified U.S. military systems. But Anthropic limits use of Claude by the government: no mass domestic surveillance of U.S. citizens (such as tracking protesters or political opponents) and no development of fully autonomous weapons (where AI makes lethal decisions without human oversight).

Two cheers for Claude?

Regardless of your Huzzah level, being in a special position puts Anthropic in the crosshairs: The Pentagon demands unrestricted “all lawful use” access, rejecting any such safeguards or limits. 

According to Elizabeth Nolan Brown, writing in Reason, the “U.S. Department of Defense is in a standoff with artificial intelligence developer Anthropic over the company’s refusal” to play along with the federal government’s willingness to press beyond the limits of the Constitution. 

“This refusal hasn’t gone over well with the Trump administration,” explains Ms. Brown, going on to write that Secretary of War Pete Hegseth “has reportedly demanded that Anthropic remove its restrictions on certain military uses or else face consequences.”

In recent years we’ve witnessed too many companies complying with out-of-control government. And while it has become common to “lash out at big corporations, we should focus our anger on the actual root of these problems: the government,” the Reason article concludes. 

As it turned out in the social media de-platforming scandal, “the real enemy of civil liberties here is the government actors who are doing the bad deeds, demanding that tech companies go along with them. . . .” 

As our previous president used to say, “Don’t.”

This is Common Sense. I’m Paul Jacob.



One reply on “AI Within Limits”

If AI of some sort is considered critical to the purposes of the state, then anyone known to be blocking access to that AI would surely be killed.

I suspect that the US has access to all the relevant designs — hardware and software — for Claude, and that the US is running something at least as effective at one or more secret installations. Anthropic should not acquiesce, but they and we should harbor no illusions about their embargo.
