The US Inquirer

What’s behind the Anthropic-Pentagon feud

By Jennifer Jacobs, Jo Ling Kent and Caitlin Yilek
February 25, 2026

Washington — The Pentagon gave Anthropic an ultimatum this week: Give the U.S. military unrestricted use of its AI technology or face a ban from all government contracts. 

At the center of the dispute is the question of who controls how artificial intelligence models are used: the Pentagon or the company's CEO.

The Pentagon’s AI contracts 

The Pentagon awarded Anthropic a $200 million contract in July to develop AI capabilities that would advance U.S. national security. 

Anthropic’s rivals, including OpenAI, Google and xAI, were also awarded $200 million contracts by the Pentagon last year. 

Anthropic is currently the only AI company to have its model deployed on the Pentagon’s classified networks, through a partnership with data analytics giant Palantir.

A senior Pentagon official told CBS News that Grok, which is owned by Elon Musk’s xAI, is on board with being used in a classified setting, and other AI companies are close. 

The Pentagon announced last month that it’s looking to accelerate its uses of AI, saying the technology could help the military “rapidly convert intelligence data” and “make our Warfighters more lethal and efficient.”

Clash over the guardrails 

The standoff between the Pentagon and Anthropic was reportedly set off by the U.S. military’s use of its technology, known as Claude, during the operation to capture former Venezuela President Nicolás Maduro in January. 

An Anthropic spokesperson said in a statement that the company “has not discussed the use of Claude for specific operations with the Department of War.”

Anthropic has repeatedly asked the Pentagon to agree to certain guardrails, among them a restriction on using Claude to conduct mass surveillance of Americans, sources told CBS News. 

The company also wants to ensure Claude is not used by the Pentagon to make final targeting decisions in military operations without any human involvement, one source familiar with the matter said. Claude is not immune to hallucinations and, without human judgment in the loop, is not reliable enough to avoid potentially lethal mistakes such as unintended escalation or mission failure, the source said.

When asked for comment, a senior Pentagon official said: “This has nothing to do with mass surveillance and autonomous weapons being used. The Pentagon has only given out lawful orders.”

Pentagon officials have expressed concerns to Anthropic that the company’s guardrails could stand in the way of critical actions, such as responding to an intercontinental ballistic missile launched toward the United States.

Any company-imposed restrictions “could create a dynamic where we start using them and get used to how those models work, and when it comes that we need to use it in an urgent situation, we’re prevented from using it,” Emil Michael, the undersecretary of defense for research, said at an event in February.

On the question of who is liable (the military or the AI company) when AI used to strike or kill military targets makes a mistake, a defense official said legality is the Pentagon's responsibility as the end user.

What top leaders are saying  

Anthropic CEO Dario Amodei has been vocal in expressing his concerns about the potential dangers of AI and has centered the company’s brand around safety and transparency. 

In a lengthy essay last month, Amodei warned of the potential for abuse of the technologies, writing that “a powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow.” 

“Democracies normally have safeguards that prevent their military and intelligence apparatus from being turned inwards against their own population, but because AI tools require so few people to operate, there is potential for them to circumvent these safeguards and the norms that support them. It is also worth noting that some of these safeguards are already gradually eroding in some democracies,” he wrote. 

Amodei has long backed what he describes as “sensible AI regulation,” including rules that would require AI companies to be transparent about the risks posed by their models and any steps taken to mitigate them.

The Trump administration, meanwhile, has favored a lighter touch, and has argued that stringent AI regulations could stifle innovation and make it harder for the American AI industry to compete. The administration has sought to block what it calls “excessive” state-level regulations. At one point last year, venture capitalist and White House AI and crypto adviser David Sacks accused Anthropic of “fear-mongering” and suggested its interest in AI regulations is self-serving.

In a January speech, Defense Secretary Pete Hegseth derided what he views as “social justice infusions that constrain and confuse our employment of this technology.” 

“We will not employ AI models that won’t allow you to fight wars,” Hegseth declared. “We will judge AI models on this standard alone; factually accurate, mission relevant, without ideological constraints that limit lawful military applications. Department of War AI will not be woke. It will work for us. We’re building war-ready weapons and systems, not chatbots for an Ivy League faculty lounge.” 

What’s next in the Anthropic v. Pentagon saga

Hegseth gave Anthropic until Friday to agree to give the U.S. military unrestricted use of its technology or risk being blacklisted, sources familiar with the situation told CBS News. 

Pentagon officials are considering invoking the Defense Production Act to compel Anthropic to comply on national security grounds.

Or, if an agreement can’t be reached, defense officials have discussed declaring the company a “supply chain risk” to push it out of government, according to the sources. 

© 2023 The US Inquirer

ISSN: 2832-0522