Anthropic’s artificial intelligence model Claude was used in the U.S. military operation that captured former Venezuelan President Nicolás Maduro in January 2026, The Wall Street Journal reported, citing people familiar with the matter.
The mission included bombing several sites in Caracas last month; Maduro and his wife were captured in the operation and taken to New York to face drug-trafficking charges.
Claude was deployed through Anthropic’s partnership with data-analytics company Palantir Technologies, whose tools are widely used by the Defense Department and federal law enforcement agencies.
Anthropic declined to comment on whether Claude was used in any specific operation, saying only that any use of Claude must comply with its usage policies, which govern how the model can be deployed across both private-sector and government applications.
The Defense Department also declined to comment on the report. Palantir Technologies did not immediately respond to requests for comment.
Anthropic’s usage guidelines prohibit Claude from being used to facilitate violence, develop weapons, or conduct surveillance. These restrictions have created tension with Pentagon officials.
The conflict has pushed administration officials to consider canceling Anthropic’s contract, which is worth up to $200 million and was awarded last summer.
Anthropic Chief Executive Dario Amodei has publicly expressed concern about AI’s use in autonomous lethal operations. Domestic surveillance represents another major sticking point in current contract negotiations with the Pentagon.
Anthropic was the first AI developer whose models were used in classified Department of Defense operations. Other AI tools may have been used in the Venezuela operation for unclassified tasks.
The Pentagon is pushing top AI companies to make their tools available on classified networks. Among them is OpenAI, which recently joined Google’s Gemini on an AI platform for military personnel.
That platform is used by about three million people; its custom version of ChatGPT analyzes documents, generates reports, and supports research.
Many AI companies are building custom tools for the U.S. military. Most of these tools are available only on unclassified networks used for military administration.
Anthropic is the only company whose tools are available in classified settings through third parties. Even in those environments, however, the government remains bound by Anthropic’s usage policies.
Amodei and other co-founders of Anthropic previously worked at OpenAI. Amodei has broken with many industry executives by calling for greater regulation and guardrails to prevent harms from AI.
The constraints have escalated the company’s battle with the Trump administration, which has accused Anthropic of undermining the White House’s low-regulation AI strategy.
Among the accusations are claims that Anthropic is calling for more guardrails and for limits on AI chip exports. Anthropic recently raised $30 billion in its latest funding round and is now valued at $380 billion.