Microsoft Backs Anthropic in P...

Microsoft filed an amicus brief supporting Anthropic's lawsuit against the Pentagon's unprecedented "supply chain risk" designation, warning the blacklisting could disrupt military AI operations and harm national security.
Microsoft has thrown its weight behind Anthropic in a high-stakes legal battle with the Pentagon, urging a federal judge to grant a temporary restraining order blocking the Defense Department's unprecedented decision to label the AI company a national security "supply chain risk." In an amicus brief filed Tuesday in San Francisco federal court, Microsoft argued that the blacklisting, a measure typically reserved for foreign adversaries like Huawei, threatens to disrupt critical AI capabilities used by the U.S. military and could undermine America's leadership in artificial intelligence.
The dispute erupted after Anthropic refused to remove guardrails on its Claude AI model that prohibit its use for fully autonomous weapons and mass domestic surveillance. Defense Secretary Pete Hegseth formally designated Anthropic a supply chain risk last week, and President Donald Trump ordered all federal agencies to cease using Anthropic's technology within six months. Anthropic filed suit Monday, calling the designation "unprecedented and unlawful" retaliation for its ethical stance.
Microsoft, which has invested up to $5 billion in Anthropic and integrates its models into systems provided to the military, warned that without a restraining order, contractors would be forced to immediately reconfigure existing products, potentially "hampering US warfighters at a critical point in time" amid the escalating conflict with Iran. The company noted that the Pentagon gave itself six months to phase out Anthropic technology but provided no transition period for contractors, creating immediate compliance burdens.
More than three dozen AI researchers from OpenAI and Google, including Google chief scientist Jeff Dean, filed a separate amicus brief supporting Anthropic. The case centers on whether the government can punish companies for refusing to allow their AI to be used for autonomous warfare and surveillance, a question with profound implications for the entire tech industry.