I thought it was fun to search for a solution that can beat every level (eventually found one!) As far as I know, no LLM can do this on its own, which tells us something about the kind of problems they’re weak at.
I've gathered that the dispute is over Anthropic's two red lines: mass surveillance and fully autonomous weapons. Is there any information (or rumors even) about what the specific request was? I can't believe the government would be escalating this hard over "we might want to do autonomous weapons in the vague, distant future" without a concrete, immediate request that Anthropic was denying.
Even if there was a desire for autonomous weapons (beyond what Anduril is already developing), I would think it would go through a standard defense procurement procedure, and the AI would be one of many components that a contractor would then try to build. It would have nothing to do with the existing contract between Anthropic and the Dept of War.
It’s about punishing a company that is not complying. It’s a show of force to deter any future objections on moral grounds from companies that want to do business with the US gov.
My understanding is that it’s about the contract allowing Anthropic to refuse service when they deem a red line has been crossed. Hegseth and friends probably don’t want any discussion to even start about whether a red line may be in the process of being crossed, or to have to answer for that. They don’t want the legality or ethicality of any operation to be under Anthropic’s purview at all.
I think you're right, this isn't about a specific request but about defense contractors not getting to draw moral red lines. Palmer Luckey's statement on X/Twitter reflects the same idea: https://x.com/PalmerLuckey/status/2027500334999081294
The thinking seems to be that you can't have every defense contractor coming in with their own, separate set of red lines that they can adjudicate themselves and enforce unilaterally. Imagine if every missile, ship, plane, gun, and defense software builder had their own set of moral red lines and their own remote kill switch for different parts of your defense infrastructure. Palmer would prefer that the President wield these powers through his Constitutional role as commander-in-chief.
There's a hell of a difference between "we don't like your terms so we're going to use a different supplier" and "we don't like your terms, so we're going to use the power of the federal government to compel you to change them". The president is the commander-in-chief of the military, but Anthropic is not part of the military! Outside serving the public interest in a crisis, the president has no right to compel Anthropic to do anything. We are clearly not in a crisis, much less a crisis that demands kill bots and domestic surveillance. This is clear overreach, and claiming a constitutional justification is mockery.
I'd encourage you to look up the Defense Production Act. Its powers are probably broad enough that the President could unilaterally force Anthropic to do this whether or not it wants to. It's the same logic that would allow him to force an auto manufacturer to produce tanks. And the law doesn't care whether we are in a crisis or not. It's enough that he determine (on his own) that this action is "necessary or appropriate to promote the national defense."
However, it looks like Trump isn't going to go that route-- they're just going to add Anthropic to a no-buy list, and use a different AI provider.
Ok? And? Trump could use the DPA to force Ford to make tanks in a war, just as he could use the DPA to force Anthropic to make AI in a war. Are we in a war? No. We are not in a crisis.
Of course a contractor could not decide to unilaterally shut off their missile system, because that would be a contract violation.
A contractor may try to negotiate that unilateral shut off ability with the government, and the government should refuse those terms based on democratic principles, as Luckey said.
But suppose the contractor doesn’t want to give up that power. Is it okay for the government to not only reject the contract, but go a step further and label the contractor as a “supply chain risk?” It’s not clear that this part is still about upholding democratic principles. The term “supply chain risk” seems to have a very specific legal meaning. The government may not have the legal authority to make a supply chain risk designation in this case.
It sounds like the "supply chain risk" designation is just about anyone who works with the DoD not using them, so their code doesn't accidentally make it into any final products through some sub-sub-subcontractor. Since they've made it clear that they don't want to be a defense contractor (and accept the moral problems that go with it), the DoD is just making sure they don't inadvertently become one.
I think this is different. It’s a statement that this product is not qualified to perform that function (autonomous killing decisions). I think it is pure madness to think AI is currently up to this task. I also think it should be a war crime, and that Congress should pass a law forbidding it.
There seem to be two separate lines of thought in this conversation: first, that the AI tech isn't smart enough for us to trust it with autonomously killing people. Second, even if it was smart enough, maybe such weapons are immoral to produce?
The first line of thought is probably true, but could change in the next 5 years-- so maybe we should be preparing for that?
The second line of thought is something for democracies to argue about. It's interesting that so many people in this thread want to take this power away from democratic governments, and give it to a handful of billionaire tech executives.
The dispute is over the supply chain risk designation though, not over the refusal to sign a contract. If only the latter had happened, we wouldn't be talking here. You're explaining why the department wouldn't want contractors to dictate the terms of usage of their products and services, but not why the designation itself would be seen as necessary, even in their own eyes.
>In 2025, reportedly Anthropic became the first AI company cleared for use in relation to classified operations and to handle classified information. This current controversy, however, began in January 2026 when, through a partnership with defense contractor Palantir, Anthropic came to suspect their AI had been used during the January 3 attack on Venezuela. In January 2026, Anthropic CEO Dario Amodei wrote to reiterate that surveillance against US persons and autonomous weapons systems were two “bright red lines” not to be crossed, or at least topics that needed to be handled with “extreme care and scrutiny combined with guardrails to prevent abuses.” You can also read Anthropic’s self-proclaimed core views on AI safety here, as well as their LLM, Claude’s, constitution here.
I don't see the problem. He's offering a completely legal product to an eager audience. If people want to propose banning social media in some capacity, that could and should be voted on-- but Zuck isn't violating any legal or moral law I've ever heard of, and he shouldn't have to guess what products will be illegal in 20 years and preemptively withdraw them.
If it's harming your mental health, stop using it. The "Delete App" button is right there.
And just stop buying those cigarettes. This is where cultural differences matter: the US has much less concern about the negative societal impact of products than many countries, particularly its erstwhile allies. It's also precisely why it's imperative that other countries decouple from US-owned social media unless they want to import US values.
Banning something just for kids is an easy win for any politician, since that's one of the few groups that can't punish you in the next election. For that reason alone, I assume we'll get some law within 5-15 years mandating that Facebook ban kids. I assume the kids will trivially bypass the block, or switch to foreign social media, and we'll go back to business as usual.
Right-- at which point, companies like Facebook will (hopefully) have to obey the law. But we're not there yet. Currently, people are moralizing at Zuck for not voluntarily killing his own products because they're "obviously harmful."
I mean, you realise that legal over-the-counter heroin used to be a thing, right? Cigarettes are still legal. There is a gap between “obviously harmful thing is legal” and “it is ethical to make great piles of money out of selling the obviously harmful thing (to children, at that)”. The CEO of Philip Morris, say, isn’t doing anything illegal, but they are a _bad person_ who is knowingly harming society. Same for Zuckerberg.
What is a "moral" law as opposed to a "legal" one? If he is actively promoting a harmful product, I think that would fall into many people's definition of 'morally wrong'.
(I'm basing this on the headline because the article is paywalled)
A product can be helpful to one person and harmful to another. Most products are like that. All sorts of things can be addictive to some people, from potato chips to video games.
There was a major public campaign in the 1950s to ban rock & roll music, and in the 1980s to ban heavy metal. In each case, there were legions of "experts" calling those genres "harmful", and they were taken seriously -- congressional hearings were held, etc.
Point is, "promoting a harmful product" is very much in the eye of the beholder, and doesn't work as an objective moral standard.
The EU would have to put the US on a list of "foreign adversaries", with whatever political fallout comes from that. Not saying they shouldn't, but there will be downsides.
Since controlling these platforms is probably the best ROI for swinging public opinion, I'm sure it's a matter of time before they get seized (one way or another) and redistributed to reliable political allies.
The only time that's going to matter is during a high-usage event, like sports or perhaps a mass-casualty incident. At that point, the network is overloaded for the primary users anyway.
I switched over a year ago and all I can say is … it’s been excellent. $25/month per line is perfect and service is just as good as our Verizon postpaid.
My wife and I are on the same Verizon family plan. One of us can be down while the other is fine, then 30 minutes later it's the opposite. It's been like that all day.
20 years ago much less of the infrastructure of everyday life depended on an always-on network connection. Smartphones in particular were a relatively niche product. I didn’t even have a cell phone (and not because I was too young), much less expect it to work all the time.