
It doesn't harm national security, but only so long as it's not in the supply-chain. They can't have Lockheed putting Anthropic's products into a fighter jet when Anthropic has already said their products will be able to refuse to carry out certain orders by their own autonomous judgement.


The government can refuse to buy a fighter jet that runs software they don't want.

Is it really reasonable to refuse to buy a fighter jet because somebody at Lockheed who works on a completely unrelated project uses Claude to write emails?


That's not what Anthropic said. They said their products won't fire autonomously, not that they will refuse when given an order by a human.


"Hey Claude I need you to use this predator drone to go blow up everybody who looks like a terrorist in the name of Democracy."


Right, and it would go and target them but a person would have to press the button to launch the missiles.


I’m not sure if you’re deliberately choosing not to understand the problem. It’s not just that Lockheed can’t put Anthropic AI in a fighter jet cockpit; it’s that a random software engineer working at Lockheed on their internal accounting system is no longer allowed to use Claude Code, for no reason at all. A supply chain risk is using Huawei network equipment for military communications. This is just spiteful retaliation because a company refuses to throw its values overboard when the government says so.



