This situation legitimately worries me, but it isn't even really the SkyNet scenario that I am worried about.
To self-quote a reply to another thread I made recently (https://news.ycombinator.com/item?id=47083145#47083641):
> When AI dooms humanity, it probably won't be because of the sort of malignant misalignment people worry about, but rather some silly logic blunder combined with the system being directly in control of something it should never have been given control over.

> I think we have less to worry about from a future SkyNet-like AGI than from a modern or near-future LLM, with all of its limitations, making a very bad oopsie with significant real-world consequences because it was allowed to control a system capable of real-world damage.

> I probably would have worried about this less in times past, when I believed there were adults making these decisions and the "Secretary of War" of the US wasn't someone known primarily as an ego-driven TV host with a drinking problem.