The point is there's zero evidence to show "AI kills everyone" is even a small risk.
By your logic, my plan to dedicate $100m to making offerings to Beelzebub to spare humanity is outstanding value for money dedicated to a neglected long-tail risk. Beelzebub has very few worshipers, and the odds of his choosing to exterminate mankind are low, but if he did, it would be very bad for us.