
No, that's a version of the Basilisk that makes sense (almost – you don't need an AI for that). The original formulation was that the AI, built with the goal of [something good], would decide to torture people who didn't help build it, so that the threat of torture would encourage people in the past to build it. (Yes, this is as nonsensical as it sounds; such acausal threats only work in specific scenarios, and this isn't one of them.)

But yes, even if the Basilisk could make the threat credible (perhaps with a time machine), your strategy would still work. You can't be blackmailed by something that doesn't exist yet unless you want to be.



> The original formulation was [...] would decide to torture people

That formulation is not concerning, except to the extent that all AI is concerning due to the possibility of defects in value alignment.

> You can't be blackmailed by something that doesn't exist yet unless you want to be.

https://www.gwern.net/docs/fiction/2011-yvain-fermiparadox.h...


> Unless...and here 9-tsiak's agent-modeling systems came online...unless ve could negotiate a conditional surrender.

9-tsiak wanted to be. Or, at least, had good reason to risk the possibility.



