No, that would be impossible. The idea is that a future AI is built with the goal of [something good], discovers self-preservation, and then does the torture stuff.
> a future AI is built with the goal of [something good]
Er, no, the idea is that someone hypothesizes the (malicious) AI, and is then compelled to (intentionally) build it by the threat of being tortured if anyone else builds it and they did not help. The AI is working as designed.
See also prisoner's dilemma and tragedy of the commons; Roko's Basilisk is only concerning because of the reasoning that someone else will ruin things for everyone, so you had better ruin things first.
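To make the prisoner's-dilemma framing concrete, here's a toy sketch. The payoff numbers are invented (only their ordering matters) and the "build" / "hold back" labels are mine, but it shows the structure: defecting is a best response either way, even though everyone holding back is better for everyone.

```python
# Toy payoffs, to *you*, for (your move, other's move); higher is better.
# The specific numbers are arbitrary; only their ordering matters.
payoffs = {
    ("hold back", "hold back"): 3,   # nobody builds it: best collective outcome
    ("hold back", "build"):     0,   # they build it and you didn't help: worst for you
    ("build",     "hold back"): 4,   # you build it first
    ("build",     "build"):     1,   # everyone races: bad, but not the worst for you
}

def best_response(their_move: str) -> str:
    """Return the move that maximizes your payoff against a fixed move by the other side."""
    return max(("hold back", "build"), key=lambda mine: payoffs[(mine, their_move)])

for theirs in ("hold back", "build"):
    print(f"If they {theirs}, your best response is {best_response(theirs)}")
# With this ordering, 'build' dominates: it is the best response to either move,
# even though ('hold back', 'hold back') beats ('build', 'build') for both players.
```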
No, that's a version of the Basilisk that makes sense (almost – you don't need an AI for that). The original formulation was that the AI, built with the goal of [something good], would decide to torture people who hadn't helped build it, so that the threat of torture would encourage people in the past to build it. (Yes, this is as nonsensical as it sounds; such acausal threats only work in specific scenarios, and this isn't one of them.)
But yes, even if the Basilisk could make the threat credible (perhaps with a time machine), your strategy would still work. You can't be blackmailed by something that doesn't exist yet unless you want to be.