Like his projected Friendly AIs, Yudkowsky is a moral utilitarian: He believes that the greatest good for the greatest number of people is always ethically justified, even if a few people have to die or suffer along the way. I worry less about Roko’s Basilisk than about people who believe themselves to have transcended conventional morality.
TDT has some very definite advice on Newcomb’s paradox: Take Box B. (My wife once declared herself a one-boxer, saying, “I trust the computer.”) The rationale for this eludes easy summary, but the simplest argument is that you might be in the computer’s simulation. In order to make its prediction, the computer would have to simulate the universe itself. So you, right this moment, might be in the computer’s simulation, and what you do will impact what happens in reality (or other realities). So take Box B and the real you will get a cool million. Even if the alien jeers at you, saying, “The computer said you’d take both boxes, so I left Box B empty! Nyah nyah!” and then opens Box B and shows you that it’s empty, you should still only take Box B and get bupkis. (I’ve adopted this example from Gary Drescher’s Good and Real, which uses a variant on TDT to try to show that Kantian ethics is true.)
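The expected-value arithmetic behind that advice is easy to sketch. The snippet below is not from the article; it is a minimal illustration that assumes the standard Newcomb payoffs ($1,000 in the visible box, $1,000,000 in Box B) and treats the predictor’s accuracy as a free parameter, just to show where one-boxing starts to pay.

```python
# Back-of-the-envelope Newcomb's paradox payoffs (illustrative assumptions only).
# Box A always holds $1,000; Box B holds $1,000,000 only if the predictor
# foresaw a one-boxer. Predictor accuracy is an assumed free parameter.

BOX_A = 1_000
BOX_B = 1_000_000

def expected_payoff(strategy: str, accuracy: float) -> float:
    """Expected winnings for 'one-box' or 'two-box' given predictor accuracy."""
    if strategy == "one-box":
        # Box B is full only when the predictor correctly foresaw one-boxing.
        return accuracy * BOX_B
    if strategy == "two-box":
        # You always pocket Box A; Box B is full only when the predictor erred.
        return BOX_A + (1 - accuracy) * BOX_B
    raise ValueError(f"unknown strategy: {strategy}")

if __name__ == "__main__":
    for acc in (0.5, 0.9, 0.99, 1.0):
        one = expected_payoff("one-box", acc)
        two = expected_payoff("two-box", acc)
        print(f"accuracy={acc:.2f}  one-box=${one:,.0f}  two-box=${two:,.0f}")
```

Under these assumptions the break-even point is a predictor accuracy of roughly 50.05 percent; anywhere above that, the expected value of one-boxing dwarfs the guaranteed extra $1,000, which is the one-boxer’s intuition in a nutshell. Causal decision theorists still two-box, since the boxes are already filled by the time you choose.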
The maddening conflict between free will and godlike prediction has not led to any resolution of Newcomb’s paradox, and people will call themselves “one-boxers” or “two-boxers” depending on where they side.
The LessWrong community is concerned with the future of humanity, and in particular with the singularity: the hypothesized future point at which computing power becomes so great that superhuman artificial intelligence becomes possible, as does the capability to simulate human minds, upload minds to computers, and more or less allow a computer to simulate life itself. The term was coined in 1958 in a conversation between mathematical geniuses Stanislaw Ulam and John von Neumann, where von Neumann said, “The ever accelerating progress of technology … gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” Futurists like science-fiction writer Vernor Vinge and engineer/author Kurzweil popularized the term, and, as with many interested in the singularity, they believe that exponential increases in computing power will cause the singularity to happen very soon, within the next 50 years or so. Kurzweil is chugging 150 vitamins a day to stay alive until the singularity, while Yudkowsky and Peter Thiel have enthused about cryonics, the perennial favorite of rich dudes who want to live forever. “If you don’t sign up your kids for cryonics then you are a lousy parent,” Yudkowsky writes.

Roko’s Basilisk was a thought experiment so dangerous that merely thinking about it was hazardous not only to your mental health, but to your very fate. Yudkowsky ended up deleting the thread completely, thus assuring that Roko’s Basilisk would become the stuff of legend.
Yudkowsky said that Roko had already given nightmares to several LessWrong users and had brought them to the point of breakdown.