This is the transcription of a talk by Maciej Cegłowski that I’ve dug out and re-read over and over again since he gave it in 2016. He addresses the question of whether an artificial intelligence will be developed that far surpasses our own intelligence and, if so, whether that will mean the destruction of humanity. It’s a question that has absorbed and terrified some notable names in the world of technology:

“The computer that takes over the world is a staple scifi trope. But enough people take this scenario seriously that we have to take them seriously. Stephen Hawking, Elon Musk, and a whole raft of Silicon Valley investors and billionaires find this argument persuasive.”

Cegłowski then proceeds to set fire to the arguments in favour of superintelligence in a way that is both straightforward and provocative. (I particularly like “the argument from Slavic pessimism”.)