The End of AGI + The Rise of Collective Human Intelligence
Ten years ago, when Nick Bostrom’s now-famous book, Superintelligence, was published, it made a bit of a splash. Many were the readers then, and many more are the citations now. I applaud a fellow philosopher for his commercial success, but even good things must sometimes end. My own philosophical career has been nowhere near as impactful or successful, but I nonetheless feel confident in the critique I intend to venture here. In the discussion to follow, I have the distinct advantage of being not only correct but also in possession of empirical evidence that strongly supports my claims, unlike every AGI Doom enthusiast on earth.
For Bostrom’s ideas, the problems started with ChatGPT, the runaway success built on GPT-3.5 that invaded the hearts and minds of computer users everywhere. On November 30, 2022, it swept through my social circle in the Web3 community, and by mid-2023 I was writing fairly heavy articles about it. A longtime student of consciousness and philosophy, I was in hog heaven, though not because AGI or ASI was on the way. I was excited about the humanistic implications of these technologies, and I am happy to say that the intervening time has done everything to confirm my initial read of the situation and nothing to disprove it. This essay will, as briefly as possible, explain what I’ve learned along the way.