75. Moar Trash-talking Singularitarianism 🔗
June 15, 2021
In which I trash-talk singularitarianism again — arguing that AGI is an eschatological motte-and-bailey, that clever people overestimate the cosmic importance of cleverness, and that the circularity of the smartest people worrying about superintelligence is a kind of anthropic narcissism.
🔗
There is no AI alignment problem.
🔗
I encourage you to draw the inference that I’m not one of the smartest people you know. 🤣
🔗
AGI is the eschatological leap of faith that a series of mottes will converge to a bailey. The big achievement of the AGI thought-experiment crowd (not AI practice) is to recast their own motte-and-bailey fallacy as a "moving the goalposts" lack of vision on the part of skeptics.
🔗
It’s roughly like believing that building better and better airplanes will converge to a time machine or hyperspace drive. The adjective “general” does way too much work in a context where “generality” is a fraught matter.
I should add: I don’t believe humans are AGIs either.
🔗
In fact, I don’t think “AGI” is a well-posed concept at all. There are approximately Turing-complete systems, but assuming that the generality of a UTM relative to the clean notion of computability is the same thing as the generality of “intelligence” is invalid.
🔗
At least the really silly foundation in IQ and psychometrics is withering away. I think the Bostrom-style simulationist foundation is at least fun to think about, though even sillier taken literally. But it highlights the connection to the hard problem of consciousness.
🔗
I’ve been following this conversation since the beginning about 15 years ago, and I feel I need to re-declare my skepticism every few years, since it’s such a powerful attractor around these parts. Like periodically letting my extended religious family know I’m not religious.
🔗
It’s interesting that the AGI ideology only appeared late into the AI winter, despite associated pop-tropes (Skynet, etc.) being around much longer. AGI is a bit like the philosopher’s stone of AI. It has sparked interesting developments, just as alchemy did for chemistry.
🔗
Re: AI alignment in the more practical sense of deep-learning biases and explainability, I’ve seen nothing new on the ethics front that is not a straightforward extrapolation of ordinary risk management and bureaucratic biases.
🔗
The tech is interesting and new, as is the math. The ethics side is important, but so far nothing I’ve read there strikes me as important and new or particularly unique to AI systems. Treating it as such just drives a new obscurantism.
🔗
Previous thread about this from February. Wish I’d indexed all my threads over the past 5-6 years to see if my positions have evolved or shifted.
there are no general intelligences
🔗
But to return to the original QT, a new point I’m adding here is the non-trivial observation that those who most believe in AGIs also happen to be convinced they are the smartest people around (and apparently manage to convince some of those around them, though Matt appears to be snarking).
🔗
Circular like the anthropic principle. You notice that earth is optimized to sustain human life. Your first thought is, a God created this place just for us. Then you have the more sophisticated thought that if it weren’t Goldilocks optimal we wouldn’t be around to wonder why…
🔗
But notice that the first thought posits a specific kind of extrapolation — an egocentric extrapolation. “God” is not a random construct but an extrapolation of an egocentric self-image as the “cause” of the Goldilocks zone.
The second thought makes it unnecessary to posit that.
🔗
Flip it around to be teleological. In this case, a certain class of people does well in a certain pattern of civilization. If you assume that pattern is eternal, that class of people suggests an evolution toward an alluring god-like omega point, and a worry that machines will get there first.
🔗
But as a skeptic, you wonder… if this civilization didn’t have this pattern, these people wouldn’t be around worrying about superintelligence. Some other group would be. Top dogs always fight imaginary gods.
🔗
No accident that the same crowd is also most interested in living forever. A self-perpetuation drive shapes this entire thought space.
🔗
This world is your oyster, you’re winning unreasonably easily and feel special. You want it to continue. You imagine going from temporarily successful human to permanently successful superhuman. Attribution bias helps pick out variables to extrapolate and competitors to fear.
🔗
The alt explanation is less flattering. You’re a specialized being adapted to a specialized situation that is transient on a cosmic scale but longer than your lifespan. But it is easy and tempting to mistake a steady local co-evolution gradient (Flynn effect, anyone?) for Destiny.
🔗
I’m frankly waiting for a different kind of Singularity. One comparable to chemistry forking off from alchemy because it no longer needed the psychospiritual scaffolding of transmutation to gold or elixir of life to think about chemical reactions.
🔗
I’m glad this subculture inspired a few talented people to build interesting bleeding-edge systems at OpenAI and DeepMind. But the alchemy is starting to obscure the chemistry now.
🔗
My own recent attempt to develop a “chemistry, not alchemy” perspective
Superhistory, Not Superintelligence - by Venkatesh Rao
🔗
The many sad or unimpressive life stories of Guinness-record IQ types illustrate that intelligence has diminishing returns even in our own environment. If you think you’d be 2x more successful if you were 2x smarter, you might be disappointed.
🔗
It’s a bit like me being good at 2x2s and worrying that somebody will discover the ultimate world-destroying 2x2. Except most strengths don’t tempt you into such conceits or narcissistic projections.
🔗
Intelligence, physical strength, adversarial cunning, and beauty are among the few that do tempt people this way. Because they are totalizing aesthetic lenses on the world. When you have one of these hammers in your hand, everything looks like a nail.