Some of my thoughts, filtered slightly for public consumption.

Against AI Risk

If you are not familiar with the Rationalist community or AI risk, this post will mean little to you.

I keep running into the Rationalist community of late, and almost every time I encounter the idea of "AI risk". Many[1] Rationalists, seemingly very intelligent and thoughtful people[2], believe the primary threat to Humanity is the possibility of a super-intelligent AI, one so far beyond humans that the pursuit of its own goals incidentally destroys us. I believe this fixation[3] is deeply misguided.

The arguments I've seen[4] can be boiled down to two points:

  1. The danger posed by an entity is a function of how powerful it is and how misaligned its objectives are with ideal human values.
  2. A sufficiently advanced AI would be so powerful that even the slightest misalignment with human values would result in human extinction, or something very close to it.

I actually agree on both points, but disagree that they form a strong argument to worry about AI risk—at least as usually defined by Rationalists. The real AI risk isn't an all-powerful savant which misinterprets a command to "make everyone on Earth happy" and destroys the Earth. It's a military AI that correctly interprets a command to kill a particular group of people, so effectively that its masters start thinking about the next group, and the next. It's smart factories that create a vast chasm between a new, tiny Hyperclass and the destitute masses. I can't say how far off these are, but surely they are nearer than super-intelligent general AI.

Nor is AI a unique threat viewed through this lens. Technology exists to make people and institutions more powerful. This is a good thing to the extent that the people and institutions in question are "good". But AI is hardly the only technology powerful enough to turn dangerous people into existential threats. We already have nuclear weapons, which, like almost everything else, are always getting cheaper to produce. Income inequality is already rising at a breathtaking pace. The internet has given birth to history's most powerful surveillance system and tools of propaganda.

My plea to Rationalists is to consider these problems first. Technology in general poses an existential risk on a much shorter time-scale than super-intelligent AI does. We as a species will need general solutions to this problem. We will need to prevent "radicals"—for increasingly tame definitions of the term—from acquiring ever-more-common technology. At the same time we will need protection from our protectors, whose power will only increase and become easier to abuse. Society will need radical transformation. I suspect that after this transformation, AI risk will be a radically different problem, if it still exists at all.

It may be tempting to argue that even if AI risk is not one of the most important problems to work on today, it still deserves more attention than it gets now[5]. On the surface this is a reasonable argument. Of course society has room for people to consider multiple problems, and it is even healthy to do so. But I would expect this argument to be least persuasive to Rationalists. For one, while it makes sense on a societal level, on an individual level (assuming, safely, that this post does not convince everyone) it's a classic failure to treat opportunity cost as true cost. For another, I suspect that most Rationalists have a conviction that their attention is far more valuable than average. This is not a criticism; personally I find it hard to go through life believing otherwise. But this conviction carries with it a duty not to squander your attention.


  1. ^

    The weasel-word "many" is intentional, as I have no concrete idea how many Rationalists worry about AI risk. Anecdotally it seems the majority do, but my experience is colored by selection bias (often I don't know someone is a Rationalist until AI risk comes up) and confirmation bias. I would love to see statistics on this.

  2. ^

    Talking with Rationalists often feels like looking into a mirror to me—but the reflection is inexplicably off, like I've stepped into the Twilight Zone. We have the same interests and the same problems, favor the same style of writing, and gravitate towards the same media; combined, these lead to very similar worldviews. But then I catch a glimpse of something completely bizarre—polyamory, cuddle piles, or most often, AI risk. Part of my motivation for this post is an emotional desire to reconcile this unsettling reflection by correcting it, although I do not dismiss the possibility that I will be the one persuaded.

  3. ^

    I believe the sheer coolness of the idea of AI risk, combined with the (plausible) view that Rationalists are uniquely well-equipped to fight it, causes this fixation. Eliezer Yudkowsky and his Machine Intelligence Research Institute probably acted as a catalyst. Unfortunately a full examination of how AI risk came to consume the community is beyond the scope of this post.

  4. ^

    The best summary of these arguments is from Scott Alexander at SlateStarCodex, an excellent example of a smart person I agree with on almost everything, but whose concern with AI risk baffles me.

  5. ^

    In the same post, Scott Alexander notes that Floyd Mayweather was paid ten times more for a single boxing match than has ever been spent on studying AI risk. But this comparison is pointless—you have no influence over the money going to Floyd Mayweather to punch people (assuming you don't pay to watch boxing), whereas you do control whether your own money and attention go to studying AI risk.