Some of my thoughts, filtered slightly for public consumption.

Against AI Risk

If you are not familiar with the Rationalist community or AI risk, this post will mean little to you.

I keep running into the Rationalist community of late, and almost every time I encounter the idea of "AI risk". Many[0] Rationalists, seemingly very intelligent and thoughtful people[1], believe the primary threat to Humanity is the possibility of a super-intelligent AI, one so far beyond humans that the pursuit of its own goals incidentally destroys us. I believe this fixation[2] is deeply misguided.

The arguments I've seen[3] can be boiled down to two points:

  1. The danger posed by an entity is a function of how powerful it is and how misaligned its objectives are with ideal human values.
  2. A sufficiently advanced AI would be so powerful that even the slightest misalignment with human values would result in human extinction, or something very close to it.

I actually agree on both points, but disagree that they form a strong argument to worry about AI risk—at least as usually defined by Rationalists. The real AI risk isn't an all-powerful savant which misinterprets a command to "make everyone on Earth happy" and destroys the Earth. It's a military AI that correctly interprets a command to kill a particular group of people, so effectively that its masters start thinking about the next group, and the next. It's smart factories that create a vast chasm between a new, tiny Hyperclass and the destitute masses. I can't say how far off these are, but surely they are nearer than super-intelligent general AI.

Nor is AI a unique threat viewed through this lens. Technology exists to make people and institutions more powerful. This is a good thing to the extent that the people and institutions in question are "good". But AI is hardly the only technology powerful enough to turn dangerous people into existential threats. We already have nuclear weapons, which, like almost everything else, are always getting cheaper to produce. Income inequality is already rising at a breathtaking pace. The internet has given birth to history's most powerful surveillance system and tools of propaganda.

My plea to Rationalists is to consider these problems first. Technology in general poses an existential risk on a much shorter time-scale than super-intelligent AI does. We as a species will need general solutions to this problem. We will need to prevent "radicals"—for increasingly tame definitions of the term—from acquiring ever-more-common technology. At the same time we will need protection from our protectors, whose power will only increase and become easier to abuse. Society will need radical transformation. I suspect that after this transformation, AI risk will be a radically different problem, if it still exists at all.

It may be tempting to argue that even if AI risk is not one of the most important problems to work on today, it still deserves more attention than it gets now[4]. On the surface this is a reasonable argument. Of course society has room for people to consider multiple problems, and it is even healthy to do so. But I would expect this argument to be least persuasive to Rationalists. For one, while it makes sense on a societal level, on an individual level (assuming, safely, that this post does not convince everyone) it's a classic failure to treat opportunity cost as true cost. For another, I suspect that most Rationalists have a conviction that their attention is far more valuable than average. This is not a criticism; personally I find it hard to go through life believing otherwise. But this conviction carries with it a duty not to squander your attention.