3 Comments
Robot Parrot

One very simple reason: the future is an exponentially branching tree along many dimensions that matter for AI. Two people can agree only if they are thinking about the same thing. But if the future branches exponentially in important ways, the probability could be ~0% that _anyone else_ is actually thinking about the same scenario as you.

(This is a worst-case extreme: realistically, in most cases at least some people will be thinking about the same scenario as you. But it hopefully illustrates the fundamental dynamic.)
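
To make the dynamic concrete, here is a toy birthday-problem sketch. The branching factor, number of junctures, head-count, and the uniform-sampling model below are all invented assumptions for illustration, not anything from the thread: if each of d important junctures can go b different ways, there are b^d distinguishable scenarios, and the chance that any two of N people land on the same one collapses toward zero once b^d dwarfs N^2.

```python
def p_any_overlap(num_scenarios: int, num_people: int) -> float:
    """Birthday-problem probability that at least two people end up
    focused on the same scenario, assuming each person picks one
    scenario uniformly at random (a simplifying assumption)."""
    p_all_distinct = 1.0
    for i in range(num_people):
        p_all_distinct *= (num_scenarios - i) / num_scenarios
    return 1.0 - p_all_distinct


# Made-up illustrative numbers, not anything from the thread:
branching_factor = 4        # ways each important juncture could go
num_junctures = 20          # number of important junctures
num_people = 10_000         # people seriously gaming out AI scenarios

num_scenarios = branching_factor ** num_junctures   # 4**20 ~ 1.1e12
print(f"distinct scenarios: {num_scenarios:.2e}")
print(f"P(any two people share a scenario): "
      f"{p_any_overlap(num_scenarios, num_people):.2e}")
# Prints roughly 4.5e-05 with these numbers: near-zero chance of overlap.
```

Real people of course don't sample scenarios uniformly (hence the caveat above), but unless attention is extremely concentrated, the exponential denominator dominates.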

Robot Parrot

I suspect the way to get around this is to _reach back into the past_: when someone expresses concern about an AI scenario, go and learn _everything they say is relevant to why they consider that scenario in the first place_. Converging on which future scenarios are taken seriously requires converging on our understanding of history (easier said than done).

Robot Parrot

(Implicit assumption here: if two people have _exactly the same_ understanding of history, including all research that’s been done in the past on economics, political science, et cetera, then they will find exactly the same future scenarios plausible.)
