3) is a perfectly reasonable objection, and I can understand why people say "it's too far in the future; we have more pressing issues right now". But 2), "dumb and inept enough to interpret its instructions absolutely literally, like a magic genie", is not a reasonable objection unless you have some compelling reason to believe the orthogonality thesis is false. Why should an AI care about what we want from it, unless we're exceedingly careful to program it so that its utility function is perfectly aligned with human desires? And is "exceeding care" a feature, now or ever, of how we approach AI engineering?
Even if the smartest hypothetical AI can perfectly extrapolate the mental states of every human who ever lived, we still die in a cloud of nanobots if it isn't programmed ever-so-carefully to care about what we want.