Questions: Why Do AIs Exist?

In today’s not-a-gotcha, someone questions why digisapiences (i.e., sophont AIs) exist at all, citing this passage from Stross, via Zompist.com –

We clearly want machines that perform human-like tasks. We want computers that recognize our language and motivations and can take hints, rather than requiring instructions enumerated in mind-numbingly tedious detail. But whether we want them to be conscious and volitional is another question entirely. I don’t want my self-driving car to argue with me about where we want to go today. I don’t want my robot housekeeper to spend all its time in front of the TV watching contact sports or music videos. And I certainly don’t want to be sued for maintenance by an abandoned software development project.

…on one level, this is entirely correct. Which is why there are lots and lots of non-sophont, and even sub-thinker-grade, AIs around, many of which work in the same way Karl Schroeder suggested and Stross used in Rule 34 – AI which does not perceive its self as itself:

Karl Schroeder suggested one interesting solution to the AI/consciousness ethical bind, which I used in my novel Rule 34. Consciousness seems to be a mechanism for recursively modeling internal states within a body. In most humans, it reflexively applies to the human being’s own person: but some people who have suffered neurological damage (due to cancer or traumatic injury) project their sense of identity onto an external object. Or they are convinced that they are dead, even though they know their body is physically alive and moving around.

If the subject of consciousness is not intrinsically pinned to the conscious platform, but can be arbitrarily re-targeted, then we may want AIs that focus reflexively on the needs of the humans they are assigned to — in other words, their sense of self is focussed on us, rather than internally. They perceive our needs as being their needs, with no internal sense of self to compete with our requirements.

But, you see, the hidden predicate here is that the only reason someone would possibly want to develop AI is to have servants, or rather, since you don’t want to have to pay your self-driving car or your robot housekeeper either, to have what would be slaves if they were, in fact, sophont.

This line of reasoning is both tremendously human and tremendously shitty. Or, at least, tremendously shitty is how it makes humanity look. I leave the accuracy of that judgment up to the cynicism of the reader.

That was, needless to say, not the motivation of the people behind CALLÍËNS or other sophont-AI projects in Eldraeverse history. That would be scientific curiosity and the engineering challenge. And the desire to share the universe with other minds with other points of view. And, for that matter, the same desire that has led us to fill the world with generations more of ourselves.

Or, to paraphrase it down to an answer I find myself having to give quite a lot, not everyone is motivated by the least enlightened self-interest possible.