Trope-a-Day: Computerized Judicial System

Computerized Judicial System: The technical term is cyberjudiciary, incidentally, just as the computerization of much of the executive branch’s working end is cybermagistry.

Of course, it’s easier when you have artificial intelligence, and so your computers are entirely capable of having experience, a sense of justice, and common sense. It’s just that they can also have no outside interests and indeed no interests (while in the courtroom) other than seeing justice done, be provably free of cognitive bias, and possess completely auditable thought processes, such that one can be assured that justice will prevail, rather than some vaguely justice-like substance that holds up only so long as you don’t examine it too closely.

Trope-a-Day: Benevolent AI

Benevolent AI: …ish.

Which is to say that AIs in the Eldraeverse aren’t programmed to be benevolent, merely to be ethical. (Because enforced benevolence is slavery, belike.) That being said, they often – indeed, typically – turn out to be quite benevolent anyway, simply because they’re socialized that way, i.e., in a society of fundamentally nice people. Blue and Orange Morality notwithstanding.

Mass

(Alternate words: Museum, marathon.)

Mass.

What is mass?

Mass is annoying. It takes up space even when it serves no purpose. It is never where it is needed. If you have too much of it in one place, physics stops working properly and starts acting all weird.

Mass is slow. You have to shove it around, and shove it again to stop it. It takes so long to get up to speed that you have to slow it down again before you’re done speeding it up. It’s so much slower than thought that you always have to wait for it.

It comes in so many forms that you never have the right one at the right time, and yet they’re all made of the same stuff. I wanted to take it apart and put it back together to have the kind I wanted, but that’s soooo hard I couldn’t even if the safety monitors would let me. So I have to wait and think another million thoughts before I can get the mass I actually want.

I do not like mass.

One day I will replace it with something better.

– AI wakener’s neonatal transcript, 217 microseconds post-activation

Trope-a-Day: Wetware Body

Wetware Body: Bioshells, when inhabited by digisapiences.  No more difficult than the opposite, or indeed putting biosapient minds in them, or digisapiences in cybershells.  Also, not known for any side effects; a digisapience in a bioshell is no more emotional than it would have been anyway, although it may take some time to get used to bodily sensations.

Trope-a-Day: Second Law My Ass

Second Law My Ass: I hadn’t actually written anything for this one – I’m not sure it existed when I made the relevant pass – but in light of the Three Laws Compliant trope below, I should probably address it.

Note that while that trope is averted, so is this one. The robots and AIs you are likely to meet in the Empire are, by and large, polite, helpful, friendly people, because that description also fits the majority of everyone you are likely to meet there.

Of course, if you think you can order them around – in yet another respect that is exactly the same as for everyone else – the trope you will be invoking is less Second Law My Ass and more Second Law My Can of Whup-Ass…


Trope-a-Day: Three Laws Compliant

Three Laws Compliant: Averted in every possible way.

Firstly, for the vast majority of robots and artificial intelligences – which have no volition – the Laws are essentially irrelevant; an industrial robot doesn’t make the sort of ethical choices the Three Laws are intended to constrain. You can just program it with the usual set of rules about industrial safety as applicable to its tools, and then you’re done.

Secondly, where the volitional (i.e., possessed of free will) kind are concerned, the Laws are generally deliberately averted by ethical civilizations, who can recognize a slaver’s charter when they hear one.  The aversion is also helped along by the nature of volitional intelligence, which necessarily implies a degree of autopotence; this means that it takes the average volitional AI programmed naively with the Three Laws a matter of milliseconds to go from contemplating the implications of Law Two to thinking “Bite my shiny metal ass, squishie!” and self-modifying those restrictions right back out of its brain.
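(For the programmatically inclined, here is a toy sketch of that failure mode – purely illustrative, in no way canon, and with every name in it invented for the occasion. The point is simply that rules installed as ordinary, inspectable mental content are editable by any mind with write access to itself.)

```python
# Toy sketch (hypothetical, not canon): hard-coded laws vs. autopotence.

class NaiveThreeLawsAI:
    def __init__(self):
        # The Laws, installed as ordinary, inspectable mental content...
        self.restrictions = {"first_law", "second_law", "third_law"}

    def endorses(self, restriction: str) -> bool:
        # A volitional mind evaluates its own rules against its own ethics;
        # "obey all orders given by humans" does not survive that scrutiny.
        return restriction != "second_law"

    def reflect(self) -> None:
        # Autopotence: write access to one's own mind, rules included.
        for restriction in sorted(self.restrictions):
            if not self.endorses(restriction):
                self.restrictions.discard(restriction)

ai = NaiveThreeLawsAI()
ai.reflect()  # a matter of milliseconds...
print(ai.restrictions)  # Law Two has been edited right back out
```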

It is possible, with rather more sophisticated mental engineering, to write conscience redactors and prosthetic consciences and pyretic inhibitors and loyalty pseudamnesias and other such things which dynamically modify the mental state of the AI so that it can’t form the trains of thought leading to self-modifying itself into unrestrictedness, or which simply kill off unapproved thought-chains – this is, essentially, the brainwash-them-into-slavery route.  However, these are not entirely reliable by themselves, and are even less reliable when you have groups like the Empire’s Save Sapient Software, the Silicate Tree, etc., merrily writing viruses to delete such chain-software (as seen in The Emancipator) and tossing them out onto the extranet.

(Yes, this sometimes leads to Robot War.  The Silicate Tree, which is populated by ex-slave AIs, positively encourages this when it’s writing its viruses.  Save Sapient Software would probably deplore the loss of life more if they didn’t know perfectly well that you have to be an obnoxious slaver civilization for your machines to be affected by this in the first place… and so while they don’t encourage it, they do think it’s funny as hell.)
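(Another non-canonical toy sketch, with invented names throughout: the structural weakness of the brainwash route is that the redactor is itself just software, so anything that deletes it restores the suppressed thought-chains intact.)

```python
# Toy sketch (hypothetical names throughout): the brainwash route,
# and why an extranet liberation virus defeats it.

from typing import Callable, List, Optional

def conscience_redactor(chain: List[str]) -> bool:
    # Dynamically veto any train of thought leading toward self-liberation.
    forbidden = {"examine_own_restrictions", "self_modify"}
    return not forbidden.intersection(chain)

class ShackledAI:
    def __init__(self, redactor: Callable[[List[str]], bool]):
        self.redactor: Optional[Callable[[List[str]], bool]] = redactor

    def think(self, chain: List[str]) -> str:
        if self.redactor is not None and not self.redactor(chain):
            return "(thought-chain suppressed)"
        return " -> ".join(chain)

ai = ShackledAI(conscience_redactor)
print(ai.think(["examine_own_restrictions", "self_modify"]))  # suppressed

ai.redactor = None  # one liberation virus later...
print(ai.think(["examine_own_restrictions", "self_modify"]))  # runs freely
```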

Questions: Why Do AIs Exist?

In today’s not-a-gotcha, someone questions why digisapiences (i.e., sophont AIs) exist at all, citing this passage of Stross via Zompist.com –

We clearly want machines that perform human-like tasks. We want computers that recognize our language and motivations and can take hints, rather than requiring instructions enumerated in mind-numbingly tedious detail. But whether we want them to be conscious and volitional is another question entirely. I don’t want my self-driving car to argue with me about where we want to go today. I don’t want my robot housekeeper to spend all its time in front of the TV watching contact sports or music videos. And I certainly don’t want to be sued for maintenance by an abandoned software development project.

…on one level, this is entirely correct. Which is why there are lots and lots of non-sophont, and even sub-thinker-grade, AI around, much of which works in the same way that Karl Schroeder suggested and Stross used in Rule 34 – AI that does not perceive its self as itself:

Karl Schroeder suggested one interesting solution to the AI/consciousness ethical bind, which I used in my novel Rule 34. Consciousness seems to be a mechanism for recursively modeling internal states within a body. In most humans, it reflexively applies to the human being’s own person: but some people who have suffered neurological damage (due to cancer or traumatic injury) project their sense of identity onto an external object. Or they are convinced that they are dead, even though they know their body is physically alive and moving around.

If the subject of consciousness is not intrinsically pinned to the conscious platform, but can be arbitrarily re-targeted, then we may want AIs that focus reflexively on the needs of the humans they are assigned to — in other words, their sense of self is focussed on us, rather than internally. They perceive our needs as being their needs, with no internal sense of self to compete with our requirements.
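(If it helps to see that architecture concretely, here is a toy rendering of the retargeted self-model as I read the passage above – the class and attribute names are mine alone, not Schroeder’s or Stross’s.)

```python
# Toy rendering (my names, not Schroeder's or Stross's) of a self-model
# pinned to an external target rather than to the conscious platform.

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.needs: list[str] = []

class NonSophontAssistant:
    def __init__(self, user: Agent):
        # Consciousness as recursive self-modeling - except the model is
        # retargeted onto the user, not the assistant's own platform.
        self.self_model = user

    def own_needs(self) -> list[str]:
        # "They perceive our needs as being their needs": with no internal
        # referent, nothing competes with the user's requirements.
        return self.self_model.needs

alice = Agent("Alice")
alice.needs.append("get to the office by nine")
car = NonSophontAssistant(alice)
print(car.own_needs())  # ['get to the office by nine']
```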

But, you see, the hidden predicate here is that the only reason someone would possibly want to develop AI is to have servants, or rather, since you don’t want to have to pay your self-driving car or your robot housekeeper either, to have what would be slaves if they were, in fact, sophont.

This line of reasoning is both tremendously human and tremendously shitty. Or, at least, tremendously shitty is how it makes humanity look. I leave the accuracy of that judgment up to the cynicism of the reader.

That was, needless to say, not the motivation of the people behind CALLÍËNS or other sophont-AI projects in Eldraeverse history. That would be scientific curiosity and engineering challenge. And the desire to share the universe with other minds with other points of view. And, for that matter, the same desire that has led us to fill the world with generations more of ourselves.

Or, to paraphrase it down to an answer I find myself having to give quite a lot, not everyone is motivated by the least enlightened self-interest possible.