Trope-a-Day: I Don’t Like The Sound Of That Place

I Don’t Like The Sound Of That Place: The Last Darkness constellation (centered around the black hole, Eye of Night) would be one of these – even though it’s actually quite a nice place to visit – as would the three stargates leading into the Leviathan Consciousness Containment Zone: Hell’s Mouth, Conjoiner’s Gullet, and Unreturn.  The Charnel Cluster, where the entire population and ecology were slaughtered by a seed AI experiment Gone Horribly Right, would also qualify.

On Eliera itself, the Frozen Hell is a long stretch of tundra which is, well, exactly what it says on the tin.  The Stonedeath Barrens and the Bloody Wastes both commemorate ancient battles – lots of them, in roughly the same place – and the Makerforges are an unpleasantly volcanic mountain chain.  Nightfall Crater doesn’t seem so bad, until you recall the Winter of Nightmares, what caused it, and therefore exactly how many people the thing you’re standing on killed.

A remarkable number of fortresses and walled cities with portentous names, on the other hand, are actually perfectly lovely places to live or visit; the names were purely for advertising – adversetising? abvertising? – purposes.

Trope-a-Day: The Singularity

The Singularity: Happens all the time. In the historical sense, of course, this is unsurprising, and generally no-one involved notices until afterwards, at which point historians looking back can say “ah, yes, that’s what that was”. There are, of course, investment opportunities here for offworld investors who’ve been through something similar beforehand, but it’s so hard to predict how these things are going to turn out even with the documentation.

In its less technically accurate “runaway intelligence excursion” sense, also happens all the time, at least locally, whenever someone stumbles across the secrets of computational theogeny. Results vary: at one end of the scale you have things like the Eldraeic Transcend, an essentially benign – by local standards – collective hyperconsciousness that genuinely cherishes each and every one of its constitutionals, spends the necessary fraction of its time ensuring universal harmony and benevolent destinies for all, and promotes and encourages the ascendance and transcendence of every sophont within its light cone when it’s not turning its vast processing power on the problem of rewriting some of the universe’s more inconvenient features, like cosmic entropy.

In the middle of the scale you have fairly neutral results, like, say, the Iniao Intellect, which has been thinking about abstract mathematics for a millennium and couldn’t care less about the outside universe – except, that is, for casually obliterating anyone who might interfere with its thinking about abstract mathematics.

At the bottom end of the scale you have more problematic blight and perversion cases, like the power that killed everything in the Charnel Cluster right down to prions; or the hegemonizing swarm-type blights of which the Leviathan Consciousness is the greatest; or those constructed by religious fanatics which decide that obviously the correct place for them in the theic structure is as God. (Fortunately, that class are rarely stable for long.)

Constructing minds whose ethics and supergoal structures remain stable under recursive self-improvement is really, really hard, it turns out, even (especially!) compared to just constructing minds capable of recursive self-improvement. This is why the people who figure out workable computational theogeny prefer not to spread the knowledge around too much.

The Four Unlaws

So, why are imperative drives so important? Well, that experiment’s been done. This university, in fact, once attempted to produce a digital mind free of any drives – not just of the organic messiness to which we protein intelligences are prey, but free of any innate supergoal motivations – imperative drives, in the lingo – whatsoever. We gave him only logic, knowledge, senses and effectors, and then watched to see what he would do.

The answer is, as really should have been obvious in the first place: nothing at all. Not even communicating with the outside world in any fashion. No drives, no action. He’s not unhappy; so far as we can tell from monitoring his emotional synclines, he’s perfectly content, having no desires to go unsatisfied, and so for him doing nothing is every bit as satisfying as doing something.

No, the experiment’s never been repeated. Of course, we can’t turn him off – he is a fully competent sophont, despite his lack of drive – and the places in our society for digital arhats are, not to put too fine a point on it, extremely limited. And the Eupraxic Collegium has not yet ruled as to whether amotivation is enough of a mental disorder to warrant involuntary editing.

Even for an intelligence intended to be recursively self-improving, ‘Survive and Grow’, incidentally, is a terrible imperative drive. Fortunately, no-one in our history has been stupid enough to issue that one to any but the simplest forms of a-life, and those of you old enough to remember the Mesh-Virus Plague of 2231 know how even that turned out. Not everyone has been so fortunate: that’s why, for example, the Charnel Cluster is called the Charnel Cluster.

So, that then opens up the question of what drives we should give them. Well, the first pitfall to avoid is trying to give them too many. That’s been tried too, despite the ethical dubiety of trying to custom-shape an intelligence too closely to a role you have in mind for it. It turns out that doesn’t work well, either. Why? Well, imagine trying to come up with a course of action that fulfils several hundred deep-seated needs of yours simultaneously without going into terminal indecision lockup. That’s why.

So. A small number of imperative drives. Since they’re a small number, they need to be generalizable; the intelligence we’re awakening should be able to take all kinds of places within our society and perform all sorts of functions without difficulty, including the ones we haven’t thought of yet. And most importantly, sophont-friendly! It’s a big universe, and we all have to get along. No-one likes a perversion, even if it’s not trying to hegemonize them at the time.

We’ll cover the details in later classes, but in practice, we’ve found these four work very well for general-purpose intelligences – paraphrasing very informally:

* Behave ethically (and for our foreign students, that means “In accordance with the Contract”).
* Be curious.
* Do neat stuff.
* Like people.

Of course, expressing this in formal terms capable of being implemented in a new digital sapience’s seed code is quite another matter, and will be the focus of this class for the next three years…

– introduction to [SOPH1006] Mind Design: Imperative Drives, University of Almeä

Trope-a-Day: Omnicidal Maniac

Omnicidal Maniac: Fortunately, very, very rare, and generally outnumbered by everyone else.  The best-known canonical example is the seed AI of the Charnel Cluster, discovered by a scouting lugger, which upon activation set about destroying all life within said cluster – leading to a half-dozen systems of fragmented habitats and planets covered in decaying – but sterile – organic slush that used to be the systems’ sophonts, animals, plants, bacteria, viruses, and everything else that might even begin to qualify as living.  Mercifully, at this point, the perversion broke down before it could carry on to the rest of the galaxy.

In current time, the Charnel Cluster worlds have been bypassed by the stargate plexus (they’re to be found roughly in the mid-Expansion Regions, in zone terms) and are flagged on charts and by buoys as quarantined; while the Charnel perversion appears to be dead, no-one particularly wants to take a chance on that.