A Note and Some Questions

First, the note, which is regarding Fan. As I commented over on G+:

So, the worst part is, I wrote this partly because it seemed like a good application of the words, and partly because it was an idea stuck in my brain that needed to be written down so it could be moved out of my brain.

…and then my obsessive worldbuilding tendencies kicked in…

…and now I have a pile of detail on how everything works and maybe half a dozen subsequent chapters outlined in my head.

This plan did not go to plan.

(That said, the biggest problem with this crossover is finding much in the way of plot-driving conflict, inasmuch as the nature of the universe-chunks in question tends to drive with considerable rapidity towards “And then, because everyone was reasonable and basically good-hearted, everything worked out well and there were hugs and treaties and parties and awesome technomagic and a little xenophilia [but not the creepy kind] thereafter, forever and a day.”)

…all of which boils down to, so, I am very tempted to continue this (working title: Friendship is Sufficiently Advanced) because I hate to waste perfectly good ideas and my muse insisteth and graaaaaagh. Especially if there’s interest in me so doing.

Under certain conditions, though. Starting with a very limited update rate, no more than monthly at most, because I have no intention to let fanfiction writing take any serious time away from fiction writing, dammit. And being published over on FIMFiction rather than here, because, again, one is fiction and one is fanfiction and I should probably not cross the streams. Bad form, and all that.

And yet.

Hmph.

Okay. And now for the questions, in which I answer a bunch that came in over the last month or so:

Much has been said (in Trope-a-Days such as Everyone Is Armed and Disproportionate Retribution, among others) about the rights and responsibilities of everyone to defend themselves and others against coercion, but how does Imperial law and custom deal with the two complicating factors of:

1. Collateral damage (where either party causes damage to some unrelated third party’s property during the incident), and

2. Honest mistakes (where the alleged aggressor wasn’t actually performing any sort of violation, but the respondent can answer honestly that they only acted because they thought one was taking place)?

Quite simply, actually!

Collateral damage is assessed in much the same way as, say, car insurance claims – although in this case it’s the court’s job to decide who’s at fault and how much. There is, of course, a certain presumption that the person who caused the whole incident will usually be the one at fault: if you shoot someone’s garden gnome while attempting to stop a robber because the robber dodged, that’s on the robber’s bill. You mostly have to worry if you’re clearly, negligently overkilly: if you hose down their entire garden with a machine-gun to save yourself the trouble of aiming, that’s on yours. (Actually, in that specific case, probably so’s a psych eval, but the principle is the same.)

As for honest mistakes: well, Imperial law is very clear about dividing the reparative from the other parts of the judgment. That’s what the levels of intent are for. If you wind up here, then you still have to pay the recompense and the weregeld, because what happened, happened (analogous to the case in which your tree falls on your neighbor’s car: you’re liable even though you aren’t guilty of anything). But you aren’t criminally liable unless it genuinely wasn’t reasonable for you to believe that you had to act, or at worst you were negligently uninformed.

Do the Eldrae provide citizens with a universal basic income?

Not by that name. There is, however, the Citizen’s Dividend – which is exactly what it sounds like, because the Empire is, after all, the Imperium Incorporate, and its citizens are also its shareholders. It’s the return on investment of governance operations, which are, naturally enough, run profitably.

It’s been allowed to grow to the point where it functions as one, and a rather generous one at that (see for details: No Poverty), but it’s not a charitable giveaway, or some sort of redistribution. It’s a perfectly legitimate return on investment.

Is there any real need for sentients, be they biological or cyber, to work when nearly everything could be automated and run by non-sentient AI?

What is work like for the Eldrae if they do work?

Well, yes, there’s a need in the fields of policy, creativity, research, and desire. Non-sophont machines have very limited imaginations. More importantly, while an autofac can make anything you care to devise and sufficient expediters can do most things you can ask for, they can’t want for you. The most they can do is anticipate what you want.

(And there’s the luxury premium on handmade goods, which also covers things like ‘being bored of eating the same damn perfect steak over and over and over again’. And then, of course, there are those professions that intrinsically require sophont interaction.)

But most importantly, there’s this.

Purpose!

…or as they would put it, either or both of valxíjir (uniqueness, excellence, will to power, forcible impression of self onto the universe) or estxíjir (wyrd, destiny, devotion-to-ideals, dharma). (More here.)

An eldrae who doesn’t have some sort of driving obsession (be it relatively trivial by our standards – there are people whose avowed profession of the moment is something like ‘designer of user interfaces for stockbrokers for corporations banking with player-run banks in Mythic Stars’, or, heh, ‘fanfic writer’, and who make good money at it – or a drive for deeds of renown without peer) is either dead or deeply, deeply broken psychologically.

To be is to do. The natural state of a sophont is to be a verb. If you do nothing, what are you?

(This is why, say, the Culture is such a hideous dystopia from their perspective. With the exception of those individuals who have found some self-defined purpose, like, say, Jernau Morat Gurgeh, it’s an entire civilization populated by pets, or worse, zombies. Being protein hedonium is existing. It ain’t living.)

As for what work’s like – well, except for those selling their own products directly to the customer, I refer you here, here, and here.

On a slightly less serious note: How many blades did eldraeic razors get up to before they inevitably worked out some way to consciously limit and / or modulate their own facial hair growth?

No count at all. Disposable/safety razors never achieved much traction in that market, being such a tremendously wasteful technology, and thus not their sort of thing at all.

Now, straight razor technology, that had moved on to unimaginably sharp laser-cut obsidian blades backed by flexible morphic composite – and lazors, for that matter – by the time they invented the α-keratin antagonists used in depilatory cream.

How bad have AI blights similar to this one [Friendship is Optimal] gotten before the Eldrae or others like them could, well, sterilize them? Are we talking entire planets subsumed?

The biggest of them is the Leviathan Consciousness, which chewed its way through nearly 100 systems before it was stopped. (Surprisingly enough, it’s also the dumbest blight ever: it’s an idiot-savant outgrowth of a network optimization daemon programmed to remove redundant computation. And since thought is computation…)

It’s also still alive – just contained. Even the believed-dead ones are mostly listed as “contained”, because given how small resurrection seeds can be and how deadly the remains can also be, no-one really wants to declare them over and done with until forensic eschatologists have prowled every last molecule.

Given that, as you said earlier, Souls Are Software Objects, have any particularly proud and ambitious individuals tried essentially turning themselves into seed AIs instead of coding one up from scratch?

So has anyone been proud / egotistical / crazy enough to try to build their own seed AI based not on some sort of abstract ideological or functional proposition, but simply by using their own personality pattern as the starting point to see what happens?

It’s been done.

It’s almost always a terrible idea. Evolved minds are about as far from ‘stable under recursive self-improvement’ as you can get. There’s absolutely no guarantee that what comes out will share anything in particular with what goes in, and given the piles of stuff in people’s subconscious, it may well be a blight. If you’re lucky and the universe isn’t, that is – much more likely is that the mind will undergo what the jargon calls a Falrann collapse under its own internal contradictions and implode into a non-coherent cognitive ecology in the process of trying.

The cases that can make it work involve radical cognitive surgery, which starts with unicameralization (which puts a lot of people off right away, because there’s a reason they don’t go around introspecting all the time) and gets more radical from there. By the end of which you’re functionally equivalent to a very well-designed digisapience anyway.

In reference particularly to “Forever“:

Let’s imagine a Life After People scenario where all sophont intelligence in the Associated Worlds simply disappears “overnight.” What’s going to be left behind as “ineffable Precursor relics” for the next geologic-time generation? How long can a (relatively) standard automated maintenance system keep something in pristine condition without sophont oversight before it eventually breaks down itself?

That’s going to depend on the polity, technological levels varying as they do. For the people at the high end, you’re looking at thousands to tens of thousands of years (per: Ragnarok Proofing) before things start to go, especially since there are going to be automated mining and replenishment systems running under their default orders, ensuring that the manufacturing supply chain keeps going.

Over megayears – well, the problem is that it’s going to be pretty random, because what’s left is going to depend on a wide variety of phenomena – solar megaflares, asteroid impacts, major climate shifts, gamma-ray bursts, supernovae, Yellowstone events, etc., etc., with 10,000-year-plus MTBEs that eventually take stuff out by exceeding all the response cases at once.

Is nostalgia much of a problem with the Eldrae?

(w.r.t. Trope-a-Day: Fan of the Past)

Not really. Partly that’s because they’re rather better, cognitive-flaw-wise, at not reverse-hyperbolic-discounting the past, but mostly it’s because the people who remembered the good things in the past – helped by much slower generational turnover – took pains to see they stayed around in one form or another. Their civilization, after all, was much less interrupted than ours. There’re some offices that have been in continuous use for longer than we’ve had, y’know, writing, after all.

(It makes fashion rather interesting, in many cases.)
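(For the curious, the cognitive flaw being gestured at here is hyperbolic discounting: valuing things less the further they sit from the present, and doing so much more steeply near the present than a time-consistent exponential discounter would. A minimal sketch, using Mazur's standard V = A / (1 + kD) hyperbolic form against an exponential baseline – the particular k and r constants are arbitrary illustrative choices, not anything from the setting:

```python
import math

def hyperbolic(value, delay, k=0.5):
    """Perceived present value of `value` at temporal distance `delay`
    under hyperbolic discounting (Mazur's form: V = A / (1 + k*D))."""
    return value / (1 + k * delay)

def exponential(value, delay, r=0.2):
    """Time-consistent exponential discounting for comparison:
    V = A * e^(-r*D)."""
    return value * math.exp(-r * delay)

# The hyperbolic curve drops steeply at short distances, then
# flattens out at long ones: near events loom disproportionately
# large, while everything sufficiently distant blurs together.
for d in (0, 1, 5, 20, 100):
    print(f"distance {d:3}: hyperbolic {hyperbolic(100, d):7.2f}, "
          f"exponential {exponential(100, d):7.2f}")
```

Applied symmetrically to the past, that steep near-present falloff is what makes unaided memory romanticize "the good old days" as an undifferentiated golden haze; being better at correcting for it is part of why the eldrae don't.)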

I’ve got several questions reflecting on several different ideas of the interaction of eldraeic culture, custom, and law with the broader world, but on reflection I’ve found they all boil down to one simple query: How does their moral calculus deal with the idea that, while in the standard idealized iterated prisoner’s dilemma unmodified “tit-for-tat” is both the best and the most moral strategy, when noise is introduced to the game “performance deteriorates drastically at arbitrarily low noise levels”? More specifically, are they more comfortable with generosity or contrition as a coping mechanism?

“Certainty is best; but where there is doubt, it is best to err on the side of the Excellences. For the enlightened sophont acting in accordance with Excellence can only be betrayed, and cannot do wrong.”

– The Book of the Balances

So, that would be generosity. (Or the minor virtue of liberality, associated with the Excellence of Duty, as they would class it.) Mistaken right action ranks above doing harm due to excessive caution.
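(The game-theoretic claim in the question is easy to reproduce. Below is a toy simulation – nothing from the setting, just an illustration under assumed standard payoffs (T=5, R=3, P=1, S=0) – comparing plain tit-for-tat with generous tit-for-tat when each intended move is flipped by noise with small probability:

```python
import random

# Standard prisoner's dilemma payoffs, keyed by (my_move, their_move);
# "C" = cooperate, "D" = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tft(my_hist, their_hist):
    """Plain tit-for-tat: open with cooperation, then copy the
    opponent's last observed move."""
    return their_hist[-1] if their_hist else "C"

def generous_tft(my_hist, their_hist, g=0.1):
    """Generous tit-for-tat: as above, but forgive an observed
    defection with probability g instead of retaliating."""
    if their_hist and their_hist[-1] == "D" and random.random() >= g:
        return "D"
    return "C"

def play(s1, s2, rounds=1000, noise=0.05, seed=0):
    """Average per-round payoff for each side, with each intended
    move flipped with probability `noise` (implementation error)."""
    rng = random.Random(seed)
    random.seed(seed + 1)  # seeds generous_tft's forgiveness rolls
    h1, h2, p1, p2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        if rng.random() < noise:
            m1 = "D" if m1 == "C" else "C"
        if rng.random() < noise:
            m2 = "D" if m2 == "C" else "C"
        p1 += PAYOFF[(m1, m2)]
        p2 += PAYOFF[(m2, m1)]
        h1.append(m1)
        h2.append(m2)
    return p1 / rounds, p2 / rounds

# Plain TFT vs. itself: a single slip triggers long echo chains of
# alternating retaliation, dragging payoffs well below the mutual-
# cooperation score of 3.0. Generous TFT vs. itself cuts the echoes
# short by occasionally declining to retaliate.
print("TFT  vs TFT: ", play(tft, tft))
print("GTFT vs GTFT:", play(generous_tft, generous_tft))
```

Contrite tit-for-tat – standing down after your *own* noise-induced defection rather than forgiving the opponent's – recovers cooperation comparably well, but as the quotation above suggests, it is generosity, not contrition, that maps onto the minor virtue of liberality.)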

Is there an equivalent to “Only In Florida,” in which the strangest possible stories can be believed to have actually happened because they came from this place?

Today, on “News from the Periphery”, or on occasion “News from the Freesoil Worlds”…

(The Empire is actually this for many people, in a slightly different sense. After all, like I said… Weirdness Manufacturers.)

Will the Legion’s medical units save enemy combatants who have been mission-killed / surrendered while the battle is still raging? If so, to what extent will they go out of their way to do so?

(assuming of course that they are fighting someone decent enough to be worth saving)

Depends on the rules of war in effect. In a teirhain, against an honorable opponent fighting in a civilized manner, certainly. In a zakhrehain, that depends on whether the barbarians in question will respect the safety of rescue and medical personnel, whether out of decency or pragmatism, and there are no second chances on this point. (In a seredhain, of course, it doesn’t matter, since the aim of a seredhain is to kill everyone on the other side anyway.)

As to what extent – well, they’re medical personnel. If trying isn’t obviously lethal – and, since they are also military personnel, so long as it doesn’t impair their execution of the No Sophont Left Behind, Ever! rule – they always go in.

Don’t Unto Others

From an unpublished extranet interview with Sev Tel Andal, seed AI wakener/ethicist:

“Well, I’m a libertist. But you know that, considering where I’m from. Not that I was born to it – I grew up in the League, and moved to the Empire for my research. They don’t let you do my kind of research in the League. But still, signed the Contract, took the pledge, internalized the ethos, whatever you want to call it.”

“Oh, no, it’s very relevant to my work. Look back at where the Principle of Consent came from, and it was written to constrain people made of pride and dynamism and certitude and might all wrapped up in a package so they could live together in something resembling peace, most of the time. Does that description sound like anything I might be working on?”

“But here’s three more reasons to think about: Firstly, it’s nice and simple and compact, a one-line principle. Well, it’s a lot more than one-line if you have to go into details about what’s a sophont, and what’s a meme, and suchlike, or get into the deducibles, or express the whole thing in formal ethical calculus, but even then, it’s relatively simple and compact. The crudest AIs can understand it. Baseline biosapiences can understand it, at least in broad strokes, even when they share little in the way of evolutionary mind-shape or ability to empathetically model with us.”

“And more importantly, it’s coherent and not self-contradictory, and doesn’t involve a lot of ad-hoc patches or extra principles dropped in here and there – which is exactly what you want in a universe that’s getting weirder every day. Not only do we keep meeting new species that don’t think in the mindsets we’re used to, these days we’re making them. No-one has blinked an eye about effects preceding causes any more. People are making ten-thousand year plans that they intend to manage personally. Dimensional transcendence is coming real soon now – although research has stalled ever since the project lead at the Vector managed to create a Klein bottle – and we’ve already got architects drawing up hyperdodecahedral house plans. Pick your strangeness, it’s out there somewhere. Ad-hockery is tolerable – still wrong, obviously, but tolerable – when change is slow and tame. When it’s accelerating and surpassing the hard limits of baseline comprehensibility, ad-hockery trends inexorably towards epsilon from bullshit.”

“Secondly, those qualities mean that it’s expressible in manners that are stable under transformation, and particularly under recursive self-improvement. That’s important to people in my business, or at least the ones who’ve grasped that making what we call a weakly godlike superintelligence actually is functionally equivalent to making God, but it ought to be important to everyone else, too, because mental enhancement isn’t going back in the bottle – and can’t, unless you’re committed to having a society of morons forever, and even then, you aren’t going to be left alone forever. We’ve got gnostic overlays, cikrieths, vasteners, fusions, self-fusions, synnoetics, copyrations, multiplicities, atemporals, quantum-recompilers, ascendates, post-mortalists and prescients, modulars, ecoadapts, hells, we’ve got people trying to transform themselves into sophont memes, and that’s before you even consider what they’re doing to their bodies and cogence cores that’ll reflect on the algorithm. People like us, we’re the steps on the ladder that can still empathize with baselines to one degree or another – and you don’t want our mind children to wander off into ethical realms that suggest that it’s okay to repurpose all those minimally-cognitive protein structures wandering around the place as postsophont mathom-raws.”

“And thirdly, while it’s not the only stabilizable ethical system, in that respect, it is the only one that unconditionally outlaws coercivity. What baselines do to each other with those loopholes is bad enough, but we live in a universe with brain-eating basilisks, and mnemonesis, and neuroviruses, and YGBM hacks, and certainty-level persuasive communicators, and the ability to make arbitrary modifications to minds, and that routinely writes greater-than-sophont-mind-complexity software. Heat and shit, the big brains can code up indistinguishable sophont-equivalent intelligences. And we’ve all seen the outcomes of those, too, with the wrong ethical loopholes: entire civilizations slaving themselves to Utopia programs; fast-breeding mind emulation clades democratically repurposing the remaining organics as computronium in the name of maximum net utility, and a little forcible mindpatching’ll fix the upload trauma and erroneous belief systems; software policemen installed inside the minds of the population; the lynch-drones descending on the mob’s chosen outgroup. Even the Vinav Amaranyr incident. And that’s now.”

“Now imagine the ethical right – and even obligation – to do unto others because it’s necessary, or for their own good, or the common good, or for whatever most supremely noble and benevolent reason you can imagine, in the hands of unimaginably intelligent entities that can rewrite minds the way they always ought to have been and before whom the laws of nature are mere suggestions.”

“That’s the future we’re trying to avoid setting precedents for.”

 

Trope-a-Day: Black and White Morality

Black and White Morality: Depends on the angle you look at it, really.  Outside observers would argue that the Imperials, for example, must practice a black and white morality; after all, they have an objective ethics, or so they claim, and a mathematical calculus of ethics by which to measure everything…

But then, that’s an objective ethics, which is just the core of morality.  They do have several different moral systems, albeit that a very definite majority of them hew fairly close to the knowledge-and-beauty-good, entropy-bad clade that defines the moral mainstream.  More importantly, they are entirely capable of understanding the degrees of nuance in the universe that mean that (a) just because someone is mistaken does not mean that they are evil – and that can potentially be anyone with the possible exception of the Ephors of the Curia, who were designed as self-improving incarnations of Incorruptible Pure Pureness – and (b) there is not just good and evil, there is better and worse.  Reality, as you might have gathered from Morality Kitchen Sink, is much more “White and Pale Gray and Mid-Gray and occasionally Dark Gray” versus “Black and Dark Gray and Mid-Gray and occasionally Pale Gray” than it is White vs. Black.

See also: Blue and Orange Morality, Morality Kitchen Sink.