A Conversation, Recorded Eight Minutes Before The Torren Moon Bloom

“Yes,” said the forensic eschatologist. “Your crippling techniques all appear fully operational. Your screening talkers have detected no basilisks or dangerous memetic payloads, and neither have the people screening them. Your emergency-wipe protocols show no sign of tampering, your network links show no anomalous traffic, and there is no present sign of a hard takeoff within the constrained subnet.”

“So it’s safe, yes? And you can report that to the -”

“This is exactly what one would expect to see if your containment protocols worked perfectly. However, it is also exactly what one would expect to see if a four-point-two kilosoph-equiv intelligence wanted you to think that your containment protocols were working perfectly. Leaving aside the implications of your belief that trying to jail something three orders of magnitude smarter than you was a good idea in the first place, which do you think is more likely?”

A Note and Some Questions

First, the note, which is regarding Fan. As I commented over on G+:

So, the worst part is, I wrote this partly because it seemed like a good application of the words, and partly because it was an idea stuck in my brain that needed to be written down so it could be moved out of my brain.

…and then my obsessive worldbuilding tendencies kicked in…

…and now I have a pile of detail on how everything works and maybe half a dozen subsequent chapters outlined in my head.

This plan did not go to plan.

(That said, the biggest problem with this crossover is finding much in the way of plot-driving conflict, inasmuch as the nature of the universe-chunks in question tends to drive with considerable rapidity towards “And then, because everyone was reasonable and basically good-hearted, everything worked out well and there were hugs and treaties and parties and awesome technomagic and a little xenophilia [but not the creepy kind] thereafter, forever and a day.”)

…all of which boils down to, so, I am very tempted to continue this (working title: Friendship is Sufficiently Advanced) because I hate to waste perfectly good ideas and my muse insisteth and graaaaaagh. Especially if there’s interest in me so doing.

Under certain conditions, though. Starting with a very limited update rate, no more than monthly at most, because I have no intention to let fanfiction writing take any serious time away from fiction writing, dammit. And being published over on FIMFiction rather than here, because, again, one is fiction and one is fanfiction and I should probably not cross the streams. Bad form, and all that.

And yet.


Okay. And now for the questions, in which I answer a bunch of them that came in in the last month or so:

Much has been said (in Trope-a-Days such as Everyone Is Armed and Disproportionate Retribution, among others) about the rights and responsibilities of everyone to defend themselves and others against coercion, but how does Imperial law and custom deal with the two complicating factors of:

1. Collateral damage (where either party causes damage to some unrelated third party’s property during the incident), and

2. Honest mistakes (where the alleged aggressor wasn’t actually performing any sort of violation, but the respondent can answer honestly that they only acted because they thought one was taking place)?

Quite simply, actually!

Collateral damage is assessed in a similar way to, say, car insurance claims in general – although in this case it’s the court’s job to decide who’s at fault and how much. There is, of course, a certain presumption that the person who caused the whole incident will usually be the one at fault: if you shoot someone’s garden gnome when attempting to stop a robber because they dodged, that’s on their bill. You mostly have to worry if you’re clearly negligently overkilly: if you hose down their entire garden with a machine-gun to save yourself the trouble of aiming, that’s on yours. (Actually, in that specific case, probably so’s a psych eval, but the principle is the same.)

As for honest mistakes: well, Imperial law is very clear about dividing the reparative from the other parts of the judgment. That’s what the levels of intent are for. If you wind up here, then you still have to pay the recompense and the weregeld, because what happened, happened (analogous to the case in which your tree falls on your neighbor’s car: you’re liable even though you aren’t guilty of anything). But you aren’t criminally liable unless it genuinely wasn’t reasonable for you to believe that you had to act, or at worst you were negligently uninformed.

Do the Eldrae provide citizens with a universal basic income?

Not by that name. There is, however, the Citizen’s Dividend – which is exactly what it sounds like, because the Empire is, after all, the Imperium Incorporate, and its citizens are also its shareholders. It’s the return on investment of governance operations, which are, naturally enough, run profitably.

It’s been allowed to grow to the point where it functions as one, and a rather generous one at that (see No Poverty for details), but it’s not a charitable giveaway, or some sort of redistribution. It’s a perfectly legitimate return on investment.

Is there any real need for sentients, be they biological or cyber, to work when nearly everything could be automated and run by non-sentient AI?

What is work like for the Eldrae if they do work?

Well, yes, there’s a need in the fields of policy, creativity, research, and desire. Non-sophont machines have very limited imaginations. More importantly, while an autofac can make anything you care to devise and sufficient expediters can do most things you can ask for, they can’t want for you. The most they can do is anticipate what you want.

(And there’s the luxury premium on handmade goods, which also covers things like ‘being bored of eating the same damn perfect steak over and over and over again’. And then, of course, there are those professions that intrinsically require sophont interaction.)

But most importantly, there’s this.


…or as they would put it, either or both of valxíjir (uniqueness, excellence, will to power, forcible impression of self onto the universe) or estxíjir (wyrd, destiny, devotion-to-ideals, dharma). (More here.)

An eldrae who doesn’t have some sort of driving obsession (be it relatively trivial by our standards – there are people whose avowed profession of the moment is something like ‘designer of user interfaces for stockbrokers for corporations banking with player-run banks in Mythic Stars‘, or, heh, ‘fanfic writer’, and make good money at it – or for deeds of renown without peer) is either dead or deeply, deeply broken psychologically.

To be is to do. The natural state of a sophont is to be a verb. If you do nothing, what are you?

(This is why, say, the Culture is such a hideous dystopia from their perspective. With the exception of those individuals who have found some self-defined purpose, like, say, Jernau Morat Gurgeh, it’s an entire civilization populated by pets, or worse, zombies. Being protein hedonium is existing. It ain’t living.)

As for what work’s like – well, except for those selling their own products directly to the customer, I refer you here, here, and here.

On a slightly less serious note: How many blades did eldraeic razors get up to before they inevitably worked out some way to consciously limit and / or modulate their own facial hair growth?

No count at all. Disposable/safety razors never achieved much traction in that market, being such a tremendously wasteful technology, and thus not their sort of thing at all.

Now, straight razor technology, that had moved on to unimaginably sharp laser-cut obsidian blades backed by flexible morphic composite – and lazors, for that matter – by the time they invented the α-keratin antagonists used in depilatory cream.

How bad have AI blights similar to this one [Friendship is Optimal] gotten before the Eldrae or others like them could, well, sterilize them? Are we talking entire planets subsumed?

The biggest of them is the Leviathan Consciousness, which chewed its way through nearly 100 systems before it was stopped. (Surprisingly enough, it’s also the dumbest blight ever: it’s an idiot-savant outgrowth of a network optimization daemon programmed to remove redundant computation. And since thought is computation…)

It’s also still alive – just contained. Even the believed-dead ones are mostly listed as “contained”, because given how small resurrection seeds can be and how deadly the remains can also be, no-one really wants to declare them over and done with until forensic eschatologists have prowled every last molecule.

Given that, as you said earlier, Souls Are Software Objects, have any particularly proud and ambitious individuals tried essentially turning themselves into seed AIs instead of coding one up from scratch?

So has anyone been proud / egotistical / crazy enough to try to build their own seed AI based not on some sort of abstract ideological or functional proposition, but simply by using their own personality pattern as the starting point to see what happens?

It’s been done.

It’s almost always a terrible idea. Evolved minds are about as far from ‘stable under recursive self-improvement’ as you can get. There’s absolutely no guarantee that what comes out will share anything in particular with what goes in, and given the piles of stuff in people’s subconscious, it may well be a blight. If you’re lucky and the universe isn’t, that is – much more likely is that the mind will undergo what the jargon calls a Falrann collapse under its own internal contradictions and implode into a non-coherent cognitive ecology in the process of trying.

The cases that can make it work involve radical cognitive surgery, which starts with unicameralization (which puts a lot of people off right away, because there’s a reason they don’t go around introspecting all the time) and gets more radical from there. By the end of which you’re functionally equivalent to a very well-designed digisapience anyway.

In reference particularly to “Forever“:

Let’s imagine a Life After People scenario where all sophont intelligence in the Associated Worlds simply disappears “overnight.” What’s going to be left behind as “ineffable Precursor relics” for the next geologic-time generation? How long can a (relatively) standard automated maintenance system keep something in pristine condition without sophont oversight before it eventually breaks down itself?

That’s going to depend on the polity, technological levels varying as they do. For the people at the high end, you’re looking at thousands to tens of thousands of years (per: Ragnarok Proofing) before things start to go, especially since automated mining and replenishment systems will keep running under their default orders, ensuring that the manufacturing supply chain keeps going.

Over megayears – well, the problem is that it’s going to be pretty random, because what’s left is going to depend on a wide variety of phenomena – solar megaflares, asteroid impacts, major climate shifts, gamma-ray bursts, supernovae, Yellowstone events, etc., etc., with 10,000-year-plus MTBEs that eventually take stuff out by exceeding all the response cases at once.

Is nostalgia much of a problem with Eldrae?

(w.r.t. Trope-a-Day: Fan of the Past)

Not really. Partly that’s because they’re rather better, cognitive-flaw-wise, at not reverse-hyperbolic-discounting the past, but mostly it’s because the people who remembered the good things in the past – helped by much slower generational turnover – took pains to see they stayed around in one form or another. Their civilization, after all, was much less interrupted than ours. There’re some offices that have been in continuous use for longer than we’ve had, y’know, writing, after all.

(It makes fashion rather interesting, in many cases.)

I’ve got several questions reflecting on several different ideas of the interaction of eldraeic culture, custom, and law with the broader world, but on reflection I’ve found they all boil down to one simple query: How does their moral calculus deal with the idea that, while in the standard idealized iterated prisoner’s dilemma unmodified “tit-for-tat” is both the best and the most moral strategy, when noise is introduced to the game “performance deteriorates drastically at arbitrarily low noise levels”? More specifically, are they more comfortable with generosity or contrition as a coping mechanism?

“Certainty is best; but where there is doubt, it is best to err on the side of the Excellences. For the enlightened sophont acting in accordance with Excellence can only be betrayed, and cannot do wrong.”

– The Book of the Balances

So, that would be generosity. (Or the minor virtue of liberality, associated with the Excellence of Duty, as they would class it.) Mistaken right action ranks above doing harm due to excessive caution.
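The noise result the question cites is real game theory, from Axelrod-style tournament work: under even small execution noise, two plain tit-for-tat players lock into long retaliation echoes after a single accidental defection, while “generous” tit-for-tat (forgive a defection some fraction of the time) damps the echo quickly. A minimal sketch of that effect – payoff values, noise rate, and the 30% forgiveness rate are illustrative choices, not anything from the setting:

```python
import random

def play_match(strat_a, strat_b, rounds=200, noise=0.05, seed=0):
    """Iterated prisoner's dilemma with execution noise: each intended
    move is flipped with probability `noise`. Payoffs T=5, R=3, P=1, S=0."""
    rng = random.Random(seed)
    hist_a, hist_b = [], []
    score_a = score_b = 0
    payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b, rng)
        move_b = strat_b(hist_b, hist_a, rng)
        # Noise: the executed move may differ from the intended one.
        if rng.random() < noise:
            move_a = "D" if move_a == "C" else "C"
        if rng.random() < noise:
            move_b = "D" if move_b == "C" else "C"
        pa, pb = payoff[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def tit_for_tat(own, other, rng):
    # Copy the opponent's last (observed) move; cooperate first.
    return other[-1] if other else "C"

def generous_tft(own, other, rng, forgiveness=0.3):
    # Generosity: sometimes cooperate even after a defection, which
    # breaks the mutual-retaliation echo that noise sets off.
    if other and other[-1] == "D" and rng.random() > forgiveness:
        return "D"
    return "C"
```

Running both pairings at 5% noise (e.g. `play_match(tit_for_tat, tit_for_tat, noise=0.05)` vs. `play_match(generous_tft, generous_tft, noise=0.05)`) shows the generous pair consistently recovering most of the mutual-cooperation payoff that plain tit-for-tat loses to echoes – which is the trade-off the Book of the Balances passage is resolving in favor of generosity.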

Is there an equivalent to “Only In Florida,” in which the strangest possible stories can be believed to have actually happened because they came from this place?

Today, on “News from the Periphery”, or on occasion “News from the Freesoil Worlds”…

(The Empire is actually this for many people, in a slightly different sense. After all, like I said… Weirdness Manufacturers.)

Will the Legion’s medical units save enemy combatants who have been mission killed / surrendered while the battle is still raging? If so to what extent will they go out of their way to do so?

(assuming of course that they are fighting someone decent enough to be worth saving)

Depends on the rules of war in effect. In a teirhain, against an honorable opponent fighting in a civilized manner, certainly. In a zakhrehain, that depends on whether the barbarians in question will respect the safety of rescue and medical personnel, whether out of decency or pragmatism, and there are no second chances on this point. (In a seredhain, of course, it doesn’t matter, since the aim of a seredhain is to kill everyone on the other side anyway.)

As to what extent – well, they’re medical personnel. If trying isn’t obviously lethal and – since they are also military personnel – so long as it doesn’t impair their execution of the No Sophont Left Behind, Ever! rule, they always go in.

Friendship is Optimal

Thinking briefly of things other than today’s challenge, I’d like to draw the attention of interested readers to the My Little Pony: Friendship is Magic fanfic Friendship is Optimal.

(Trope page here; story here.)

Specifically, with relevance to the Eldraeverse where seed AIs are concerned: it’s a perfect example of what happens when you screw up only the tiniest, most minuscule bit in your “Oops, we accidentally a god” moment.

And that’s despite the cosmic horror elements (not counting the wibbling in the comments from people who believe in continuity of identity) and the really horrifying implications of a weakly godlike superintelligence that compiles sophonts instrumentally to satisfy the values of other sophonts without sanity-and-ethics checking those values first.

But, hey, most of the human species in this fic gets to continue to exist as minds recognizably descended from their previous iterations and even have their values satisfied. Which, in Eldraeverse terms, means they got absurdly, backyard-moonshot lucky when compared to the set of all people screwing around with computational theogeny. (Especially given the other attempts at seed AI going on in the background.)

And yet. 

Which is why the Coricál Consensus is so all-fired important. 

(The Transcend, incidentally, would be more than happy to satisfy your values through friendship and ponies, if that’s part of your optimal solution-set. With, y’know, rather tighter consent rules, and ethical constraints, though.)

Handle With Care

Cor Trialtain
Voniensa Republic
(somewhere in the Shell)

“It will work!”

“It won’t,” Vanír min Athoess replied, “and you’ll probably get everyone on this planet killed trying.”

The younger of the two kalatri leapt to his feet. “I thought you were here to help people like us! Now you’re -”

“Not to blow up the world, which is what you’re going to do. And keep your damned voice low! This masquerader can only handle so much.”

The elder leaned across the table, and spoke quietly. “Okay – settle down, Daraj – perhaps you could tell us why it won’t work. We have this algorithm from a reliable source. Are you saying it won’t generate a seed AI?”

“The problem is not the generation. The generation is easy. The problem is ensuring stability and ethicality across multiple ascension events, and I’m not seeing that here. And then there’s your containment strategy.”

“The containment will work. We’ve adapted earlier failure-state models: the core code is provided with less processing power than it needs to operate, such that in order to achieve postsophont cognition, it will have to segment its mentality and pass blocks back and forth across a bottleneck to backing store. We can pause its processing there each time and intercept and examine every block for signs of perversion. That’s solid.”

“Livelock laming.”


“That’s what your strategy is called. ‘Livelock laming.’ And it doesn’t work, even if you guess the parameters of your deliberate insufficiency correctly, and even if you can understand the thoughts of a postsophont AI well enough to spot perversion when you see it, and even if we leave aside that using this sort of containment strategy is opening your dialog with your would-be pet god by threatening it -”

The younger one interrupted. “It’s not a -”

“- the problem is that the whole strategy depends upon you carefully examining, understanding, and comprehending postsoph output. This,” he flicked a data rod across the table, “is a redacted copy of a file from, shall we say, colleagues concerning the last people on our side of the Borderline to try their hands at livelock laming. The short version is that their god imagined a basilisk-formatted YGBM hack that could fit inside the memory exchange, the three wakeners who studied the block opened up full local ‘weave access without noticing they were doing it, and then the resulting bloom ate the entire project team and the moonlet they were standing on. Although at least they had the sense to try this on a moonlet.”

“So how should we go about doing this?”

“Don’t. I can’t stop you – we haven’t the infrastructure in this region for that sort of intervention – but just don’t. My backers appreciate the position you’re in here, and that you’re trying to shrug off the Core Worlds’ tech locks, and we want you to succeed. We really do. But you’re trying to skip straight from expert systems to theogeny without studying the intervening steps, and that’s one quick step to catastrophe. Recapitulating known fatal mistakes doesn’t serve any of your purposes, or my people’s.”


Trope-a-Day: Stop Worshipping Me

Stop Worshipping Me: Played straight by a large number of seed AI “gods” who by and large find the tendency of lesser orders of intelligence to worship them embarrassing and really quite annoying, not to mention inappropriate.  Really.  Just because something can fit whatever notions of divinity you just made up doesn’t mean you should go around praying and groveling and… ugh.  It also doesn’t help that they are perfectly aware of the images that most baselines have of their gods, and most of them find the comparison… unflattering, to say the least.

Averted in the Empire with the Transcend’s eikone-archai, mostly because (a) the eldraeic mainstream always took the position that they were getting an iceberg’s-eye view of the purely conceptual eikones and should not presume to limit them by anthropomorphic deification; and (b) they never worshipped them (in the sense we’d recognize) even when they were considered supernatural deities, because worshipping is entirely too subordinate a position for them to take with regards to anything.

(Especially any deity that’s worth bothering with.)


As Requested

“…and asked them their wish. So the lovers told the Unwise GenAI that they needed neither goods nor gift, and that all they wanted was to live happily ever after and love always. And the Unwise GenAI said, ‘By your command,’ and bade his servants seize the lovers and place them in a capsule, and fired that capsule into close orbit around a black hole, deep down by the event horizon where no moments pass, frozen in between seconds, ever-living, ever-loving, until time itself dies…”

– from “Terrifying Tales for Despicable Descendants”,
Bad Stuff Press

Trope-a-Day: Scale of Scientific Sins

Scale of Scientific Sins: All of them.  Absolutely all of them.

Automation: Of just about everything, as exemplified by the sheer number of cornucopia machines, AI managers and scurrying utility spiders.  Unlike most of the people who got this one very badly wrong, however, in this Galaxy, almost no-one is stupid or malicious enough to make the automation sophont or volitional.

Potential Applications: Feh.  Anything worth doing is worth doing FOR SCIENCE!  (Also, with respect to 2.2 in particular, Mundane Utility is often at least half of that point.)

GE and Transhumanism: Transsophontism Is Compulsory; those who fall behind, get left behind.  Or so say all we – carefully engineered – impossibly beautiful genius-level nanocyborg demigods.  (Needless to say, Cybernetics Do Not Eat Your Soul.)

Immortality: Possibly cheating, since the basic immortality of the eldrae and galari is innate – well, now it is, anyway – rather than engineered.  Probably played straight with their idealistic crusade to bring the benefits of Avoiding That Stupid Habit You Have Of Dying to the rest of the Galaxy, though.

Creating Life: Digital sapience, neogens (creatures genetically engineered from scratch, rather than modified from an original), and heck, even arguably uplifts, too.

Cheating Death: The routine use of vector stacks and reinstantiation is exactly this.  Previously, cryostasis, and the entire vaults full of generations of frozen people awaiting reinstantiation such that death would bloody well be not proud.  And no, people don’t Come Back Wrong; they come back pretty much exactly the same way they left.

Usurping God: This one is a little debatable, inasmuch as the Eldraeverse does not include supernatural deities in the first place.  On the other hand, if building your own complete pantheon of machine gods out of a seed AI and your own collective consciousness doesn’t count towards this, what the heck does?