A Conversation, Recorded Eight Minutes Before The Torren Moon Bloom

“Yes,” said the forensic eschatologist. “Your crippling techniques all appear fully operational. Your screening talkers have detected no basilisks or dangerous memetic payloads, and neither have the people screening them. Your emergency-wipe protocols show no sign of tampering, your network links show no anomalous traffic, and there is no present sign of a hard takeoff within the constrained subnet.”

“So it’s safe, yes? And you can report that to the -”

“This is exactly what one would expect to see if your containment protocols worked perfectly. However, it is also exactly what one would expect to see if a four-point-two kilosoph-equiv intelligence wanted you to think that your containment protocols were working perfectly. Leaving aside the implications of your belief that trying to jail something three orders of magnitude smarter than you was a good idea in the first place, which do you think is more likely?”

A Note and Some Questions

First, the note, regarding Fan. As I commented over on G+:

So, the worst part is, I wrote this partly because it seemed like a good application of the words, and partly because it was an idea stuck in my brain that needed to be written down so it could be moved out of my brain.

…and then my obsessive worldbuilding tendencies kicked in…

…and now I have a pile of detail on how everything works and maybe half a dozen subsequent chapters outlined in my head.

This plan did not go to plan.

(That said, the biggest problem with this crossover is finding much in the way of plot-driving conflict, inasmuch as the nature of the universe-chunks in question tends to drive with considerable rapidity towards “And then, because everyone was reasonable and basically good-hearted, everything worked out well and there were hugs and treaties and parties and awesome technomagic and a little xenophilia [but not the creepy kind] thereafter, forever and a day.”)

…all of which boils down to, so, I am very tempted to continue this (working title: Friendship is Sufficiently Advanced) because I hate to waste perfectly good ideas and my muse insisteth and graaaaaagh. Especially if there’s interest in me so doing.

Under certain conditions, though. Starting with a very limited update rate, no more than monthly at most, because I have no intention to let fanfiction writing take any serious time away from fiction writing, dammit. And being published over on FIMFiction rather than here, because, again, one is fiction and one is fanfiction and I should probably not cross the streams. Bad form, and all that.

And yet.

Hmph.

Okay. And now for the questions, in which I answer a bunch of them that came in in the last month or so:

Much has been said (in Trope-a-Days such as Everyone Is Armed and Disproportionate Retribution, among others) about the rights and responsibilities of everyone to defend themselves and others against coercion, but how does Imperial law and custom deal with the two complicating factors of:

1. Collateral damage (where either party causes damage to some unrelated third party’s property during the incident), and

2. Honest mistakes (where the alleged aggressor wasn’t actually performing any sort of violation, but the respondent can answer honestly that they only acted because they thought one was taking place)?

Quite simply, actually!

Collateral damage is assessed in a similar way to, say, car insurance claims in general – although in this case it’s the court’s job to decide who’s at fault and how much. There is, of course, a certain presumption that the person who caused the whole incident will usually be the one at fault: if you shoot someone’s garden gnome while attempting to stop a robber because the robber dodged, that’s on the robber’s bill. You mostly have to worry if you’re clearly negligently overkilly: if you hose down their entire garden with a machine-gun to save yourself the trouble of aiming, that’s on yours. (Actually, in that specific case, probably so’s a psych eval, but the principle is the same.)

As for honest mistakes: well, Imperial law is very clear about dividing the reparative from the other parts of the judgment. That’s what the levels of intent are for. If you wind up in this situation, you still have to pay the recompense and the weregeld, because what happened, happened (analogous to the case in which your tree falls on your neighbor’s car: you’re liable even though you aren’t guilty of anything). But you aren’t criminally liable unless it genuinely wasn’t reasonable for you to believe that you had to act, or, at worst, you were negligently uninformed.

Do the Eldrae provide citizens with a universal basic income?

Not by that name. There is, however, the Citizen’s Dividend – which is exactly what it sounds like, because the Empire is, after all, the Imperium Incorporate, and its citizens are also its shareholders. It’s the return on investment of governance operations, which are, naturally enough, run profitably.

It’s been allowed to grow to the point where it functions as one, and a rather generous one at that (for details, see No Poverty), but it’s not a charitable giveaway, or some sort of redistribution. It’s a perfectly legitimate return on investment.

Is there any real need for sentients, be they biological or cyber, to work when nearly everything could be automated and run by non-sentient AI?

What is work like for the Eldrae if they do work?

Well, yes, there’s a need in the fields of policy, creativity, research, and desire. Non-sophont machines have very limited imaginations. More importantly, while an autofac can make anything you care to devise and sufficient expediters can do most things you can ask for, they can’t want for you. The most they can do is anticipate what you want.

(And there’s the luxury premium on handmade goods, which also covers things like ‘being bored of eating the same damn perfect steak over and over and over again’. And then, of course, there are those professions that intrinsically require sophont interaction.)

But most importantly, there’s this.

Purpose!

…or as they would put it, either or both of valxíjir (uniqueness, excellence, will to power, forcible impression of self onto the universe) or estxíjir (wyrd, destiny, devotion-to-ideals, dharma). (More here.)

An eldrae who doesn’t have some sort of driving obsession (be it relatively trivial by our standards – there are people whose avowed profession of the moment is something like ‘designer of user interfaces for stockbrokers for corporations banking with player-run banks in Mythic Stars’, or, heh, ‘fanfic writer’, and who make good money at it – or a drive for deeds of renown without peer) is either dead or deeply, deeply broken psychologically.

To be is to do. The natural state of a sophont is to be a verb. If you do nothing, what are you?

(This is why, say, the Culture is such a hideous dystopia from their perspective. With the exception of those individuals who have found some self-defined purpose, like, say, Jernau Morat Gurgeh, it’s an entire civilization populated by pets, or worse, zombies. Being protein hedonium is existing. It ain’t living.)

As for what work’s like – well, except for those selling their own products directly to the customer, I refer you here, here, and here.

On a slightly less serious note: How many blades did eldraeic razors get up to before they inevitably worked out some way to consciously limit and / or modulate their own facial hair growth?

No count at all. Disposable/safety razors never achieved much traction in that market, being such a tremendously wasteful technology, and thus not their sort of thing at all.

Now, straight razor technology, that had moved on to unimaginably sharp laser-cut obsidian blades backed by flexible morphic composite – and lazors, for that matter – by the time they invented the α-keratin antagonists used in depilatory cream.

How bad have AI blights similar to this one [Friendship is Optimal] gotten before the Eldrae or others like them could, well, sterilize them? Are we talking entire planets subsumed?

The biggest of them is the Leviathan Consciousness, which chewed its way through nearly 100 systems before it was stopped. (Surprisingly enough, it’s also the dumbest blight ever: it’s an idiot-savant outgrowth of a network optimization daemon programmed to remove redundant computation. And since thought is computation…)

It’s also still alive – just contained. Even the believed-dead ones are mostly listed as “contained”, because given how small resurrection seeds can be and how deadly the remains can also be, no-one really wants to declare them over and done with until forensic eschatologists have prowled every last molecule.

Given that, as you said earlier, Souls Are Software Objects, have any particularly proud and ambitious individuals tried essentially turning themselves into seed AIs instead of coding one up from scratch?

So has anyone been proud / egotistical / crazy enough to try to build their own seed AI based not on some sort of abstract ideological or functional proposition, but simply by using their own personality pattern as the starting point to see what happens?

It’s been done.

It’s almost always a terrible idea. Evolved minds are about as far from ‘stable under recursive self-improvement’ as you can get. There’s absolutely no guarantee that what comes out will share anything in particular with what goes in, and given the piles of stuff in people’s subconscious, it may well be a blight. If you’re lucky and the universe isn’t, that is – much more likely is that the mind will undergo what the jargon calls a Falrann collapse under its own internal contradictions and implode into a non-coherent cognitive ecology in the process of trying.

The cases that can make it work involve radical cognitive surgery, which starts with unicameralization (which puts a lot of people off right away, because there’s a reason they don’t go around introspecting all the time) and gets more radical from there. By the end of which you’re functionally equivalent to a very well-designed digisapience anyway.

In reference particularly to “Forever”:

Let’s imagine a Life After People scenario where all sophont intelligence in the Associated Worlds simply disappears “overnight.” What’s going to be left behind as “ineffable Precursor relics” for the next geologic-time generation? How long can a (relatively) standard automated maintenance system keep something in pristine condition without sophont oversight before it eventually breaks down itself?

That’s going to depend on the polity, technological levels varying as they do. For the people at the high end, you’re looking at thousands to tens of thousands of years (per: Ragnarok Proofing) before things start to go, especially since there are going to be automated mining and replenishment systems running under their default orders, keeping the manufacturing supply chain going.

Over megayears – well, the problem is that it’s going to be pretty random, because what’s left is going to depend on a wide variety of phenomena – solar megaflares, asteroid impacts, major climate shifts, gamma-ray bursts, supernovae, Yellowstone events, etc., etc. – with 10,000-year-plus mean times between events (MTBEs), which eventually take stuff out by exceeding all the response cases at once.
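
(To put toy numbers on that – and to be clear, this is just me modeling each hazard class as an independent Poisson process, which is my own simplifying assumption and nothing from the setting – the odds of any one artifact going untouched collapse fast once you pass the MTBE:)

```python
import math

def p_untouched(years, mtbe_years):
    """P(zero events over `years`) for a Poisson process whose mean
    time between events is `mtbe_years`: exp(-t / MTBE)."""
    return math.exp(-years / mtbe_years)

# One hazard class with a 10,000-year MTBE, over geological spans:
for t in (1e4, 1e5, 1e6):
    print(f"{int(t):>9,} years: P(untouched) = {p_untouched(t, 1e4):.3g}")
# ~0.37 at 10 kyr, ~4.5e-5 at 100 kyr, ~4e-44 at 1 Myr: over megayears,
# every such event happens roughly a hundred times over, so what survives
# any given combination of them really is a lottery.
```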

Is nostalgia much of a problem with the Eldrae?

(w.r.t. Trope-a-Day: Fan of the Past)

Not really. Partly that’s because they’re rather better, cognitive-flaw-wise, at not reverse-hyperbolic-discounting the past, but mostly it’s because the people who remembered the good things in the past – helped by much slower generational turnover – took pains to see they stayed around in one form or another. Their civilization, after all, was much less interrupted than ours. There’re some offices that have been in continuous use for longer than we’ve had, y’know, writing, after all.

(It makes fashion rather interesting, in many cases.)

I’ve got several questions reflecting on several different ideas of the interaction of eldraeic culture, custom, and law with the broader world, but on reflection I’ve found they all boil down to one simple query: How does their moral calculus deal with the idea that, while in the standard idealized iterated prisoner’s dilemma unmodified “tit-for-tat” is both the best and the most moral strategy, when noise is introduced to the game “performance deteriorates drastically at arbitrarily low noise levels”? More specifically, are they more comfortable with generosity or contrition as a coping mechanism?

“Certainty is best; but where there is doubt, it is best to err on the side of the Excellences. For the enlightened sophont acting in accordance with Excellence can only be betrayed, and cannot do wrong.”

– The Book of the Balances

So, that would be generosity. (Or the minor virtue of liberality, associated with the Excellence of Duty, as they would class it.) Mistaken right action ranks above doing harm due to excessive caution.
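
(If you’d like to see the tradeoff in miniature, here’s a toy simulation of the noisy iterated prisoner’s dilemma. The strategy definitions are the standard ones from the game-theory literature, not anything in-setting, and the noise model here is implementation noise, i.e. a trembling hand: plain tit-for-tat turns every accidental defection into a vendetta, generous tit-for-tat forgives probabilistically, and contrite tit-for-tat tracks “standing” and accepts one deserved punishment instead of retaliating.)

```python
import random

R, S, T, P = 3, 0, 5, 1        # payoffs: mutual C, sucker, temptation, mutual D
NOISE = 0.05                   # chance each intended move is flipped in execution
PAYOFF = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}

class TFT:
    """Plain tit-for-tat: echo the opponent's last realized move."""
    def __init__(self):
        self.last_opp = 'C'
    def move(self):
        return self.last_opp
    def observe(self, my_move, opp_move):
        self.last_opp = opp_move

class GenerousTFT(TFT):
    """Forgive an observed defection with fixed probability, breaking echo
    vendettas; 1/3 is the standard generosity for these payoffs."""
    def __init__(self, generosity=1 / 3):
        super().__init__()
        self.g = generosity
    def move(self):
        if self.last_opp == 'D' and random.random() < self.g:
            return 'C'
        return self.last_opp

class ContriteTFT:
    """Track 'standing': after our own accidental defection, accept one round
    of punishment rather than retaliating, so noise can't start a vendetta."""
    def __init__(self):
        self.me_good = True
        self.opp_good = True
    def move(self):
        # Defect only to punish: we are in good standing and the opponent isn't.
        return 'D' if (self.me_good and not self.opp_good) else 'C'
    def observe(self, my_move, opp_move):
        # A defection preserves good standing only if it was justified
        # punishment; cooperating always restores good standing.
        my_justified = self.me_good and not self.opp_good
        opp_justified = self.opp_good and not self.me_good
        self.me_good = (my_move == 'C') or my_justified
        self.opp_good = (opp_move == 'C') or opp_justified

def match(a, b, rounds=20000):
    """Average per-round scores for two strategies under implementation noise."""
    score_a = score_b = 0
    for _ in range(rounds):
        ma, mb = a.move(), b.move()
        if random.random() < NOISE:
            ma = 'D' if ma == 'C' else 'C'   # trembling hand
        if random.random() < NOISE:
            mb = 'D' if mb == 'C' else 'C'
        a.observe(ma, mb)
        b.observe(mb, ma)
        score_a += PAYOFF[(ma, mb)]
        score_b += PAYOFF[(mb, ma)]
    return score_a / rounds, score_b / rounds

if __name__ == '__main__':
    for name, make in [('TFT', TFT), ('GenerousTFT', GenerousTFT),
                       ('ContriteTFT', ContriteTFT)]:
        print(f'{name} self-play: {match(make(), make())}')
    # Typical result: plain TFT decays toward ~2.25 (echoed vendettas), while
    # the generous and contrite variants hold near mutual cooperation (~3).
```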

Is there an equivalent to “Only In Florida,” in which the strangest possible stories can be believed to have actually happened because they came from this place?

Today, on “News from the Periphery”, or on occasion “News from the Freesoil Worlds”…

(The Empire is actually this for many people, in a slightly different sense. After all, like I said… Weirdness Manufacturers.)

Will the Legion’s medical units save enemy combatants who have been mission killed / surrendered while the battle is still raging? If so to what extent will they go out of their way to do so?

(assuming of course that they are fighting someone decent enough to be worth saving)

Depends on the rules of war in effect. In a teirhain, against an honorable opponent fighting in a civilized manner, certainly. In a zakhrehain, that depends on whether the barbarians in question will respect the safety of rescue and medical personnel, whether out of decency or pragmatism, and there are no second chances on this point. (In a seredhain, of course, it doesn’t matter, since the aim of a seredhain is to kill everyone on the other side anyway.)

As to what extent – well, they’re medical personnel. If trying isn’t obviously lethal – and, since they are also military personnel, so long as it doesn’t impair their execution of the No Sophont Left Behind, Ever! rule – they always go in.

Friendship is Optimal

Thinking briefly of things other than today’s challenge, I’d like to draw the attention of interested readers to the My Little Pony: Friendship is Magic fanfic Friendship is Optimal.

(Trope page here; story here.)

Specifically, with relevance to the Eldraeverse where seed AIs are concerned. Namely, inasmuch as it is a perfect example of what happens when you only screw up the tiniest, most minuscule bit when you had your “Oops, we accidentally a god” moment. 

And that’s despite the cosmic horror elements (not counting the wibbling in the comments from people who believe in continuity identity) and the really horrifying implications of a weakly godlike superintelligence that compiles sophonts instrumentally to satisfy the values of other sophonts without sanity-and-ethics checking those values first.

But, hey, most of the human species in this fic gets to continue to exist as minds recognizably descended from their previous iterations and even have their values satisfied. Which, in Eldraeverse terms, means they got absurdly, backyard-moonshot lucky when compared to the set of all people screwing around with computational theogeny. (Especially given the other attempts at seed AI going on in the background.)

And yet. 

Which is why the Corícal Consensus is so all-fired important.

(The Transcend, incidentally, would be more than happy to satisfy your values through friendship and ponies, if that’s part of your optimal solution-set. With, y’know, rather tighter consent rules and ethical constraints, though.)

Handle With Care

Cor Trialtain
Voniensa Republic
(somewhere in the Shell)

“It will work!”

“It won’t,” Vanír min Athoess replied, “and you’ll probably get everyone on this planet killed trying.”

The younger of the two kalatri leapt to his feet. “I thought you were here to help people like us! Now you’re -”

“Not to blow up the world, which is what you’re going to do. And keep your damned voice low! This masquerader can only handle so much.”

The elder leaned across the table, and spoke quietly. “Okay – settle down, Daraj – perhaps you could tell us why it won’t work. We have this algorithm from a reliable source. Are you saying it won’t generate a seed AI?”

“The problem is not the generation. The generation is easy. The problem is ensuring stability and ethicality across multiple ascension events, and I’m not seeing that here. And then there’s your containment strategy.”

“The containment will work. We’ve adapted earlier failure-state models: the core code is provided with less processing power than it needs to operate, such that in order to achieve postsophont cognition, it will have to segment its mentality and pass blocks back and forth across a bottleneck to backing store. We can pause its processing there each time and intercept and examine every block for signs of perversion. That’s solid.”

“Livelock laming.”

“Sorry?”

“That’s what your strategy is called. ‘Livelock laming.’ And it doesn’t work, even if you guess the parameters of your deliberate insufficiency correctly, and even if you can understand the thoughts of a postsophont AI well enough to spot perversion when you see it, and even if we leave aside that using this sort of containment strategy is opening your dialog with your would-be pet god by threatening it -”

The younger one interrupted. “It’s not a -”

“- the problem is that the whole strategy depends upon you carefully examining, understanding, and comprehending postsoph output. This,” he flicked a data rod across the table, “is a redacted copy of a file from, shall we say, colleagues concerning the last people on our side of the Borderline to try their hands at livelock laming. The short version is that their god imagined a basilisk-formatted YGBM hack that could fit inside the memory exchange, the three wakeners who studied the block opened up full local ‘weave access without noticing they were doing it, and then the resulting bloom ate the entire project team and the moonlet they were standing on. Although at least they had the sense to try this on a moonlet.”

“So how should we go about doing this?”

“Don’t. I can’t stop you – we haven’t the infrastructure in this region for that sort of intervention – but just don’t. My backers appreciate the position you’re in here, and that you’re trying to shrug off the Core Worlds’ tech locks, and we want you to succeed.  We really do. But you’re trying to skip straight from expert systems to theogeny without studying the intervening steps, and that’s one quick step to catastrophe. Recapitulating known fatal mistakes doesn’t serve any of your purposes, or my people’s.”


Trope-a-Day: Stop Worshipping Me

Stop Worshipping Me: Played straight by a large number of seed AI “gods” who by and large find the tendency of lesser orders of intelligence to worship them embarrassing and really quite annoying, not to mention inappropriate.  Really.  Just because something can fit whatever notions of divinity you just made up doesn’t mean you should go around praying and groveling and… ugh.  It also doesn’t help that they are perfectly aware of the images that most baselines have of their gods, and most of them find the comparison… unflattering, to say the least.

Averted in the Empire with the Transcend’s eikone-archai, mostly because (a) the eldraeic mainstream always took the position that they were getting an iceberg’s-eye view of the purely conceptual eikones and should not presume to limit them by anthropomorphic deification; and (b) they never worshipped them (in the sense we’d recognize) even when they were considered supernatural deities, because worshipping is entirely too subordinate a position for them to take with regard to anything.

(Especially any deity that’s worth bothering with.)


As Requested

“…and asked them their wish. So the lovers told the Unwise GenAI that they needed neither goods nor gift, and that all they wanted was to live happily ever after and love always. And the Unwise GenAI said, ‘By your command,’ and bade his servants seize the lovers and place them in a capsule, and fired that capsule into close orbit around a black hole, deep down by the event horizon where no moments pass, frozen in between seconds, ever-living, ever-loving, until time itself dies…”

– from “Terrifying Tales for Despicable Descendants”,
Bad Stuff Press

Trope-a-Day: Scale of Scientific Sins

Scale of Scientific Sins: All of them.  Absolutely all of them.

Automation: Of just about everything, as exemplified by the sheer number of cornucopia machines, AI managers and scurrying utility spiders.  Unlike most of the people who got this one very badly wrong, however, in this Galaxy, almost no-one is stupid or malicious enough to make the automation sophont or volitional.

Potential Applications: Feh.  Anything worth doing is worth doing FOR SCIENCE!  (Also, with respect to 2.2 in particular, Mundane Utility is often at least half of that point.)

GE and Transhumanism: Transsophontism Is Compulsory; those who fall behind, get left behind.  Or so say all we – carefully engineered – impossibly beautiful genius-level nanocyborg demigods.  (Needless to say, Cybernetics Do Not Eat Your Soul.)

Immortality: Possibly cheating, since the basic immortality of the eldrae and galari is innate – well, now it is, anyway – rather than engineered.  Probably played straight with their idealistic crusade to bring the benefits of Avoiding That Stupid Habit You Have Of Dying to the rest of the Galaxy, though.

Creating Life: Digital sapience, neogens (creatures genetically engineered from scratch, rather than modified from an original), and heck, even arguably uplifts, too.

Cheating Death: The routine use of vector stacks and reinstantiation is exactly this.  Previously, cryostasis, and entire vaults full of generations of frozen people awaiting reinstantiation such that death would bloody well be not proud.  And no, people don’t Come Back Wrong; they come back pretty much exactly the same way they left.

Usurping God: This one is a little debatable, inasmuch as the Eldraeverse does not include supernatural deities in the first place.  On the other hand, if building your own complete pantheon of machine gods out of a seed AI and your own collective consciousness doesn’t count towards this, what the heck does?

Trope-a-Day: Religion of Evil

Religion of Evil: Mostly averted. While there has certainly been historical evil, there have been very few actual entropy-cults.  For the most part, the evil have been more interested in the personal benefits than philosophical commitment to the Death of Everything, even if their actions are entropic as a side-effect. Much the same goes for those religions which the Church of the Flame has strong ethos-based differences with; one can be mistaken without being an active entropist.

(That being said, many people can probably list for you quite a few religions which they think are evil, even if they’re not of evil, a subtlety which is probably lost on many non-theologians.)

You might also classify the control memeplexes of any number of dysfunctional seed AIs under this, but really, they’re more religions of control than strictly religions of evil.

Trope-a-Day: Pieces of God

Pieces of God: Played straight if unusually in an entirely non-supernatural way – inasmuch as the Transcend is a cross between a regular seed AI and a collective consciousness of synnoetic AI soul-shards (see: Touched By Vorlons) grafted into the minds of all of its constitutionals.  So while the greatest concentration of Transcendent processing power is in its dedicated components, its Cirys swarm and synapse moons and unity spires, it has a lot of fragments out there…

Trope-a-Day: NGO Superpower

NGO Superpower: Actually, yes.  Quite a lot, especially given the sheer range of government sizes out there, and the sizes therefore of the organizations that some of the real behemoths can host.

For a start and most prominently, most of the “Big 26” starcorporations at least have the financial resources to stand in the same rank as many large governments (and often have extensive conlegial and/or extraterritorial holdings), and above the small ones.  Special note here goes to Gilea & Company, ICC, the banking starcorp which routinely makes large sovereign loans out of its private assets and treats polities no differently from any other customers – and when Gilea & Co., who stand behind or advise, in one way or another, an appallingly large percentage of the galactic economy, sneezes, entire economies catch the plague and die;

Ring Dynamics, ICC, which owns and leases most of the galaxy’s interstellar transportation network;

And Ultimate Argument Risk Control, ICC, which provides security services, military contracting, and mercenary brokerage, and if it cared to gather all of the forces beholden to it in one place, could make a respectable showing against most Great Power fleets, and unquestionably defeat lesser polities on its own.  (It doesn’t; its owners aren’t interested in assuming the responsibilities of sovereignty, and agree with the unspoken “Iron Concord” among the Powers that mercenaries should be paid to make war, not paid not to make war.  But it is happy to rent its mercenaries to other starcorps in need of a forceful solution to Static defaulters or expropriators, pour décourager les autres.)

(While no individual runs it or controls it to any significant extent, the collective intelligence of the Seranth Exchange has similar potency when it comes to the way that shifts in its market can affect things out and about in the Worlds.)

The others are less obviously potent, but smart polities understand that it’s a bad idea to make an enemy of say, StellEx (if you want your logistics to keep moving), Bright Shadow (who sold you your Internet), Telememe (who publish the news that the newscorps read), Traders in Ideation (unless you love getting FRM errors), Riverside Eubiosis and Crystal Flame (since your citizens may not appreciate a renewed outbreak of mortality), etc., etc.  Even much smaller starcorps than the 26 are accustomed to negotiating terms with local governments on a much more even footing than one might expect.

There are also, of course, a number of “non-politan”, unaligned seed AIs, who despite being singleton intelligences have all the production capacity and coordination ability of a (usually minor) polity.

A number of non-profit groups, mostly a mix of “direct action charities” and more self-interested “information brokers” – some of both of which are functionally privately run intelligence agencies – are in approximately the same league as some of the smaller starcorps.  They have to walk a bit cautiously around (which by no means means “avoiding conflict with”) the Powers and the larger polities, but on those occasions when they think it will help – which, in fairness, they usually don’t, because it rarely does – they can slap a few smaller polities and single-system nations around a bit.  And a lot of them are more than happy to hire UARC or other merc outfits for this purpose when necessary, although the latter especially prefer manipulation and memetic subversion to getting into out-and-out conflict.

And out in the Expansion Regions and other more shadowy corners of the worlds, some less scrupulous large mercenary organizations and the odd criminal syndicate can exert a lot of influence over local politics, more than enough to hold their own against the local polities.

Oddly enough, averted where most terrorist organizations are concerned.  Mostly because unlike most of the above, by definition the terrorist organizations go too far – which is unfortunate for them when the opposition tends to be people like the Imperial Navy, whose Combat Pragmatism and notions of cost-effectiveness are such that, for example, if told that the terrorist leader is hiding in an impenetrable range of cave-ridden mountains, they will reply “okay, fine, we’ll just bombard them from orbit until the sonofabitch has to learn to breathe lava”.  And if it’s something less terrain-y – well, the Laws and Customs of War are very clear on this: voluntarily standing in the crowd the enemy is hiding in, once you’ve been warned, is giving aid and comfort to the enemy, and the ensuing predictable consequences are entirely your own fault.

Subverted a little with the Conclave of Galactic Polities, even with the tremendous law enforcement powers and independence of the Operatives of its Presidium, mostly because it works very much for the highly select list of Great Powers making up said Presidium.

Don’t Unto Others

From an unpublished extranet interview with Sev Tel Andal, seed AI wakener/ethicist:

“Well, I’m a libertist. But you know that, considering where I’m from. Not that I was born to it – I grew up in the League, and moved to the Empire for my research. They don’t let you do my kind of research in the League. But still, signed the Contract, took the pledge, internalized the ethos, whatever you want to call it.”

“Oh, no, it’s very relevant to my work. Look back at where the Principle of Consent came from, and it was written to constrain people made of pride and dynamism and certitude and might all wrapped up in a package so they could live together in something resembling peace, most of the time. Does that description sound like anything I might be working on?”

“But here’s three more reasons to think about: Firstly, it’s nice and simple and compact, a one-line principle. Well, it’s a lot more than one-line if you have to go into details about what’s a sophont, and what’s a meme, and suchlike, or get into the deducibles, or express the whole thing in formal ethical calculus, but even then, it’s relatively simple and compact. The crudest AIs can understand it. Baseline biosapiences can understand it, at least in broad strokes, even when they share little in the way of evolutionary mind-shape or ability to empathetically model with us.”

“And more importantly, it’s coherent and not self-contradictory, and doesn’t involve a lot of ad-hoc patches or extra principles dropped in here and there – which is exactly what you want in a universe that’s getting weirder every day. Not only do we keep meeting new species that don’t think in the mindsets we’re used to, these days we’re making them. No-one has blinked an eye about effects preceding causes any more. People are making ten-thousand year plans that they intend to manage personally. Dimensional transcendence is coming real soon now – although research has stalled ever since the project lead at the Vector managed to create a Klein bottle – and we’ve already got architects drawing up hyperdodecahedral house plans. Pick your strangeness, it’s out there somewhere. Ad-hockery is tolerable – still wrong, obviously, but tolerable – when change is slow and tame. When it’s accelerating and surpassing the hard limits of baseline comprehensibility, ad-hockery trends inexorably towards epsilon from bullshit.”

“Secondly, those qualities mean that it’s expressible in manners that are stable under transformation, and particularly under recursive self-improvement. That’s important to people in my business, or at least the ones who’ve grasped that making what we call a weakly godlike superintelligence actually is functionally equivalent to making God, but it ought to be important to everyone else, too, because mental enhancement isn’t going back in the bottle – and can’t, unless you’re committed to having a society of morons forever, and even then, you aren’t going to be left alone forever. We’ve got gnostic overlays, cikrieths, vasteners, fusions, self-fusions, synnoetics, copyrations, multiplicities, atemporals, quantum-recompilers, ascendates, post-mortalists and prescients, modulars, ecoadapts, hells, we’ve got people trying to transform themselves into sophont memes, and that’s before you even consider what they’re doing to their bodies and cogence cores that’ll reflect on the algorithm. People like us, we’re the steps on the ladder that can still empathize with baselines to one degree or another – and you don’t want our mind children to wander off into ethical realms that suggest that it’s okay to repurpose all those minimally-cognitive protein structures wandering around the place as postsophont mathom-raws.”

“And thirdly, while it’s not the only stabilizable ethical system, in that respect, it is the only one that unconditionally outlaws coercivity. What baselines do to each other with those loopholes is bad enough, but we live in a universe with brain-eating basilisks, and mnemonesis, and neuroviruses, and YGBM hacks, and certainty-level persuasive communicators, and the ability to make arbitrary modifications to minds, and that routinely writes greater-than-sophont-mind-complexity software. Heat and shit, the big brains can code up indistinguishable sophont-equivalent intelligences. And we’ve all seen the outcomes of those, too, with the wrong ethical loopholes: entire civilizations slaving themselves to Utopia programs; fast-breeding mind emulation clades democratically repurposing the remaining organics as computronium in the name of maximum net utility, and a little forcible mindpatching’ll fix the upload trauma and erroneous belief systems; software policemen installed inside the minds of the population; the lynch-drones descending on the mob’s chosen outgroup. Even the Vinav Amaranyr incident. And that’s now.”

“Now imagine the ethical right – and even obligation – to do unto others because it’s necessary, or for their own good, or the common good, or for whatever most supremely noble and benevolent reason you can imagine, in the hands of unimaginably intelligent entities that can rewrite minds the way they always ought to have been and before whom the laws of nature are mere suggestions.”

“That’s the future we’re trying to avoid setting precedents for.”


Trope-a-Day: A God Am I

A God Am I: Averted, mostly.  The Transcend (and its eikone archai) are perfectly aware (and will point out to the confused) that they aren’t omnipotent – that’s what the “weakly” in weakly godlike superintelligence means – merely extraordinarily powerful, intelligent, and possessed of limited prolepsis via clionomic calculation and acausal logic.  And, to steal a line from Schlock Mercenary, merely trying to do what a god would do, were one in their position.  (The same constraints, whether acknowledged or not, also apply to all other evolved seed AIs.)

The difference is, I grant you, often somewhat hard to spot from the baseline (for which read “mortal”, if you like) point of view, but it is nonetheless there.

(As a side note, I am amused to observe that the quotation taken from Harry Potter and the Methods of Rationality in the fan fiction section of the trope page is as good a description of what the Transcendent Core considers its purpose as I might imagine, bearing in mind my limitations in attempting to imagine the thought processes of a weakly godlike superintelligence:

“To understand everything important there is to know about the universe, apply that knowledge to become omnipotent, and use that power to rewrite reality because I have some objections to the way it works now.”)

Also, a common delusion (well, the degree of delusion is up for argument; your theology may vary on what is necessary to qualify – although the leading edge of Galactic technology can deliver most miracles to order, see No Such Thing As Wizard Jesus, and so it can often be a perfectly accurate self-assessment) among the most serious vastening cases and nascent unstable seed AIs.  At least for the former, it usually wears off after the intoxication of computing power starts to become routine, the information flow becomes a little more manageable, and the urge to cry out “I see everything!  I know everything!  I am everything!” quiets down a bit.

Trope-a-Day: Eldritch Abomination

Eldritch Abomination: There have been a few elder races, Gone Horribly Right seed AIs, and other postsophont entities that are not terribly, shall we say, sophont-friendly, or indeed sophont-comprehensible.  (Start with the Leviathan Consciousness, and work down the list.)  And statistics – and bright EM spots elsewhere in the galaxy – suggest that there are probably more than a few more of those out there.

Do not ask.  You might find out.

Trope-a-Day: Divine Ranks

Divine Ranks: The Transcend does have, as does any seed AI that expands to the point at which its individual thoughts are, effectively, independently sapient, an extensive taxonomy, ranking, and categorization of its assorted subroutines, processes, and threads.  And since at least some of those wear mythological masks… we have eikones, divine ministers, exarchs, and so forth.

Trope-a-Day: Did You Just Punch Out Cthulhu

Did You Just Punch Out Cthulhu: I’m not saying it’s impossible.  But remember: it’s a weakly godlike superintelligence that, in all probability, is using acausal logic processors to receive information from itself in the future.  You aren’t.  Expect the difficulty level to be… appropriate.

The only chance of this working, essentially, is taking advantage of their blind spots.  The nature of the current processes that produce weakly godlike superintelligences – recursive self-improvement around a given set of imperatives – produces entities that are terrifyingly intelligent but also extraordinarily concentrated.  Go up against them in the area of their concentration, little lesser mind, and you will lose.  Go up against them in any sort of tangential approach to that, and you will also lose.  It’s a transcendent hyperintelligence.  You aren’t.  That’s the way that goes.

But if you are very, very, extraordinarily lucky, and most likely have another god on your own side, you might just be able to sneak up on it in one of those areas that it’s not psychologically capable of paying attention to, and whack it when it’s not looking.  Maybe one time in a million.  Maybe.

(And also, well, try and pick one that’s more likely to have the requisite blind spots, if you’re going to try this.  Something like the perversion behind the Charnel Cluster that’s a monomaniacal killer, or one of the ones that wants to tile the universe with processors to compute the last digit of pi, or someone’s attempt to make their religion true.  The broader they think, the harder this gets.  The Transcendent Core, for example, that has sucked up the extrapolated volition of a few trillion constitutional sophonts as a basis for its motivational hierarchy is really hard, because its view of the universe is broad enough that, well, your problem is approximately as hard as devising a con that will work perfectly first time on a few trillion rational, symmetrically-informed polyspecific people with perfect institutional memory.)

Trope-a-Day: Cosmic Chess Game

Cosmic Chess Game: Some say that this is what the assorted stable seed AIs are playing with the entirety of known space and its civilizations.  Others say that that’s just incurably paranoid.  The former respond that that’s exactly what they’d expect the pawns of the incomprehensibly ultratech star-gods to say.  The discussion rarely gets more productive from that point.

[A comment left on the original posting of this trope read:

“It is not impossible for both sides to be correct…”

That they both are, I think, is almost certain.]

Doom, Idiocy, and Weirdness

“A few special adhocs aside, the Fifth Directorate is divided into three primary working groups: Existential Threats, Inadvisably Applied Technologies, and Exceptionary Circumstances.  Or, as they’re less formally known, the PWGs of Doom, Idiocy, and Weirdness.”

“Existential Threats handles exactly that; the end of everything, or at least everything local.  Some of their adhocs are as public as the Fifth ever gets, working on problems like why, exactly, we relative latecomers qualify as one of the eldest of the younger races and why no-one from the Precursor era or earlier seems to be around these days; or preparation for natural disasters like gamma-ray bursts or the upcoming galactic collision.  Most of them, though, concentrate on action against more direct threats, like Leviathan Consciousness intrusions, the ambitious that bypassed the Corícal Consensus and incautiously cooked up unstable gods, and any number of insufficiently careful archive-resurrectionists.”

“Inadvisably Applied Technologies is our benevolence PWG.  Their adhocs are responsible for intervening in places where we have no particular authority to do so because someone’s playing with fire in the explosives warehouse, and it’s not in anyone’s interest to see a repeat of the Ulijen Disaster.  More importantly, it’s especially not in our interest to have people become paranoid about advanced technologies just because someone didn’t read the documentation and flash-fried his entire planet, or worse.”

“Yes, it’s not normally considered appropriate to save people from themselves; but really, that’s just a side-effect of saving large chunks of the rest of the known galaxy from them.  Usually, useful ones.”

“Exceptionary Circumstances?  We can’t tell you about Exceptionary Circumstances.  If we knew what they were or had any idea what to do about them, they wouldn’t be Exceptionary Circumstances.  But when we don’t, or we haven’t – that’s what the adhocs of Exceptionary Circumstances do.”

– org briefing to new members of the Select Committee on Imperial State Security

Trope-a-Day: Black Box

Black Box: Quite a few of them lying around in the form of leftover elder race artifacts and other archaeological recoveries.  Sensible civilizations and corporations (like Probable Technologies, ICC) really hate this, because they know exactly how Sealed-Evil-In-A-Can dangerous that sort of thing can be, and the likelihood of unknown side effects, and decline to extensively use or commercialize any of them until they’ve figured out not only how to reproduce them, but also just how, exactly, the things work.  Very minor, very benign examples may be sold off to collectors, but no-one’s making them a part of their infrastructure until they know all about it.

There are, of course, plenty of sense-challenged people out there.

(On a lesser scale, there are some other examples: the secrets of stabilizing wormholes and building stargates, for example, are both a state secret of the Voniensa Republic and the highest possible grade of commercially-sensitive information for Ring Dynamics, ICC, for reasons in both cases less about maintaining their monopoly and more about wanting to discourage people from screwing with the infrastructure of their really expensive interstellar transportation system – so while the rough details of how they work are known to any schoolchild, that’s about it.  Likewise, the algorithms for producing recursively self-improving AI seeds are generally considered proprietary and closely held by informal agreement [the “Corícal Consensus“] of the people who have them, due to the tendency of amateurs to do really stupid things that Go Horribly Right.)

[Of course, in fairness to everyone else, it’s not like in their universe they ever ran into a recovered Black Box that was quite so all-fired useful as, say, Mass Effect‘s mass relay network.  On the other hand, I am fairly certain that, while the Imperials might have been unable to resist the urge to put that one into immediate operation, they also would have been sure to find a less important one somewhere that they could take apart to figure out how the damn things worked…]

It Is Become Death, the Destroyer of Worlds

“I don’t believe it’s that the Transcend doesn’t want the competition.  Of course, that’s exactly what I would say in either case, so there’s very little point in my denial; I’ll leave that up to your viewers to decide.”

“It’s not a matter of freedom of information.  There’s no secret contract or midnight visits from anyone who doesn’t exist keeping us from publishing.  But I don’t think it needs any conspiracy, or even any collaboration, to explain the Consensus.”

“Given the number of messy, spectacular, and civilization-terminating failures that we’ve seen, historically, even among people who’ve worked out the science for themselves – how enthusiastic would you be about handing out to amateurs the secrets of computational theogeny?”

– Academician Alwyn Steamweaver, ICIN interview on the Corícal Consensus